NASA Astrophysics Data System (ADS)
Ülker, Erkan; Turanboy, Alparslan
2009-07-01
The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, defined as the total volume of useful rough blocks extractable from a fixed rock volume relative to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and to predict the recovery rate of the excavated blocks. We have developed a new approach that considers only the fracture structure to maximise block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for the 3D model, a tree structure (TS) for individual investigation, and finally a genetic algorithm (GA) for optimising the obtained cuboid volume(s). We tested the new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).
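As a rough illustration of the GA step only — with a hypothetical fracture representation (planes given by a point and a normal) standing in for the paper's tree-structure input — one can evolve axis-aligned cuboids that avoid all fracture planes and maximise recovered volume:

```python
# Toy sketch, not the authors' implementation: a genetic algorithm searching
# for the largest axis-aligned cuboid that avoids planar fractures inside a
# unit rock volume. FRACTURES and all GA settings are illustrative.
import random
import numpy as np

FRACTURES = [(np.array([0.4, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),  # (point, normal)
             (np.array([0.0, 0.7, 0.0]), np.array([0.0, 1.0, 0.0]))]

def corners(c):
    x0, y0, z0, dx, dy, dz = c
    return np.array([[x0 + i * dx, y0 + j * dy, z0 + k * dz]
                     for i in (0, 1) for j in (0, 1) for k in (0, 1)])

def fitness(c):
    pts = corners(c)
    if pts.min() < 0 or pts.max() > 1:           # must stay inside the rock volume
        return 0.0
    for p, n in FRACTURES:                       # reject cuboids cut by a fracture
        s = np.sign((pts - p) @ n)
        if s.max() > 0 and s.min() < 0:
            return 0.0
    return c[3] * c[4] * c[5]                    # recovered block volume

def evolve(pop_size=100, gens=200):
    pop = [np.random.rand(6) * 0.5 for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            mask = np.random.rand(6) < 0.5                               # crossover
            child = np.where(mask, a, b) + np.random.normal(0, 0.02, 6)  # mutation
            children.append(np.clip(child, 0, 1))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("best cuboid", best, "volume", fitness(best))
```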
A discrete element modelling approach for block impacts on trees
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loads. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree, with a limit bending moment to account for tree overturning. The crown is taken into account as an additional mass distributed uniformly over the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem is an elastic frictional model. The DEM model was validated against laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter of 3 to 7 cm. The stems were clamped to a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was carried out to identify the input parameters controlling block kinematics after the impact: 20 input parameters were considered in the DEM simulation model, 12 related to the tree and 8 to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
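The block/stem contact law described above can be sketched as a linear elastic normal force with a Coulomb cap on the incremental shear force; the stiffnesses and friction coefficient below are placeholders, not the values calibrated against the beech-stem tests:

```python
# Minimal sketch of an elastic frictional contact law of the kind used for
# block/stem contacts; k_n, k_t and mu are illustrative placeholders.
import numpy as np

def contact_force(overlap, rel_vel_t, f_t, k_n=1e6, k_t=5e5, mu=0.4, dt=1e-5):
    """Return (normal force, updated tangential force) for one DEM time step."""
    if overlap <= 0.0:
        return 0.0, 0.0                      # bodies not in contact
    f_n = k_n * overlap                      # repulsive elastic normal force
    f_t += k_t * rel_vel_t * dt              # incremental tangential (shear) force
    f_max = mu * f_n                         # Coulomb friction cone
    f_t = float(np.clip(f_t, -f_max, f_max)) # sliding once the cone is reached
    return f_n, f_t
```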
NASA Astrophysics Data System (ADS)
Barriopedro, D.; García-Herrera, R.; Trigo, R. M.
2010-12-01
This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats are discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow featuring a blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable to General Circulation Models, where observational thresholds may be unsuitable due to the presence of model bias. Part II of this study deals with a specific implementation of this novel method to simulations of the ECHO-G global climate model.
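The hybrid index proposed here is more elaborate, but its absolute-flow ingredient — the reversal of the meridional height gradient — can be illustrated with a classic Tibaldi–Molteni-style test. The reference latitudes and the −10 m/deg threshold below are the conventional fixed choices, not the adaptive, data-set-specific thresholds the paper advocates:

```python
# Sketch of the absolute-flow reversal test (in the spirit of Tibaldi &
# Molteni, 1990). `z500` is a hypothetical (lat, lon) array of 500 hPa
# geopotential height; `lats` holds the grid latitudes in degrees.
import numpy as np

def blocked_longitudes(z500, lats, phi0=60.0, phis=40.0, phin=80.0):
    i0, i_s, i_n = (int(np.argmin(np.abs(lats - p))) for p in (phi0, phis, phin))
    ghgs = (z500[i0] - z500[i_s]) / (phi0 - phis)   # southern gradient (m/deg)
    ghgn = (z500[i_n] - z500[i0]) / (phin - phi0)   # northern gradient (m/deg)
    # blocked: easterlies to the south, strong westerlies to the north
    return (ghgs > 0.0) & (ghgn < -10.0)
```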
Expressivism, Relativism, and the Analytic Equivalence Test
Frápolli, Maria J.; Villanueva, Neftalí
2015-01-01
The purpose of this paper is to show that, pace Field (2009), MacFarlane's assessment relativism and expressivism should be sharply distinguished. We do so by arguing that relativism and expressivism exemplify two very different approaches to context-dependence. Relativism, on the one hand, shares with other contemporary approaches a bottom-up, building-block model, while expressivism is part of a different tradition, one that might include Lewis' epistemic contextualism and Frege's content individuation, with which it shares an organic model for dealing with context-dependence. The building-block model and the organic model, and thus relativism and expressivism, are set apart with the aid of a particular test: only the building-block model is compatible with the idea that there might be analytically equivalent, and yet different, propositions. PMID:26635690
NASA Astrophysics Data System (ADS)
Wang, Haibo; Swee Poo, Gee
2004-08-01
We study the provisioning of virtual private network (VPN) service over WDM optical networks, investigating the blocking performance of the hose model versus the pipe model. Two techniques are presented: an analytical queuing model and a discrete-event simulation. The queuing model is developed from the multirate reduced-load approximation technique; the simulation is done with the OPNET simulator. Several experimental situations were used. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance of the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance compared with the pipe model. By and large, the blocking probability of the hose model is lower than that of the pipe model by a few orders of magnitude, particularly in low-load regions. The flexibility of the hose model, which allows the resources on a link to be shared among all connections, accounts for its superior performance.
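As a hedged illustration of the approximation technique, here is the reduced-load fixed point in its simplest single-rate form; the paper's multirate version generalizes the same idea, and the loads and capacities below are arbitrary:

```python
# Sketch of a reduced-load (Erlang fixed-point) approximation for one path:
# Erlang-B blocking per link, with the load offered to each link thinned by
# the blocking on the other links of the path.
from math import prod

def erlang_b(rho, c):
    """Erlang-B blocking probability via the standard stable recursion."""
    b = 1.0
    for m in range(1, c + 1):
        b = rho * b / (m + rho * b)
    return b

def path_blocking(rho, capacities, iters=50):
    b = [0.0] * len(capacities)
    for _ in range(iters):                  # fixed-point iteration
        b = [erlang_b(rho * prod(1 - bj for j, bj in enumerate(b) if j != i), c)
             for i, c in enumerate(capacities)]
    return 1.0 - prod(1 - bi for bi in b)   # end-to-end blocking probability

print(path_blocking(rho=8.0, capacities=[10, 10]))  # e.g. one two-hop lightpath
```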
Bayesian block-diagonal variable selection and model averaging
Papaspiliopoulos, O.; Rossell, D.
2018-01-01
Summary: We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, making it most suitable in situations where predictors are organized in many moderately sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to efficiently explore the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501
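The key computational point — that block-diagonal structure makes subset search separable across blocks — can be illustrated with a simplified l0-penalized least-squares criterion in place of the paper's exact posterior integrals. With mutually orthogonal blocks, the residual sum of squares decomposes block by block, so a criterion of the form RSS + lam·(model size) is minimized block by block:

```python
# Sketch only: best-subset search that separates exactly across orthogonal
# blocks. The penalty `lam` is a stand-in for the paper's exact Bayesian
# scoring; X_blocks is a list of (n, p_b) arrays with orthogonal columns.
from itertools import combinations
import numpy as np

def best_subset_blockwise(X_blocks, y, lam=4.0):
    chosen = []
    for X in X_blocks:                       # cost sums 2^{p_b} over blocks
        p = X.shape[1]
        best_score, best_idx = 0.0, ()       # empty model as baseline
        for k in range(1, p + 1):
            for idx in combinations(range(p), k):
                Xs = X[:, idx]
                beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
                ssr = y @ Xs @ beta          # explained sum of squares
                score = ssr - lam * k        # separable l0-type criterion
                if score > best_score:
                    best_score, best_idx = score, idx
        chosen.append(best_idx)
    return chosen
```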
Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.
2015-01-01
We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild-type N17 and its sequence variants on the surface of polyQ tracts engender a patchy-colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
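The Boltzmann-inversion step named above can be sketched in a few lines; the radial distribution function below is a synthetic stand-in for one measured from the converged all-atom runs:

```python
# Minimal sketch of Boltzmann inversion: a coarse-grained pair potential is
# obtained from a radial distribution function g(r), U(r) = -kT ln g(r).
import numpy as np

kB_T = 0.593                                  # kcal/mol at ~298 K

r = np.linspace(3.0, 12.0, 50)                # site-site separation (Angstrom)
g_r = 0.8 * np.exp(-((r - 6.0) / 1.5) ** 2) + 0.6   # placeholder "measured" RDF

U = -kB_T * np.log(np.clip(g_r, 1e-8, None))  # invert the RDF
U -= U[-1]                                    # shift so the tail decays to zero
```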
Belke, Eva
Anders, Riès, van Maanen and Alario put forward evidence accumulation modelling of object naming times as an alternative to neural network models of lexical retrieval. The authors exemplify their approach using data from the blocked-cyclic naming paradigm, requiring speakers to repeatedly name small sets of related or unrelated objects. The effects observed with this paradigm are understood reasonably well within the tradition of neural network modelling. However, implemented neural network models do not specify interfaces for task-specific top-down influences and response strategies that are likely to play a role in the blocked-cyclic naming paradigm, distinguishing it from continuous, non-cyclic manipulations of the naming context. I argue that the evidence accumulation approach falls short on this account as well, as it does not specify the potential contribution of task-specific top-down processes and strategic facilitation effects to the response time distributions. Future endeavours to model or fit data from blocked-cyclic naming experiments should strive to do so by simultaneously considering data from continuous context manipulations.
Nagarajan, Ramanathan
2015-07-01
Micelles generated in water from most amphiphilic block copolymers are widely recognized to be non-equilibrium structures. Typically, the micelles are prepared by a kinetic process: first allowing molecular-scale dissolution of the block copolymer in a common solvent that dissolves both blocks, and then gradually replacing the common solvent with water to cause the hydrophobic blocks to aggregate and create the micelles. The non-equilibrium nature of the micelle originates from the fact that dynamic exchange between the block copolymer molecules in the micelle and the singly dispersed block copolymer molecules in water is suppressed, because of the glassy nature of the core-forming polymer block and/or its very large hydrophobicity. Although most amphiphilic block copolymers generate such non-equilibrium micelles, no theoretical approach to a priori predict the micelle characteristics currently exists. In this work, we propose a predictive approach for non-equilibrium micelles with glassy cores by applying the equilibrium theory of micelles in two steps. In the first step, we calculate the properties of micelles formed in the mixed solvent while true equilibrium prevails, until the micelle core becomes glassy. In the second step, we freeze the micelle aggregation number at this glassy state and calculate the corona dimension from the equilibrium theory of micelles. The condition at which the micelle core becomes glassy is independently determined from a statistical thermodynamic treatment of the diluent effect on the polymer glass transition temperature. The predictions based on this "non-equilibrium" model compare reasonably well with experimental data for the polystyrene-polyethylene oxide diblock copolymer, the most extensively studied system in the literature. In contrast, applying the equilibrium model to such a system significantly overpredicts the micelle core and corona dimensions and the aggregation number. The non-equilibrium model suggests ways to obtain different micelle sizes for the same block copolymer, through the choice of the common solvent and the mode of solvent substitution. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane
2018-04-01
The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions were considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The effect of the parameter ranges on meta-model accuracy was also investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a powerful tool to support the design of these structures. Furthermore, because it accommodates the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.
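A hedged sketch of the meta-modelling step, with a Gaussian-process surrogate standing in for the authors' specific meta-model and synthetic data standing in for the FE results:

```python
# Illustrative surrogate: learn (impact conditions -> residual block energy)
# from a training set that would, in the real study, come from FE runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 6))          # 6 normalized impact parameters
w = rng.uniform(0.5, 2.0, 6)                  # placeholder response structure
energy_left = np.maximum(0, X @ w - 2) + 0.05 * rng.normal(size=200)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
surrogate.fit(X, energy_left)                 # replaces thousands of FE runs
pred, std = surrogate.predict(rng.uniform(0, 1, (5, 6)), return_std=True)
```

Once trained, such a surrogate can be queried millions of times inside a trajectory simulator at negligible cost, which is what enables the coupling with quantitative hazard assessment mentioned above.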
A Novel DEM Approach to Simulate Block Propagation on Forested Slopes
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Dorren, Luuk; Berger, Frédéric
2018-03-01
In order to model rockfall on forested slopes, we developed a trajectory rockfall model based on the discrete element method (DEM). This model is able to take into account the complex mechanical processes at work during an impact (large deformations, complex contact conditions) and can explicitly simulate block/soil and block/tree contacts as well as contacts between neighbouring trees. In this paper, we describe the DEM model developed and use it to assess the protective effect of different types of forest, comparing it with a more classical rockfall simulation model. The results highlight that forests can significantly reduce rockfall hazard and that the spatial structure of coppice forests has to be taken into account in rockfall simulations in order to avoid overestimating the protective role of these forest structures. In addition, the protective role of the forests is mainly influenced by the basal area. Finally, the advantages and limitations of the DEM model are compared with classical rockfall modelling approaches.
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks can be thought to capture technical uncertainties; however, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) to solve the problem. The technique has two spaces: feasible drill-hole configurations, with minimization of interpolation variance, and drill-hole simulations, with maximization of interpolation variance. The two spaces interact iteratively to find a minmax solution. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.
NASA Astrophysics Data System (ADS)
Zhang, Yulong; Liu, Zaobao; Shi, Chong; Shao, Jianfu
2018-04-01
This study is devoted to three-dimensional modeling of small falling rocks in block impact analysis, from an energy perspective, using the particle flow method. The restitution coefficient of rockfall collision is introduced from the energy-dissipation mechanism to describe rockfall impact properties. Three-dimensional reconstruction of the falling block is conducted with the help of spherical harmonic functions, which have satisfactory mathematical properties such as orthogonality and rotation invariance. Numerical modeling of the block impact on the bedrock is analyzed with both a sphere-simplified model and the 3D reconstructed model. Comparison of the obtained results suggests that the 3D reconstructed model is advantageous in considering the combined effects of rockfall velocity and rotation during the collision process. The modeling is verified against results obtained from other experiments. In addition, the effects of rockfall morphology, surface characteristics, velocity, volume, collision damping and relative angle are investigated. A three-dimensional reconstruction module for falling blocks is to be developed and incorporated into rockfall simulation tools in order to extend the modeling results from the block scale to the slope scale.
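The spherical-harmonic reconstruction step can be sketched as a least-squares expansion of the block's surface radius function; the sampled radii below are placeholders for points digitized from a real block:

```python
# Sketch: expand r(theta, phi) in spherical harmonics up to degree L_MAX.
# Rotation-invariant shape descriptors then come from the per-degree norms
# of the fitted coefficients.
import numpy as np
from scipy.special import sph_harm

L_MAX = 6
theta = np.random.uniform(0, 2 * np.pi, 500)          # azimuth of sample points
phi = np.arccos(np.random.uniform(-1, 1, 500))        # polar angle (uniform on sphere)
radii = 1.0 + 0.1 * np.cos(3 * theta) * np.sin(phi)   # stand-in surface samples

# design matrix of harmonics up to degree L_MAX (real and imaginary parts)
cols = [sph_harm(m, l, theta, phi) for l in range(L_MAX + 1) for m in range(-l, l + 1)]
A = np.column_stack([c.real for c in cols] + [c.imag for c in cols])
coeffs, *_ = np.linalg.lstsq(A, radii, rcond=None)
```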
Nieuwveld, D; Mojica, V; Herrera, A E; Pomés, J; Prats, A; Sala-Blanch, X
2017-04-01
Ultrasound-guided infraclavicular block in the costoclavicular space, located between the clavicle and the first rib, reaches the secondary trunks where they are clustered together lateral to the axillary artery. This block is most often performed through a lateral approach, in which the coracoid process is an obstacle and the needle is directed towards the vessels and pleura. A medial approach, meaning from inside to outside, avoids these structures. Traditionally, the success of a block is assessed through motor or sensory responses, but a sympathetic fibre block can also be evaluated by measuring the changes in humeral artery blood flow, skin temperature and/or perfusion index. Our aim was to describe the medial approach to the ultrasound-guided costoclavicular block, evaluating its course by motor and sensory responses and by measurement of sympathetic changes. The technique was first assessed by administering 20 ml of contrast in a fresh cadaver model, evaluating the distribution with CT scan and sagittal sections of the anatomical specimen. Subsequently, in a clinical phase including 11 patients, we evaluated the establishment of motor, sensory and sympathetic blocks, with sympathetic changes reflected by humeral artery blood flow, skin temperature and distal perfusion index. In the anatomical model the block was conducted without difficulty, showing an adequate periclavicular distribution of the contrast on the CT scan and in sagittal sections, reaching the interscalene space as far as the secondary trunks. Successful blocks were observed in 91% of patients after 25 minutes. All the parameters reflecting sympathetic block increased significantly: humeral artery blood flow from 108±86 to 188±141 ml/min (P=.05), skin temperature from 32.1±2 to 32.8±9 °C (P=.03) and perfusion index from 4±3 to 9±5 (P=.003). The medial approach to the ultrasound-guided costoclavicular block is anatomically feasible, with high clinical effectiveness using 20 ml of 1.5% mepivacaine. The sympathetic block can be evaluated with all three parameters studied. Copyright © 2016 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.
Kulhánek, Tomáš; Ježek, Filip; Mateják, Marek; Šilar, Jan; Kofránek, Jiří
2015-08-01
This work describes our experience of teaching modeling and simulation to graduate students in the field of biomedical engineering. We emphasize the acausal, object-oriented modeling technique, and we have moved from teaching the block-oriented tool MATLAB Simulink to the acausal, object-oriented Modelica language, which can express the structure of the system rather than a process of computation. However, the block-oriented approach is also possible in Modelica, and students have a tendency to express the process of computation. Using exemplar acausal domains and approaches allows students to understand the modeled problems much more deeply. The causality of the computation is derived automatically by the simulation tool.
Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James
2014-01-01
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures ignoring temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, some alternative strategies must be considered. One approach is to divide data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues, by only conditioning on measurements, which are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of data available, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
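A schematic of the two-fold idea — each time block imputed conditioning only on its temporal neighbours — using scikit-learn's chained-equations imputer as a stand-in for the authors' implementation and assuming equal block widths:

```python
# Illustrative sketch only, not the two-fold FCS package itself. `blocks` is
# a hypothetical list of (n, p) arrays, one per time block, all of equal
# width p, with np.nan marking missing entries.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def twofold_impute(blocks):
    imputed = [b.copy() for b in blocks]
    for t in range(len(blocks)):
        lo, hi = max(0, t - 1), min(len(blocks), t + 2)
        window = np.hstack(imputed[lo:hi])      # conditioning set, local in time
        filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(window)
        p = blocks[t].shape[1]
        imputed[t] = filled[:, (t - lo) * p:(t - lo + 1) * p]
    return imputed
```

Restricting the conditioning set to adjacent blocks is what avoids the co-linearity and over-fitting that break the all-blocks-in-one-model approach as the number of time blocks grows.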
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
Karmakar, M K; Li, X; Kwok, W H; Ho, A M-H; Ngan Kee, W D
2012-01-01
Objectives The use of ultrasound to guide peripheral nerve blocks is now a well-established technique in regional anaesthesia. However, despite reports of ultrasound guided epidural access via the paramedian approach, there are limited data on the use of ultrasound for central neuraxial blocks, which may be due to a poor understanding of spinal sonoanatomy. The aim of this study was to define the sonoanatomy of the lumbar spine relevant for central neuraxial blocks via the paramedian approach. Methods The sonoanatomy of the lumbar spine relevant for central neuraxial blocks via the paramedian approach was defined using a “water-based spine phantom”, young volunteers and anatomical slices rendered from the Visible Human Project data set. Results The water-based spine phantom was a simple model to study the sonoanatomy of the osseous elements of the lumbar spine. Each osseous element of the lumbar spine, in the spine phantom, produced a “signature pattern” on the paramedian sagittal scans, which was comparable to its sonographic appearance in vivo. In the volunteers, despite the narrow acoustic window, the ultrasound visibility of the neuraxial structures at the L3/L4 and L4/L5 lumbar intervertebral spaces was good, and we were able to delineate the sonoanatomy relevant for ultrasound-guided central neuraxial blocks via the paramedian approach. Conclusion Using a simple water-based spine phantom, volunteer scans and anatomical slices from the Visible Human Project (cadaver) we have described the sonoanatomy relevant for ultrasound-guided central neuraxial blocks via the paramedian approach in the lumbar region. PMID:22010025
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
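A minimal sketch of fitting a trial-continuous psychometric function by maximum likelihood; the exponential threshold trajectory and all parameter names are illustrative assumptions, not the authors' exact parameterization:

```python
# Sketch: a logistic psychometric function whose threshold decays
# exponentially with trial number, fit to every trial at once instead of
# fitting separate functions to blocks of trials. Data are synthetic.
import numpy as np
from scipy.optimize import minimize

def p_correct(x, t, th0, th_inf, tau, slope, guess=0.5):
    tau, slope = abs(tau) + 1e-3, abs(slope) + 1e-3    # keep scales positive
    th_t = th_inf + (th0 - th_inf) * np.exp(-t / tau)  # trial-dependent threshold
    return guess + (1 - guess) / (1 + np.exp(-(x - th_t) / slope))

def neg_log_lik(params, x, t, correct):
    p = np.clip(p_correct(x, t, *params), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

rng = np.random.default_rng(1)
t = np.arange(1000)                                    # trial index
x = rng.uniform(-2.0, 2.0, 1000)                       # stimulus level per trial
correct = (rng.random(1000) < p_correct(x, t, 1.0, -0.5, 300.0, 0.4)).astype(float)

fit = minimize(neg_log_lik, x0=[0.5, 0.0, 200.0, 0.5],
               args=(x, t, correct), method="Nelder-Mead")
th0, th_inf, tau, slope = fit.x                        # one continuous learning curve
```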
Use of upscaled elevation and surface roughness data in two-dimensional surface water models
Hughes, J.D.; Decker, J.D.; Langevin, C.D.
2011-01-01
In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
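The mixed averaging can be sketched on a toy grid: block means supply the coarse-cell elevation, while face minima along shared interfaces preserve the sub-grid channels that connect coarse cells. The grid, the factor-of-4 coarsening, and the use of east-facing interfaces only are all illustrative; the paper's diffusive-wave solver is not reproduced here:

```python
# Toy sketch of cell-block vs. cell-face averaging of a fine elevation grid z.
import numpy as np

def upscale(z, f=4):
    nr, nc = z.shape[0] // f, z.shape[1] // f
    blocks = z[:nr * f, :nc * f].reshape(nr, f, nc, f)
    cell_mean = blocks.mean(axis=(1, 3))              # cell-block average
    # east-facing interfaces: minimum elevation of the fine column shared by
    # each pair of horizontally adjacent coarse cells (keeps small channels)
    east_face = z[:nr * f, f - 1:nc * f:f].reshape(nr, f, nc).min(axis=1)
    return cell_mean, east_face
```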
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
NASA Astrophysics Data System (ADS)
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show mesh-push outperforms pull scheduling in streaming delay, while achieving a comparable delivery ratio.
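The sender-side scheduling can be sketched as a min-cost flow on a tiny hypothetical swarm, with networkx's solver standing in for the paper's optimization; graph layout, capacities, and costs are all placeholders:

```python
# Sketch: match blocks to receivers subject to upload/download slot limits.
import networkx as nx

G = nx.DiGraph()
G.add_edge("src", "blockA", capacity=1, weight=0)     # sender's upload slots
G.add_edge("src", "blockB", capacity=1, weight=0)
for blk, peers in {"blockA": ["p1", "p2"], "blockB": ["p2", "p3"]}.items():
    for p in peers:                                   # peer still missing blk
        G.add_edge(blk, p, capacity=1, weight=1)
for p in ("p1", "p2", "p3"):
    G.add_edge(p, "sink", capacity=1, weight=0)       # receiver download slots

flow = nx.max_flow_min_cost(G, "src", "sink")
# flow[blk][p] == 1 means "push block blk to peer p in this round"
```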
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, by their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models (i.e., grid-based simulators), because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.
A spring-block analogy for the dynamics of stock indexes
NASA Astrophysics Data System (ADS)
Sándor, Bulcsú; Néda, Zoltán
2015-06-01
A spring-block chain placed on a running conveyor belt is considered for modeling stylized facts observed in the dynamics of stock indexes. Individual stocks are modeled by the blocks, while the stock-stock correlations are introduced via simple elastic forces acting in the springs. The dragging effect of the moving belt corresponds to the expected economic growth. The spring-block system produces collective behavior and avalanche-like phenomena, similar to the ones observed in stock markets. An artificial index is defined for the spring-block chain, and its dynamics is compared with the one measured for the Dow Jones Industrial Average. For certain parameter regions the model reproduces qualitatively well the dynamics of the logarithmic index, the logarithmic returns, the distribution of the logarithmic returns, the avalanche-size distribution and the distribution of the investment horizons. A noticeable success of the model is that it is able to account for the gain-loss asymmetry observed in the inverse statistics. Our approach has mainly a pedagogical value, bridging between a complex socio-economic phenomenon and a basic (mechanical) model in physics.
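In the pedagogical spirit of the paper, here is a minimal numerical sketch of the analogy; the parameter values are illustrative, not those calibrated to the Dow Jones data:

```python
# Sketch: blocks coupled by linear springs sit on a belt moving at v_belt;
# stick-slip friction produces the avalanche-like collective events.
import numpy as np

N, k, m = 50, 1.0, 1.0
v_belt, mu_s, mu_k, dt = 0.01, 0.5, 0.3, 1e-2
x = np.arange(N, dtype=float)        # positions; unit natural spring length
v = np.zeros(N)
stuck = np.ones(N, dtype=bool)
index = []                           # "artificial index" time series

for step in range(100_000):
    left = np.r_[0.0, x[1:] - x[:-1] - 1.0]    # elongation of spring to the left
    right = np.r_[x[1:] - x[:-1] - 1.0, 0.0]   # elongation of spring to the right
    f = k * (right - left)                     # net spring force on each block
    stuck &= np.abs(f) <= mu_s                 # static friction broken -> slip
    v[stuck] = v_belt                          # stuck blocks ride the belt
    x[stuck] += v_belt * dt
    sl = ~stuck
    v[sl] += (f[sl] - mu_k * np.sign(v[sl] - v_belt)) / m * dt
    x[sl] += v[sl] * dt
    restick = sl & (np.abs(v - v_belt) < 1e-4) & (np.abs(f) <= mu_s)
    stuck |= restick                           # kinetic -> static transition
    index.append(x.mean())                     # returns etc. derive from this
```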
The Hierarchical Database Decomposition Approach to Database Concurrency Control.
1984-12-01
… approach, we postulate a model of transaction behavior under two-phase locking as shown in Figure 39(a) and a model of that under multiversion … transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request …
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on the upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches, and it focuses on essential aspects rather than on the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of the four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale (the scale of pores), meso-scale (the scale of a laboratory sample), macro-scale (the scale of typical blocks in numerical models of groundwater flow), local-scale (the scale of an aquifer/aquitard) and regional-scale (the scale of series of aquifers and aquitards). Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures for upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models, and likewise for upscaling local-scale models to regional-scale models. The latest results in downscaling from block-scale to sample-scale are also briefly referred to.
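For the sample-scale to block-scale transition discussed above, the classical deterministic bounds on the equivalent conductivity can be sketched directly; the layer values are illustrative:

```python
# Sketch: for layered media the equivalent conductivity is the arithmetic
# mean for flow parallel to the layers and the harmonic mean for flow across
# them; these bound the true block-scale tensor.
import numpy as np

k_samples = np.array([1e-5, 3e-4, 5e-6, 8e-5])     # m/s, one value per layer
thickness = np.array([2.0, 1.0, 3.0, 1.5])         # m

w = thickness / thickness.sum()
k_parallel = np.sum(w * k_samples)                 # arithmetic mean (upper bound)
k_normal = 1.0 / np.sum(w / k_samples)             # harmonic mean (lower bound)
print(f"K_parallel = {k_parallel:.2e} m/s, K_normal = {k_normal:.2e} m/s")
```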
1990-02-01
… copies P1, …, Pn of a multiple module fp resolve nondeterminism (local or global) in an identical manner. 5. The copies P1, …, Pn are physically … recovery block. A recovery block consists of a conventional block (like in ALGOL or PL/I) which is provided with a means of error detection, called an … improved failures model for communicating processes. In Proceedings, NSF-SERC Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science
Witnauer, James; Rhodes, L Jack; Kysor, Sarah; Narasiwodeyar, Sanjay
2017-11-21
The correlation between blocking and within-compound memory is stronger when compound training occurs before elemental training (i.e., backward blocking) than when the phases are reversed (i.e., forward blocking; Melchers et al., 2004, 2006). This trial order effect is often interpreted as problematic for performance-focused models that assume a critical role for within-compound associations in both retrospective revaluation and traditional cue competition. The present manuscript revisits this issue using a computational modeling approach. The fit of sometimes competing retrieval (SOCR; Stout & Miller, 2007) was compared to the fit of an acquisition-focused model of retrospective revaluation and cue competition. These simulations reveal that SOCR explains this trial order effect in some situations based on its use of local error reduction. Published by Elsevier B.V.
Seismic slope-performance analysis: from hazard map to decision support system
Miles, Scott B.; Keefer, David K.; Ho, Carlton L.
1999-01-01
In response to the growing recognition among engineers and decision-makers of the regional effects of earthquake-induced landslides, this paper presents a general approach to seismic landslide zonation based on the popular Newmark sliding-block analogy for modeling coherent landslides. Four existing models based on the sliding-block analogy are compared. The comparison shows that the models forecast notably different levels of slope performance. Considering this discrepancy, along with the limitations of static maps as a decision tool, a spatial decision support system (SDSS) for seismic landslide analysis is proposed, which will support investigations over multiple scales for any number of earthquake scenarios and input conditions. Most importantly, the SDSS will allow the use of any seismic landslide analysis model and zonation approach. Developments associated with the SDSS will produce an object-oriented model for encapsulating spatial data, an object-oriented specification to allow construction of models using modular objects, and a direct-manipulation, dynamic user interface that adapts to the particular seismic landslide model configuration.
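The Newmark calculation underlying all four compared models can be sketched as follows, with a synthetic accelerogram standing in for a real strong-motion record:

```python
# Sketch of a rigid-block, downslope-only Newmark analysis: relative
# displacement accumulates whenever ground acceleration exceeds the
# critical (yield) acceleration a_c.
import numpy as np

def newmark_displacement(a_g, dt, a_c):
    """a_g: ground acceleration series (m/s^2); a_c: critical acceleration."""
    v, d = 0.0, 0.0
    for a in a_g:
        if v > 0.0 or a > a_c:          # block is sliding
            v += (a - a_c) * dt
            v = max(v, 0.0)             # sliding stops when rel. velocity hits zero
            d += v * dt
    return d

dt = 0.01
t = np.arange(0, 20, dt)
a_g = 3.0 * np.sin(2 * np.pi * t) * np.exp(-0.1 * t)   # toy accelerogram
print(f"Newmark displacement: {newmark_displacement(a_g, dt, a_c=1.0):.3f} m")
```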
Block Oriented Simulation System (BOSS)
NASA Technical Reports Server (NTRS)
Ratcliffe, Jaimie
1988-01-01
Computer simulation is assuming greater importance as a flexible and expedient approach to modeling system and subsystem behavior. Simulation has played a key role in the growth of complex, multiple access space communications such as those used by the space shuttle and the TRW-built Tracking and Data Relay Satellites (TDRS). A powerful new simulator for use in designing and modeling the communication system of NASA's planned Space Station is being developed. Progress to date on the Block (Diagram) Oriented Simulation System (BOSS) is described.
Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals
NASA Astrophysics Data System (ADS)
Pelot, N. A.; Behrend, C. E.; Grill, W. M.
2017-08-01
Objective. There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics’ vBloc® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. Approach. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically-realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. Main results. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation, rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed ‘re-excitation’, arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and greater change of threshold with distance as compared to the realistic cuff and nerve model. Significance. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.
Interfacial fluctuations of block copolymers: a coarse-grain molecular dynamics simulation study.
Srinivas, Goundla; Swope, William C; Pitera, Jed W
2007-12-13
The lamellar and cylindrical phases of block copolymers have a number of technological applications, particularly when they occur in supported thin films. One such application is block copolymer lithography, the use of these materials to subdivide or enhance submicrometer patterns defined by optical or electron beam methods. A key parameter of all lithographic methods is the line edge roughness (LER), because the electronic or optical activities of interest are sensitive to small pattern variations. While mean-field models provide a partial picture of the LER and interfacial width expected for the block interface in a diblock copolymer, these models lack chemical detail. To complement mean-field approaches, we have carried out coarse-grain molecular dynamics simulations on model poly(ethyleneoxide)-poly(ethylethylene) (PEO-PEE) lamellae, exploring the influence of chain length and hypothetical chemical modifications on the observed line edge roughness. As expected, our simulations show that increasing chi (the Flory-Huggins parameter) is the most direct route to decreased roughness, although the addition of strong specific interactions at the block interface can also produce smoother patterns.
Quasi-Steady Evolution of Hillslopes in Layered Landscapes: An Analytic Approach
NASA Astrophysics Data System (ADS)
Glade, R. C.; Anderson, R. S.
2018-01-01
Landscapes developed in layered sedimentary or igneous rocks are common on Earth, as well as on other planets. Features such as hogbacks, exposed dikes, escarpments, and mesas exhibit resistant rock layers adjoining more erodible rock in tilted, vertical, or horizontal orientations. Hillslopes developed in the erodible rock are typically characterized by steep, linear-to-concave slopes or "ramps" mantled with material derived from the resistant layers, often in the form of large blocks. Previous work on hogbacks has shown that feedbacks between weathering and transport of the blocks and underlying soft rock can create relief over time and lead to the development of concave-up slope profiles in the absence of rilling processes. Here we employ an analytic approach, informed by numerical modeling and field data, to describe the quasi-steady state behavior of such rocky hillslopes for the full spectrum of resistant layer dip angles. We begin with a simple geometric analysis that relates structural dip to erosion rates. We then explore the mechanisms by which our numerical model of hogback evolution self-organizes to meet these geometric expectations, including adjustment of soil depth, erosion rates, and block velocities along the ramp. Analytical solutions relate easily measurable field quantities such as ramp length, slope, block size, and resistant layer dip angle to local incision rate, block velocity, and block weathering rate. These equations provide a framework for exploring the evolution of layered landscapes and pinpoint the processes for which we require a more thorough understanding to predict their evolution over time.
Formal verification of a microcoded VIPER microprocessor using HOL
NASA Technical Reports Server (NTRS)
Levitt, Karl; Arora, Tejkumar; Leung, Tony; Kalvala, Sara; Schubert, E. Thomas; Windley, Philip; Heckman, Mark; Cohen, Gerald C.
1993-01-01
The Royal Signals and Radar Establishment (RSRE) and members of the Hardware Verification Group at Cambridge University conducted a joint effort to prove the correspondence between the electronic block model and the top level specification of Viper. Unfortunately, the proof became too complex and unmanageable within the given time and funding constraints, and is thus incomplete as of the date of this report. This report describes an independent attempt to use the HOL (Cambridge Higher Order Logic) mechanical verifier to verify Viper. Deriving from recent results in hardware verification research at UC Davis, the approach has been to redesign the electronic block model to make it microcoded and to structure the proof in a series of decreasingly abstract interpreter levels, the lowest being the electronic block level. The highest level is the RSRE Viper instruction set. Owing to the new approach and some results on the proof of generic interpreters as applied to simple microprocessors, this attempt required an effort approximately an order of magnitude less than the previous one.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
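The block-level predictor implied by this global/local decomposition is the standard plane-induced homography H = K (R − t nᵀ/d) K⁻¹: the intrinsics K, rotation R, and translation t are coded once per frame, while the plane normal n and depth d are coded per block. A sketch with placeholder numeric values:

```python
# Sketch of the plane-induced homography; all values are placeholders.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])                 # intrinsics (frame level)
R = np.eye(3)                                    # frame-level rotation
t = np.array([[0.02], [0.0], [0.005]])           # frame-level translation
n = np.array([[0.0], [0.0], [1.0]]); d = 5.0     # block-level plane parameters

H = K @ (R - t @ n.T / d) @ np.linalg.inv(K)     # per-block motion predictor

def warp(H, px):                                 # map a pixel into the reference frame
    q = H @ np.array([px[0], px[1], 1.0])
    return q[:2] / q[2]
```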
Cascade process modeling with mechanism-based hierarchical neural networks.
Cong, Qiumei; Yu, Wen; Chai, Tianyou
2010-02-01
A cascade process, such as a wastewater treatment plant, includes many nonlinear sub-systems and many variables. When the number of sub-systems is large, the input-output relation between the first block and the last block cannot represent the whole process. In this paper we use two techniques to overcome this problem. First, we propose a new neural model, hierarchical neural networks, to identify the cascade process; then we use a serial-structure mechanism model based on the physical equations to connect with the neural model. A stable learning algorithm and theoretical analysis are given. Finally, this method is used to model a wastewater treatment plant. Real operational data from the wastewater treatment plant are used to illustrate the modeling approach.
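A hedged sketch of the hierarchical idea — one small sub-model per block, with each block's prediction fed forward as an extra input to the next block, mimicking the serial structure of a cascade process. This uses scikit-learn's MLP as a stand-in; it is not the authors' exact network or learning algorithm, and the data are synthetic placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, n_blocks = 500, 3
local_u = [rng.normal(size=(n, 2)) for _ in range(n_blocks)]   # local inputs
y_blocks = [rng.normal(size=n) for _ in range(n_blocks)]       # measured outputs

models, prev = [], np.zeros((n, 1))
for u, y in zip(local_u, y_blocks):
    X = np.hstack([u, prev])                     # local inputs + upstream state
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
    models.append(m)
    prev = m.predict(X).reshape(-1, 1)           # cascade to the next block
```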
NASA Astrophysics Data System (ADS)
Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea
2018-02-01
The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once, stored in a BB (foundry-dependent) library, and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.
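The generalized-polynomial-chaos machinery can be sketched in one dimension: project a response onto probabilists' Hermite polynomials by quadrature, then read the moments directly off the coefficients, with no Monte Carlo loop. The response function below is a placeholder, not a real building-block macro-model:

```python
# Toy gPC sketch for one standard-normal fabrication parameter xi.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def response(xi):                   # placeholder, e.g. phase vs. width error
    return np.sin(1.0 + 0.3 * xi)

P = 6                               # gPC order
nodes, weights = hermegauss(30)     # weight exp(-x^2/2); normalize below
norm = weights / np.sqrt(2 * np.pi)

# projection: c_k = E[f(xi) He_k(xi)] / k!  (orthogonality: E[He_k^2] = k!)
fact = np.cumprod([1.0] + list(range(1, P + 1)))
c = np.array([np.sum(norm * response(nodes) * hermeval(nodes, np.eye(P + 1)[k]))
              / fact[k] for k in range(P + 1)])

mean = c[0]
variance = np.sum(c[1:] ** 2 * fact[1:])
print(f"gPC mean = {mean:.4f}, variance = {variance:.6f}")
```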
Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich
2015-01-01
Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.
Modeling and prototyping of biometric systems using dataflow programming
NASA Astrophysics Data System (ADS)
Minakova, N.; Petrov, I.
2018-01-01
The development of biometric systems is a labor-intensive process, so approaches and techniques that streamline it are of current interest. This article presents a technique for modeling and prototyping biometric systems based on dataflow programming. The technique includes three main stages: the development of functional blocks, the creation of a dataflow graph, and the generation of a prototype. A specially developed software modeling environment that implements this technique is described. As an example of the use of this technique, the implementation of an iris localization subsystem is demonstrated. A modification of dataflow programming is suggested to solve the problem of the undefined order of block activation. The main advantages of the presented technique are the ability to visually display and design the model of the biometric system, the rapid creation of a working prototype, and the reuse of previously developed functional blocks.
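A dataflow prototype of this kind reduces to a tiny executor: functional blocks become callables, the graph fixes their dependencies, and a topological order resolves the otherwise undefined activation order. The block names and functions below are hypothetical stand-ins.

```python
# Toy dataflow executor: fire blocks in dependency (topological) order.
from graphlib import TopologicalSorter

blocks = {
    "load_image": lambda inputs: "raw-image",
    "find_iris":  lambda inputs: f"iris({inputs['load_image']})",
    "normalize":  lambda inputs: f"norm({inputs['find_iris']})",
}
edges = {"find_iris": {"load_image"}, "normalize": {"find_iris"}}

results = {}
for name in TopologicalSorter(edges).static_order():
    deps = {d: results[d] for d in edges.get(name, ())}
    results[name] = blocks[name](deps)
print(results["normalize"])   # norm(iris(raw-image))
```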
Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O'Connor, Mary; Shapiro, Bruce A
2008-10-01
One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes.
Tracing regulatory routes in metabolism using generalised supply-demand analysis.
Christensen, Carl D; Hofmeyr, Jan-Hendrik S; Rohwer, Johann M
2015-12-03
Generalised supply-demand analysis is a conceptual framework that views metabolism as a molecular economy. Metabolic pathways are partitioned into so-called supply and demand blocks that produce and consume a particular intermediate metabolite. By studying the response of these reaction blocks to perturbations in the concentration of the linking metabolite, different regulatory routes of interaction between the metabolite and its supply and demand blocks can be identified and their contribution quantified. These responses are mediated not only through direct substrate/product interactions, but also through allosteric effects. Here we subject previously published kinetic models of pyruvate metabolism in Lactococcus lactis and aspartate-derived amino acid synthesis in Arabidopsis thaliana to generalised supply-demand analysis. Multiple routes of regulation are brought about by different mechanisms in each model, leading to behavioural and regulatory patterns that are generally difficult to predict from simple inspection of the reaction networks depicting the models. In the pyruvate model the moiety-conserved cycles of ATP/ADP and NADH/NAD(+) allow otherwise independent metabolic branches to communicate. This causes the flux of one ATP-producing reaction block to increase in response to an increasing ATP/ADP ratio, while an NADH-consuming block flux decreases in response to an increasing NADH/NAD(+) ratio for certain ratio value ranges. In the aspartate model, aspartate semialdehyde can inhibit its supply block directly or by increasing the concentration of two amino acids (Lys and Thr) that occur as intermediates in demand blocks and act as allosteric inhibitors of isoenzymes in the supply block. These different routes of interaction from aspartate semialdehyde are each seen to contribute differently to the regulation of the aspartate semialdehyde supply block. Indirect routes of regulation between a metabolic intermediate and a reaction block that either produces or consumes this intermediate can play a much larger regulatory role than routes mediated through direct interactions. These indirect routes of regulation can also result in counter-intuitive metabolic behaviour. Performing generalised supply-demand analysis on two previously published models demonstrated the utility of this method as an entry point in the analysis of metabolic behaviour and the potential for obtaining novel results from previously analysed models by using new approaches.
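The rate-characteristic analysis at the heart of the method is easy to reproduce for a toy system: scan the supply and demand block rates against the linking metabolite concentration, then read off the steady state and the elasticities (log-log slopes). The kinetic forms and parameters below are illustrative choices, not the published models.

```python
# Toy supply-demand characteristic around one linking metabolite p.
import numpy as np

def v_supply(p, Vmax=10.0, Ki=0.5, n=4):
    # Supply block inhibited by its product (allosteric feedback, assumed)
    return Vmax / (1.0 + (p / Ki) ** n)

def v_demand(p, Vmax=8.0, Km=1.0):
    # Michaelis-Menten demand block consuming the intermediate (assumed)
    return Vmax * p / (Km + p)

p = np.logspace(-2, 1, 400)            # linking metabolite concentration
i = int(np.argmin(np.abs(v_supply(p) - v_demand(p))))
print(f"steady state near p = {p[i]:.3f}, flux = {v_demand(p[i]):.3f}")

# Elasticities: local slopes in log-log space, evaluated at steady state
eps_supply = np.gradient(np.log(v_supply(p)), np.log(p))[i]
eps_demand = np.gradient(np.log(v_demand(p)), np.log(p))[i]
print(f"supply elasticity {eps_supply:.2f}, demand elasticity {eps_demand:.2f}")
```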
PETRORISK: a risk assessment framework for petroleum substances.
Redman, Aaron D; Parkerton, Thomas F; Comber, Mike H I; Paumen, Miriam Leon; Eadsforth, Charles V; Dmytrasz, Bhodan; King, Duncan; Warren, Christopher S; den Haan, Klaas; Djemel, Nadia
2014-07-01
PETRORISK is a modeling framework used to evaluate the environmental risk of petroleum substances, and human exposure via environmental routes, due to emissions under typical use conditions, as required by the European regulation for the Registration, Evaluation, Authorization and Restriction of Chemicals (REACH). Petroleum substances are often complex substances composed of hundreds to thousands of individual hydrocarbons. The physicochemical, fate, and effects properties of the individual constituents within a petroleum substance can vary over several orders of magnitude, complicating risk assessment. PETRORISK combines the risk assessment strategies used for single chemicals with the hydrocarbon block approach to model complex substances. Blocks are usually defined by available analytical characterization data on substances, expressed in terms of mass fractions for different structural chemical classes specified as a function of carbon number or boiling point range. The physicochemical and degradation properties of the blocks are determined by the properties of representative constituents in each block. Emissions and predicted exposure concentrations (PEC) are then modeled using mass-weighted individual representative constituents. Overall risk for various environmental compartments at the regional and local level is evaluated by comparing the PECs for individual representative constituents to corresponding predicted no-effect concentrations (PNEC) derived using the Target Lipid Model. Risks to human health are evaluated by comparing the overall predicted human dose resulting from multimedia environmental exposure to a substance-specific derived no-effect level (DNEL). A case study is provided to illustrate how this modeling approach has been applied to assess the risks of kerosene manufacture and use as a fuel.
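As a schematic of the hydrocarbon block bookkeeping (not the actual PETRORISK implementation), the sketch below sums per-block exposure-to-effect ratios; the block names, mass fractions, PECs, and PNECs are all invented for the example.

```python
# One plausible reading of the block-wise risk summation, with made-up data.
blocks = [
    # (block name, mass fraction, unit-emission PEC [ug/L], PNEC [ug/L])
    ("n-alkanes C10-C12",   0.30, 2.0, 12.0),
    ("aromatics C9-C10",    0.25, 6.0,  4.0),
    ("cycloalkanes C8-C10", 0.45, 1.2, 20.0),
]

rcr = 0.0
for name, frac, pec_unit, pnec in blocks:
    pec = frac * pec_unit      # mass-weighted exposure for this block
    rcr += pec / pnec          # toxic-unit style summation across blocks
print(f"overall risk characterization ratio: {rcr:.2f}")  # <1 deemed acceptable
```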
Can High Bandwidth and Latency Justify Large Cache Blocks in Scalable Multiprocessors?
1994-01-01
[Fragmentary OCR excerpt] … 400 MB/second. Dubnicki's work used trace-driven simulation, with traces collected on an 8-processor machine. … Figure 17: Miss rate of Ind Blocked LU. Figure 18: MCPR of Ind Blocked LU. … This approach assumes that the model parameters we collect from simulations with infinite bandwidth (such as the miss rate) …
Present-day velocity field and block kinematics of Tibetan Plateau from GPS measurements
NASA Astrophysics Data System (ADS)
Wang, Wei; Qiao, Xuejun; Yang, Shaomin; Wang, Dijin
2017-02-01
In this study, we present a new synthesis of GPS velocities for tectonic deformation within the Tibetan Plateau and its surrounding areas, a combined data set of ~1854 GPS-derived horizontal velocity vectors. Assuming that crustal deformation is localized along major faults, a block modelling approach is employed to interpret the GPS velocity field. We construct a 30-element block model to describe present-day deformation in western China, with half of the blocks located within the Tibetan Plateau and the remainder in its surrounding areas. We model the GPS velocities simultaneously for the effects of block rotations and elastic strain induced by the bounding faults. Our model yields a good fit to the GPS data, with a mean residual of 1.08 mm a⁻¹ compared to the mean uncertainty of 1.36 mm a⁻¹ for each velocity component, indicating good agreement between the predicted and observed velocities. The major strike-slip faults such as the Altyn Tagh, Xianshuihe, Kunlun and Haiyuan faults have relatively uniform slip rates in the range of 5-12 mm a⁻¹ along most of their segments, and the estimated fault slip rates agree well with previous geologic and geodetic results. Blocks having significant residuals are located at the southern and southeastern Tibetan Plateau, suggesting complex tectonic settings and the need for a more accurate definition of block geometry in these regions.
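The rigid-rotation part of such a block model is compact enough to sketch: each block's surface velocity is the cross product of its Euler rotation vector with the site position. The pole, rate, and site below are invented for illustration; a real model adds the elastic strain from the bounding faults.

```python
# Rigid-block velocity prediction v = omega x r (illustrative numbers only).
import numpy as np

R = 6.371e6  # Earth radius, m

def lonlat_to_xyz(lon, lat):
    lon, lat = np.radians(lon), np.radians(lat)
    return R * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

# Hypothetical Euler vector for one block, ~0.4 deg/Ma rotation rate
omega = np.radians(0.4e-6) * np.array([0.2, -0.5, 0.8])   # rad/a
r = lonlat_to_xyz(91.0, 30.0)         # a GPS site on the Tibetan Plateau
v = np.cross(omega, r)                # m/a, Earth-centered coordinates
print("predicted velocity magnitude: %.1f mm/a" % (1e3 * np.linalg.norm(v)))
```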
A general U-block model-based design procedure for nonlinear polynomial control systems
NASA Astrophysics Data System (ADS)
Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua
2016-10-01
The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not rigorously defined) in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their own ad hoc applications. Formally, this is the first paper to present U-model-oriented control system design in a rigorous way and to study the associated properties and theorems; previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous, formal and comprehensive studies.
Incorporating GIS and remote sensing for census population disaggregation
NASA Astrophysics Data System (ADS)
Wu, Shuo-Sheng 'Derek'
Census data are the primary source of demographic data for a variety of research and applications. For confidentiality and administrative reasons, census data are usually released to the public in aggregated areal units. In the United States, the smallest census unit is the census block. Due to data aggregation, users of census data may have problems in visualizing population distribution within census blocks and estimating population counts for areas not coinciding with census block boundaries. The main purpose of this study is to develop methodology for estimating sub-block areal populations and assessing the estimation errors. The City of Austin, Texas was used as a case study area. Based on tax parcel boundaries and parcel attributes derived from ancillary GIS and remote sensing data, detailed urban land use classes were first classified using a per-field approach. After that, statistical models by land use class were built to infer population density from other predictor variables, including four census demographic statistics (the Hispanic percentage, the married percentage, the unemployment rate, and per capita income) and three physical variables derived from remote sensing images and building footprint vector data (a landscape heterogeneity statistic, a building pattern statistic, and a building volume statistic). In addition to the statistical models, deterministic models were proposed to directly infer populations from building volumes and three housing statistics: the average space per housing unit, the housing unit occupancy rate, and the average household size. After the population models were derived or proposed, how well they predict populations for another set of sample blocks was assessed. The results show that the deterministic models were more accurate than the statistical models. Further, by simulating the base unit for modeling from aggregated blocks, I assessed how well the deterministic models estimate sub-unit-level populations, and also assessed the aggregation effects and the rescaling effects on sub-unit estimates. Lastly, from another set of mixed-land-use sample blocks, a mixed-land-use model was derived and compared with a residential-land-use model. The results of the per-field land use classification are satisfactory, with a Kappa accuracy statistic of 0.747. Model assessments by land use show that population estimates for multi-family land use areas have higher errors than those for single-family land use areas, and population estimates for mixed land use areas have higher errors than those for residential land use areas. The assessments of sub-unit estimates using a simulation approach indicate that smaller areas show higher estimation errors, that estimation errors do not relate to the base unit size, and that rescaling improves all levels of sub-unit estimates.
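The deterministic model lends itself to a short worked example following the formula the abstract describes (population from building volume and the three housing statistics); all numbers below are hypothetical.

```python
# Deterministic sub-block population estimate from building volume.
building_volume_m3 = 48_000.0   # summed residential building volume
space_per_unit_m3 = 400.0       # average space per housing unit (assumed)
occupancy_rate = 0.93           # housing unit occupancy rate (assumed)
household_size = 2.4            # average household size (assumed)

housing_units = building_volume_m3 / space_per_unit_m3      # 120 units
population = housing_units * occupancy_rate * household_size
print(f"estimated population: {population:.0f}")             # ~268
```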
Zakary, Omar; Rachik, Mostafa; Elmouki, Ilias
2017-08-01
First, we devise in this paper a multi-region discrete-time model which describes the spatial-temporal spread of an epidemic that starts in one region and spreads to regions connected with their neighbors by any kind of anthropological movement. We suppose homogeneous Susceptible-Infected-Removed (SIR) populations, and we consider in our simulations a grid of colored cells, which represents the whole domain affected by the epidemic, while each cell can represent a sub-domain or region. Second, in order to minimize the number of infected individuals in one region, we propose an optimal control approach based on a travel-blocking vicinity strategy which aims to control only one cell by restricting movements of infected people coming from all neighboring cells. Thus, we show the influence of the optimal control approach on the controlled cell. We should also note that the cellular modeling approach we propose here can also describe the infection dynamics of regions which are not necessarily attached to one another, even if no empty space can be seen between cells. The theoretical method we follow for the characterization of the travel-blocking optimal controls is based on a discrete version of Pontryagin's maximum principle, while the numerical approach applied to the multi-point boundary value problems we obtain is based on discrete progressive-regressive iterative schemes. We illustrate our modeling and control approaches by giving an example of 100 regions.
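A stripped-down version of the cellular SIR grid with a travel-blocking control on one target cell can be simulated in a few lines. Grid size, rates, mobility, and the constant control below are illustrative assumptions (the paper derives the control optimally via Pontryagin's principle, which this sketch does not do).

```python
# Multi-region discrete-time SIR with travel-blocking on one cell.
import numpy as np

n = 5                               # 5x5 grid of regions
beta, gamma, m = 0.4, 0.1, 0.02     # infection, recovery, mobility (assumed)
S = np.full((n, n), 990.0); I = np.zeros((n, n)); R = np.zeros((n, n))
I[0, 0] = 10.0                      # epidemic starts in one corner region
target, u = (2, 2), 0.0             # controlled cell; u=0 blocks all inflow

def neighbor_inflow(I, u):
    inflow = np.zeros_like(I)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        inflow += np.roll(np.roll(I, dr, 0), dc, 1)  # periodic for brevity
    inflow *= m
    inflow[target] *= u             # travel-blocking on the controlled cell
    return inflow

for t in range(120):
    N = S + I + R
    new_inf = beta * S * (I + neighbor_inflow(I, u)) / N
    S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I

print(f"infected in controlled cell:   {I[target]:.1f}")
print(f"infected in uncontrolled cell: {I[2, 3]:.1f}")
```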
Robust estimation of carotid artery wall motion using the elasticity-based state-space approach.
Gao, Zhifan; Xiong, Huahua; Liu, Xin; Zhang, Heye; Ghista, Dhanjoo; Wu, Wanqing; Li, Shuo
2017-04-01
The dynamics of the carotid artery wall has been recognized as a valuable indicator for evaluating the status of atherosclerotic disease in the preclinical stage. However, it is still a challenge to accurately measure these dynamics from ultrasound images. This paper aims at developing an elasticity-based state-space approach for accurately measuring the two-dimensional motion of the carotid artery wall from ultrasound imaging sequences. In our approach, we have employed a linear elasticity model of the carotid artery wall and converted it into a state-space equation. The two-dimensional motion of the carotid artery wall is then computed by solving this state-space equation using the H∞ filter and the block matching method. In addition, a parameter training strategy is proposed for dealing with the parameter initialization problem. In our experiment, we have also developed an evaluation function to measure the tracking accuracy of the motion of the carotid artery wall by considering the influence of the sizes of the two blocks (acquired by our approach and by manual tracing) containing the same carotid wall tissue and their degree of overlap. We then compared the performance of our approach with the manually traced results drawn by three medical physicians on 37 healthy subjects and 103 unhealthy subjects. The results showed that our approach was highly correlated (Pearson's correlation coefficient equals 0.9897 for the radial motion and 0.9536 for the longitudinal motion) and agreed well (the width of the 95% confidence interval is 89.62 µm for the radial motion and 387.26 µm for the longitudinal motion) with the manual tracing method. We also compared our approach to three kinds of previous methods: conventional block matching methods, Kalman-based block matching methods, and optical flow. Altogether, we have been able to successfully demonstrate the efficacy of our elasticity-model based state-space approach (EBS) for more accurate tracking of the two-dimensional motion of the carotid artery wall, towards more effective assessment of the status of atherosclerotic disease in the preclinical stage.
Multi-purpose wind tunnel reaction control model block
NASA Technical Reports Server (NTRS)
Dresser, H. S.; Daileda, J. J. (Inventor)
1978-01-01
A reaction control system nozzle block is provided for testing the response characteristics of space vehicles to a variety of reaction control thruster configurations. A pressurized air system is connected with the supply lines which lead to the individual jet nozzles. Each supply line terminates in a compact cylindrical plenum volume, axially perpendicular and adjacent to the throat of the jet nozzle. The volume of the cylindrical plenum is sized to provide uniform thrust characteristics from each jet nozzle irrespective of the angle of approach of the supply line to the plenum. Each supply line may be plugged or capped to stop the air supply to selected jet nozzles, thereby enabling a variety of nozzle configurations to be obtained from a single model nozzle block.
TAD-free analysis of architectural proteins and insulators.
Mourad, Raphaël; Cuvier, Olivier
2018-03-16
The three-dimensional (3D) organization of the genome is intimately related to numerous key biological functions, including the regulation of gene expression and DNA replication. The mechanisms by which molecular drivers functionally organize the 3D genome, for instance into topologically associating domains (TADs), remain to be explored. Current approaches consist in assessing the enrichments or influences of proteins at TAD borders. Here, we propose a TAD-free model to directly estimate the blocking effects of architectural proteins, insulators and DNA motifs on long-range contacts, making the model intuitive and biologically meaningful. In addition, the model allows analyzing the whole Hi-C information content (2D information) instead of only focusing on TAD borders (1D information). The model outperforms multiple logistic regression at TAD borders in terms of parameter estimation accuracy and is validated by enhancer-blocking assays. In Drosophila, the results support the insulating role of simple sequence repeats and suggest that the blocking effects depend on the number of repeats. Motif analysis uncovered the roles of the transcription factors pannier and tramtrack in blocking long-range contacts. In human, the results suggest that the blocking effects of the well-known architectural proteins CTCF, cohesin and ZNF143 depend on the distance between loci, where each protein may participate at different scales of the 3D chromatin organization.
Test aspects of the JPL Viterbi decoder
NASA Technical Reports Server (NTRS)
Breuer, M. A.
1989-01-01
The generation of test vectors and design-for-test aspects of the Jet Propulsion Laboratory (JPL) Very Large Scale Integration (VLSI) Viterbi decoder chip are discussed. Each processor integrated circuit (IC) contains over 20,000 gates. To achieve a high degree of testability, a scan architecture is employed. The logic has been partitioned so that very few test vectors are required to test the entire chip. In addition, since several blocks of logic are replicated numerous times on this chip, test vectors need only be generated for each block, rather than for the entire circuit. These unique blocks of logic have been identified and test sets generated for them. The approach employed for testing was to use pseudo-exhaustive test vectors whenever feasible; that is, each cone of logic is tested exhaustively. Using this approach, no detailed logic design or fault model is required. All faults which modify the function of a block of combinational logic are detected, such as all irredundant single and multiple stuck-at faults.
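The pseudo-exhaustive idea is simple to demonstrate: every output cone is exercised over all combinations of its own inputs, so each cone needs only 2^k vectors for k cone inputs rather than 2^n for the whole block. The two-cone example circuit below is hypothetical.

```python
# Pseudo-exhaustive test generation: exhaustive vectors per output cone.
from itertools import product

# Cones: output name -> (input names, boolean function)
cones = {
    "y0": (("a", "b", "c"), lambda a, b, c: (a and b) or c),
    "y1": (("c", "d"),      lambda c, d: c != d),
}

for out, (ins, fn) in cones.items():
    vectors = list(product((0, 1), repeat=len(ins)))  # 2^k vectors per cone
    truth = [int(fn(*v)) for v in vectors]
    print(f"{out}: {len(vectors)} exhaustive vectors over {ins}, "
          f"responses {truth}")
```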
Manning, Andrew H.; Solomon, D. Kip
2005-01-01
The subsurface transfer of water from a mountain block to an adjacent basin (mountain block recharge (MBR)) is a commonly invoked mechanism of recharge to intermountain basins. However, MBR estimates are highly uncertain. We present an approach to characterize bulk fluid circulation in a mountain block and thus MBR that utilizes environmental tracers from the basin aquifer. Noble gas recharge temperatures, groundwater ages, and temperature data combined with heat and fluid flow modeling are used to identify clearly improbable flow regimes in the southeastern Salt Lake Valley, Utah, and adjacent Wasatch Mountains. The range of possible MBR rates is reduced by 70%. Derived MBR rates (5.5–12.6 × 10⁴ m³ d⁻¹) are on the same order of magnitude as previous large estimates, indicating that significant MBR to intermountain basins is plausible. However, derived rates are 50–100% of the lowest previous estimate, meaning total recharge is probably less than previously thought.
Block oscillation model for impact crater collapse
NASA Astrophysics Data System (ADS)
Ivanov, B. A.; Kostuchenko, V. N.
1997-03-01
Previous investigations of impact crater formation mechanics have shown that the late stage, a transient cavity collapse in a gravity field, may be modeled with traditional rock mechanics if one ascribes very specific mechanical properties to the rock in the vicinity of the crater: the effective strength needed is around 30 bar, and the effective angle of internal friction below 5 deg. Rock media with such properties may be supposed 'temporarily fluidized'. The nature of this fluidization is poorly understood; an acoustic (vibration) origin has been suggested. This model now seems to be the best approach to the problem. The open question is how to implement this model (or other possible models) in a hydrocode for numerical simulation of dynamic crater collapse. We study more relevant models of the mechanical behavior of rocks during cratering. The specific feature of rock deformation is that the rock medium deforms not as a plastic metal-like continuum, but as a system of discrete rock blocks. Deep drilling of impact craters has revealed systems of rock blocks 50 m to 200 m in size. We use a model of these block oscillations to formulate an appropriate rheological law for the subcrater flow during the modification stage.
A simple theory of molecular organization in fullerene-containing liquid crystals
NASA Astrophysics Data System (ADS)
Peroukidis, S. D.; Vanakaras, A. G.; Photinos, D. J.
2005-10-01
Systematic efforts to synthesize fullerene-containing liquid crystals have produced a variety of successful model compounds. We present a simple molecular theory, based on the interconverting shape approach [Vanakaras and Photinos, J. Mater. Chem. 15, 2002 (2005)], that relates the self-organization observed in these systems to their molecular structure. The interactions are modeled by dividing each molecule into a number of submolecular blocks to which specific interactions are assigned. Three types of blocks are introduced, corresponding to fullerene units, mesogenic units, and nonmesogenic linkage units. The blocks are constrained to move on a cubic three-dimensional lattice and molecular flexibility is allowed by retaining a number of representative conformations within the block representation of the molecule. Calculations are presented for a variety of molecular architectures including twin mesogenic branch monoadducts of C60, twin dendromesogenic branch monoadducts, and conical (badminton shuttlecock) multiadducts of C60. The dependence of the phase diagrams on the interaction parameters is explored. In spite of its many simplifications and the minimal molecular modeling used (three types of chemically distinct submolecular blocks with only repulsive interactions), the theory accounts remarkably well for the phase behavior of these systems.
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and the low-permeability rock matrix, including complex mass transfer processes, leads to the typical characteristics of early bulk arrivals and long tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They decrease the complexity of models by aggregating and transferring their parameters to coarser scales, and so drastically reduce computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale (upscaled) modeling approach based on block upscaling. The novelty is that individual blocks are defined by, and aligned with, the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks at the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, to be matched with the MRMT model. In predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches the corresponding fine-scale reference solution reasonably well. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases; this is compensated to some extent by the MRMT model, although the MRMT model loses its effect if it becomes too complex. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through the flow alignment. Thus, the block-scale transverse dispersivities remain of similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. The flow-aligned blocks also minimize numerical dispersion when solving the large-scale transport problem.
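The scale-transfer step is essentially moment bookkeeping: per block, the temporal moments of particle arrival times summarize what the coarse MRMT model must reproduce. The sketch below computes the first moments for a synthetic arrival-time sample (the bimodal lognormal mix is an invented stand-in for conduit arrivals plus a matrix-diffusion tail).

```python
# Block-wise temporal moments of particle arrival times (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
t = np.concatenate([rng.lognormal(0.0, 0.3, 800),   # early conduit arrivals
                    rng.lognormal(2.0, 0.8, 200)])  # heavy late-time tail

m0 = t.size        # zeroth moment ~ recovered particle mass
m1 = t.mean()      # mean arrival time (advection)
m2c = t.var()      # centered second moment (dispersion / tailing)
print(f"m0={m0}, mean arrival={m1:.2f}, centered 2nd moment={m2c:.2f}")
```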
Firming-Up Core: A Collaborative Approach.
ERIC Educational Resources Information Center
McInnis, Bernadette
The Collaborative Probing Model (CPM) is a heuristic approach to writing across the disciplines that stresses discovery, process, and assessment. Faculty input will help the English department design an oral and written communication block that will be unified by a series of interdisciplinary videotaped presentations. CPM also uses flow charting…
Quantification of Hepatitis C Virus Cell-to-Cell Spread Using a Stochastic Modeling Approach
Martin, Danyelle N.; Perelson, Alan S.; Dahari, Harel
2015-01-01
It has been proposed that viral cell-to-cell transmission plays a role in establishing and maintaining chronic infections. Thus, understanding the mechanisms and kinetics of cell-to-cell spread is fundamental to elucidating the dynamics of infection and may provide insight into factors that determine chronicity. Because hepatitis C virus (HCV) spreads from cell to cell and has a chronicity rate of up to 80% in exposed individuals, we examined the dynamics of HCV cell-to-cell spread in vitro and quantified the effect of inhibiting individual host factors. Using a multidisciplinary approach, we performed HCV spread assays and assessed the appropriateness of different stochastic models for describing HCV focus expansion. To evaluate the effect of blocking specific host cell factors on HCV cell-to-cell transmission, assays were performed in the presence of blocking antibodies and/or small-molecule inhibitors targeting different cellular HCV entry factors. In all experiments, HCV-positive cells were identified by immunohistochemical staining and the number of HCV-positive cells per focus was assessed to determine focus size. We found that HCV focus expansion can best be explained by mathematical models assuming focus size-dependent growth. Consistent with previous reports suggesting that some factors impact HCV cell-to-cell spread to different extents, modeling results estimate a hierarchy of efficacies for blocking HCV cell-to-cell spread when targeting different host factors (e.g., CLDN1 > NPC1L1 > TfR1). This approach can be adapted to describe focus expansion dynamics under a variety of experimental conditions as a means to quantify cell-to-cell transmission and assess the impact of cellular factors, viral factors, and antivirals. IMPORTANCE The ability of viruses to efficiently spread by direct cell-to-cell transmission is thought to play an important role in the establishment and maintenance of viral persistence. As such, elucidating the dynamics of cell-to-cell spread and quantifying the effect of blocking the factors involved has important implications for the design of potent antiviral strategies and controlling viral escape. Mathematical modeling has been widely used to understand HCV infection dynamics and treatment response; however, these models typically assume only cell-free virus infection mechanisms. Here, we used stochastic models describing focus expansion as a means to understand and quantify the dynamics of HCV cell-to-cell spread in vitro and determined the degree to which cell-to-cell spread is reduced when individual HCV entry factors are blocked. The results demonstrate the ability of this approach to recapitulate and quantify cell-to-cell transmission, as well as the impact of specific factors and potential antivirals. PMID:25833046
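A focus-size-dependent growth model of the kind the abstract favors can be simulated as a pure birth process whose event rate scales with the current focus size, with entry-factor blocking expressed as a reduction of the spread rate. The Gillespie-style sketch below uses invented rate constants and blocking efficacy.

```python
# Stochastic focus expansion with size-dependent growth (toy parameters).
import numpy as np

def simulate_focus(rate, t_end, blocking_efficacy=0.0, seed=0):
    rng = np.random.default_rng(seed)
    k = rate * (1.0 - blocking_efficacy)  # e.g., an entry-factor blocker
    n, t = 1, 0.0                         # start from one infected cell
    while t < t_end:
        t += rng.exponential(1.0 / (k * n))   # size-dependent event rate
        if t < t_end:
            n += 1
    return n

sizes = [simulate_focus(0.05, 72.0, seed=s) for s in range(200)]
blocked = [simulate_focus(0.05, 72.0, 0.6, seed=s) for s in range(200)]
print(f"mean focus size: {np.mean(sizes):.1f} vs blocked {np.mean(blocked):.1f}")
```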
Exact algorithms for haplotype assembly from whole-genome sequence data.
Chen, Zhi-Zhong; Deng, Fei; Wang, Lusheng
2013-08-15
Haplotypes play a crucial role in genetic analysis and have many applications such as gene-disease diagnosis, association studies, ancestry inference and so forth. The development of DNA sequencing technologies makes it possible to obtain haplotypes from a set of aligned reads originating from both copies of a chromosome of a single individual. This approach is often known as haplotype assembly. Exact algorithms that can give optimal solutions to the haplotype assembly problem are highly demanded. Unfortunately, previous algorithms for this problem either fail to output optimal solutions or take too long, even when executed on a PC cluster. We develop an approach to finding optimal solutions for the haplotype assembly problem under the minimum-error-correction (MEC) model. Most previous approaches assume that the columns in the input matrix correspond to (putative) heterozygous sites. This all-heterozygous assumption is correct for most columns, but it may be incorrect for a small number of columns. In this article, we consider the MEC model with and without the all-heterozygous assumption. In our approach, we first use new methods to decompose the input read matrix into small independent blocks and then model the problem for each block as an integer linear programming problem, which is then solved by an integer linear programming solver. We have tested our program on a single PC (a Linux x64 desktop PC with an i7-3960X CPU), using the filtered HuRef and NA12878 datasets (after applying some variant calling methods). With the all-heterozygous assumption, our approach can optimally solve the whole HuRef data set within a total time of 31 h (26 h for the most difficult block of the 15th chromosome and only 5 h for the other blocks). To our knowledge, this is the first time that MEC optimal solutions have been completely obtained for the filtered HuRef dataset. Moreover, in the general case (without the all-heterozygous assumption), our approach can optimally solve all the chromosomes of the HuRef dataset, except the most difficult block in chromosome 15, within a total time of 12 days. For both the HuRef and NA12878 datasets, the optimal costs in the general case are sometimes much smaller than those in the all-heterozygous case. This implies that some columns in the input matrix (after applying certain variant calling methods) still correspond to false heterozygous sites. Our program and the optimal solutions found for the HuRef dataset are available at http://rnc.r.dendai.ac.jp/hapAssembly.html.
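The MEC objective on one independent block is easy to illustrate at toy scale by brute force (the actual method hands each block to an ILP solver instead). Alleles are 0/1, -1 marks sites a read does not cover, and the complementary haplotype pair encodes the all-heterozygous assumption; the read matrix is made up.

```python
# Brute-force MEC on a tiny read-matrix block (illustration only).
from itertools import product
import numpy as np

reads = np.array([[0,  1, -1],
                  [0,  1,  1],
                  [1,  0,  0],
                  [1, -1,  0]])

def corrections(read, hap):
    cov = read >= 0                       # sites covered by this read
    return int(np.sum(read[cov] != hap[cov]))

best = None
for h in product((0, 1), repeat=reads.shape[1]):
    h1 = np.array(h); h2 = 1 - h1         # all-heterozygous: complementary pair
    cost = sum(min(corrections(r, h1), corrections(r, h2)) for r in reads)
    if best is None or cost < best[0]:
        best = (cost, h1, h2)
print(f"MEC cost={best[0]}, haplotypes {best[1]} / {best[2]}")
```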
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors by at least 30% relative to the ground-truth motions in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
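The block-matching half of such a hybrid reduces to an exhaustive shift search that minimizes a dissimilarity measure (MAE here); real pipelines then refine the estimate with the optical-flow stage. The synthetic frames below are invented for the demonstration.

```python
# Minimal block-matching motion estimation on synthetic frames.
import numpy as np

rng = np.random.default_rng(3)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (2, -3), axis=(0, 1))   # known synthetic motion

def match_block(f0, f1, y, x, size=16, search=5):
    block = f0[y:y + size, x:x + size]
    best = (np.inf, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = f1[y + dy:y + dy + size, x + dx:x + dx + size]
            mae = np.abs(block - cand).mean()
            if mae < best[0]:
                best = (mae, (dy, dx))
    return best[1]

print("estimated motion:", match_block(frame0, frame1, 24, 24))  # (2, -3)
```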
Huber, Heinrich J; Connolly, Niamh M C; Dussmann, Heiko; Prehn, Jochen H M
2012-03-01
We devised an approach to extract control principles of cellular bioenergetics for intact and impaired mitochondria from ODE-based models, and applied it to a recently established bioenergetic model of cancer cells. The approach uses two methods for varying ODE model parameters to determine those model components that, either alone or in combination with other components, most decisively regulate bioenergetic state variables. We found that, while polarisation of the mitochondrial membrane potential (ΔΨ(m)) and, therefore, the protonmotive force were critically determined by respiratory complex I activity in healthy mitochondria, complex III activity was dominant for ΔΨ(m) under conditions of cytochrome-c deficiency. As a further important result, cellular bioenergetics in healthy, ATP-producing mitochondria was regulated by three parameter clusters that describe (1) mitochondrial respiration, (2) ATP production and consumption and (3) coupling of ATP production and respiration. These parameter clusters resembled the metabolic blocks and their intermediaries of top-down control analyses. However, the parameter clusters changed significantly when cells changed from low to high ATP levels or when mitochondria were considered to be impaired by loss of cytochrome-c. This change suggests that the assumption of static metabolic blocks made by conventional top-down control analyses is not valid under these conditions. Our approach is complementary to both ODE and top-down control analysis approaches and allows better insight into cellular bioenergetics and its pathological alterations.
Molenaar, Heike; Boehm, Robert; Piepho, Hans-Peter
2017-01-01
Robust phenotypic data allow adequate statistical analysis and are crucial for any breeding purpose. Such data are obtained from experiments laid out to best control local variation. Additionally, experiments frequently involve two phases, each contributing environmental sources of variation. For example, in a former experiment we conducted to evaluate production-related traits in Pelargonium zonale, there were two consecutive phases, each performed in a different greenhouse. Phase one involved the propagation of the breeding strains to obtain the stem cutting count, and phase two involved the assessment of root formation. The evaluation of the former study raised questions regarding options for improving the experimental layout: (i) Is there a disadvantage to using exactly the same design in both phases? (ii) Instead of generating a separate layout for each phase, can the design be optimized across both phases, such that the mean variance of a pairwise treatment difference (MVD) is decreased? To answer these questions, alternative approaches were explored to generate two-phase designs either in phase-wise order (Option 1) or across phases (Option 2). In Option 1 we considered the scenarios of (i) using the same experimental design in both phases and (ii) randomizing each phase separately. In Option 2, we considered the scenarios of (iii) generating a single design with eight replicates and splitting these among the two phases, (iv) separating the block structure across phases by dummy coding, and (v) design generation with optimal alignment of block units in the two phases. In both options, we considered the same or different block structures in each phase. The designs were evaluated by the MVD obtained from the intra-block analysis and from the joint inter-block-intra-block analysis. The smallest MVD was most frequently obtained for designs generated across phases rather than for each phase separately, in particular when both phases of the design were separated by a single pseudo-level. The joint optimization ensured that treatment concurrences were equally balanced across pairs, one of the prerequisites for an efficient design. The proposed alternative approaches can be implemented with any model-based design package with facilities to formulate linear models for treatment and block structures.
From spinning conformal blocks to matrix Calogero-Sutherland models
NASA Astrophysics Data System (ADS)
Schomerus, Volker; Sobko, Evgeny
2018-04-01
In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.
Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
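The storage idea can be sketched with PyWavelets: keep full-fidelity coefficients inside a saliency mask and aggressively threshold everything else, block by block. The random field, the mask placement, and the threshold below are toy assumptions, not the published pipeline.

```python
# Contextual (saliency-driven) wavelet compression sketch with PyWavelets.
import numpy as np
import pywt

field = np.random.default_rng(7).random((128, 128))   # stand-in for a slice
coeffs = pywt.wavedec2(field, "db2", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

salient = np.zeros_like(arr, dtype=bool)
salient[:, 40:90] = True            # pretend the wake region maps here

lossy = np.where(np.abs(arr) > 0.5, arr, 0.0)    # heavy thresholding
arr_ctx = np.where(salient, arr, lossy)          # lossless only where salient

recon = pywt.waverec2(pywt.array_to_coeffs(arr_ctx, slices,
                                           output_format="wavedec2"), "db2")
recon = recon[:128, :128]
err = np.sqrt(np.mean((recon - field) ** 2))
print(f"kept {100 * np.mean(arr_ctx != 0):.1f}% of coefficients, "
      f"RMS error {err:.4f}")
```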
Universality for 1d Random Band Matrices: Sigma-Model Approximation
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of $W\times W$ random Gaussian blocks (parametrized by $j,k \in \Lambda = [1,n]^d \cap \mathbb{Z}^d$) with a fixed entry variance $J_{jk} = \delta_{j,k}W^{-1} + \beta\Delta_{j,k}W^{-2}$, $\beta > 0$, in each block. Taking the limit $W \to \infty$ with fixed $n$ and $\beta$, we derive the sigma-model approximation of the second correlation function similar to Efetov's one. Then, considering the limit $\beta, n \to \infty$, we prove that in dimension $d=1$ the behaviour of the sigma-model approximation in the bulk of the spectrum, as $\beta \gg n$, is determined by the classical Wigner-Dyson statistics.
NASA Astrophysics Data System (ADS)
Chiessi, Vittorio; D'Orefice, Maurizio; Scarascia Mugnozza, Gabriele; Vitale, Valerio; Cannese, Christian
2010-07-01
This paper describes the results of a rockfall hazard assessment for the village of San Quirico (Abruzzo region, Italy) based on an engineering-geological model. After the collection of geological, geomechanical, and geomorphological data, the rockfall hazard assessment was performed using two separate approaches: i) simulation of the detachment of rock blocks and their downhill movement using a GIS; and ii) application of geostatistical techniques to the analysis of georeferenced observations of previously fallen blocks, in order to assess the probability of arrival of blocks from potential future collapses. The results show that the trajectographic analysis is significantly influenced by the input parameters, particularly the values of the coefficients of restitution. To address this problem, the model was calibrated against repeated field observations. The geostatistical approach is useful because it gives the best estimate of point-source phenomena such as rockfalls; however, the sensitivity of the results to the basic assumptions, e.g. the assessment of variograms and the choice of a threshold value, may be problematic. Consequently, interpolations derived from different variograms were compared, and those showing the lowest errors were adopted. The data sets that were statistically analysed concern both the kinetic energy and the surveyed rock blocks in the accumulation area. The resulting maps highlight areas susceptible to rock block arrivals, and show that the area accommodating the new settlement of San Quirico has the highest level of hazard according to both the probabilistic and the deterministic methods.
Lo, Kin Cheung; Hau, King In; Chan, Wai Kin
2018-04-05
Functional polymer/carbon nanotube (CNT) hybrid materials can serve as a good model for light harvesting systems based on CNTs. This paper presents the synthesis of block copolymer/CNT hybrids and the characterization of their photocurrent responses by both experimental and computational approaches. A series of functional diblock copolymers was synthesized by reversible addition-fragmentation chain transfer polymerization for the dispersion and functionalization of CNTs. The block copolymers contain photosensitizing ruthenium complexes and modified pyrene-based anchoring units. The photocurrent responses of the polymer/CNT hybrids were measured by photoconductive atomic force microscopy (PCAFM), and the experimental data were analyzed with rigorous statistical models. The differences in photocurrent response among the hybrids were correlated with the conformations of the hybrids, elucidated by molecular dynamics simulations, and with the electronic properties of the polymers. The photoresponse of the block copolymer/CNT hybrids can be enhanced by introducing an electron-accepting block between the photosensitizing block and the CNT. We have demonstrated that the application of a rigorous statistical methodology can unravel the charge transport properties of these hybrid materials and provide general guidelines for the design of molecular light harvesting systems.
An instance theory of associative learning.
Jamieson, Randall K; Crump, Matthew J C; Hannah, Samuel D
2012-03-01
We present and test an instance model of associative learning. The model, Minerva-AL, treats associative learning as cued recall. Memory preserves the events of individual trials in separate traces. A probe presented to memory contacts all traces in parallel and retrieves a weighted sum of the traces, a structure called the echo. Learning of a cue-outcome relationship is measured by the cue's ability to retrieve a target outcome. The theory predicts a number of associative learning phenomena, including acquisition, extinction, reacquisition, conditioned inhibition, external inhibition, latent inhibition, discrimination, generalization, blocking, overshadowing, overexpectation, superconditioning, recovery from blocking, recovery from overshadowing, recovery from overexpectation, backward blocking, backward conditioned inhibition, and second-order retrospective revaluation. We argue that associative learning is consistent with an instance-based approach to learning and memory.
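The cued-recall core of such an instance model fits in a few lines. The sketch below follows the MINERVA-style scheme the abstract describes (traces stored per trial, activation by similarity cubed, echo as the activation-weighted sum of traces); the feature coding of cues and outcome is an invented toy.

```python
# Instance-based cued recall: probe -> activations -> echo.
import numpy as np

traces = np.array([[1, 1, 0, 0, 1],    # cue A paired with outcome X
                   [1, 1, 0, 0, 1],
                   [0, 0, 1, 1, 0]],   # cue B with no outcome
                  dtype=float)

def echo(probe, traces):
    sim = traces @ probe / (np.linalg.norm(traces, axis=1)
                            * np.linalg.norm(probe) + 1e-12)
    act = sim ** 3                     # cubing preserves sign, sharpens recall
    return act @ traces                # activation-weighted sum of traces

probe = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # cue A alone, outcome unknown
print("echo:", np.round(echo(probe, traces), 2))  # outcome feature retrieved
```

Learning of a cue-outcome relationship then corresponds to the growing magnitude of the outcome feature in the echo as matching traces accumulate.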
Dollé, Laurent; Chavarriaga, Ricardo
2018-01-01
We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive-mapping, and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model allow us to explain previous, apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism for reconciling experimental results showing different influences of distal and proximal cues on behavior, different learning times, and different abilities of individuals to alternate between spatial and response strategies lies in the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms, which makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm best suited to subdividing each block type. The proposed hypothesis is that, given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites; it also produces more similar parcel shapes and patterns.
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2009-12-01
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
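To see the coupling pattern the paper addresses, consider two black-box subdomain "codes" that must agree on an interface temperature and matching fluxes. The sketch below applies a Newton (secant) iteration to the interface flux-matching residual; the 1-D physics and all parameters are toy assumptions, and this shows only the modular coupling pattern, not the authors' ABN algorithm itself.

```python
# Newton iteration on an interface residual between two black-box solvers.
from scipy.optimize import newton

def flux_left(T_iface, k=1.0, T_hot=10.0):
    # Black-box subdomain 1: conductive flux into the interface (assumed)
    return k * (T_hot - T_iface)

def flux_right(T_iface, h=2.5, T_cold=0.0):
    # Black-box subdomain 2: mildly nonlinear flux out of the interface
    return h * (T_iface - T_cold) ** 1.1

residual = lambda T: flux_left(T) - flux_right(T)   # flux-matching condition
T_star = newton(residual, x0=5.0)
print(f"interface temperature: {T_star:.4f}")
```

A Gauss-Seidel alternative would instead pass the interface state back and forth between the two solvers until (and if) it converges, which is the fragility the ABN method is designed to avoid.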
Developing the Model of Fuel Injection Process Efficiency Analysis for Injector for Diesel Engines
NASA Astrophysics Data System (ADS)
Anisimov, M. Yu; Kayukov, S. S.; Gorshkalev, A. A.; Belousov, A. V.; Gallyamov, R. E.; Lysenko, Yu D.
2018-01-01
The article proposes an approach for analysing the efficiency of the fuel injection process in an injector, constituting the development of calculation blocks in a common injector model within LMS Imagine.Lab AMESim. The parameters of the injector model correspond to a serial Common Rail-type injector with a solenoid. The possibilities of this approach are demonstrated with results for a modified injector. The results demonstrate the advantages of the proposed approach to analysing and assessing fuel injection quality.
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
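The flavor of the convex program can be sketched with cvxpy: a blockwise-constant mean surface on a grid of two discretized covariates, fit by least squares plus a group-fused penalty on adjacent rows and columns. This is a simplified form of the CRISP penalty; the data and the tuning parameter lam are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
q = 8
truth = np.add.outer(np.arange(q) >= 4, np.arange(q) >= 4) * 1.0
y = truth + 0.3 * rng.standard_normal((q, q))

M = cp.Variable((q, q))        # blockwise-constant mean surface
lam = 1.0
# group-fused penalty: l2 norms of differences between adjacent rows/columns
tv = cp.sum(cp.norm(M[1:, :] - M[:-1, :], axis=1)) + \
     cp.sum(cp.norm(M[:, 1:] - M[:, :-1], axis=0))
cp.Problem(cp.Minimize(cp.sum_squares(y - M) / 2 + lam * tv)).solve()
print(np.round(M.value, 2))    # sharp blocks emerge as rows/columns fuse
```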
Ponce-de-León, Miguel; Montero, Francisco; Peretó, Juli
2013-10-31
Metabolic reconstruction is the computation-based process that aims to elucidate the network of metabolites interconnected through reactions catalyzed by activities assigned to one or more genes. Reconstructed models may contain inconsistencies that appear as gap metabolites and blocked reactions. Although automatic methods for solving this problem have been developed previously, there are many situations where manual curation is still needed. We introduce a general definition of gap metabolite that allows its detection in a straightforward manner. Moreover, a method for the detection of Unconnected Modules, defined as isolated sets of blocked reactions connected through gap metabolites, is proposed. The method has been successfully applied to the curation of iCG238, the genome-scale metabolic model for the bacterium Blattabacterium cuenoti, obligate endosymbiont of cockroaches. We found the proposed approach to be a valuable tool for the curation of genome-scale metabolic models. The outcome of its application to the genome-scale model B. cuenoti iCG238 is a more accurate model version named B. cuenoti iMP240.
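Blocked reactions of the kind discussed here can be listed with standard constraint-based tooling such as cobrapy. A minimal sketch follows; the SBML file name is hypothetical, and the gap-metabolite test is a simplified proxy for the paper's formal definition, not its algorithm.

```python
import cobra
from cobra.flux_analysis import find_blocked_reactions

model = cobra.io.read_sbml_model("iCG238.xml")   # hypothetical file name
blocked = set(find_blocked_reactions(model))     # reactions that can carry no flux

# simplified proxy for a gap metabolite: every reaction touching it is blocked
gaps = [m.id for m in model.metabolites
        if m.reactions and all(r.id in blocked for r in m.reactions)]
print(f"{len(blocked)} blocked reactions, {len(gaps)} gap-metabolite candidates")
```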
Unsupervised Spatial, Temporal and Relational Models for Social Processes
2012-02-01
The report builds on the partitioning of Doreian and Mrvar [A partitioning approach to structural balance. Social Networks, 18(2):149-168, 1996], who demonstrate that there was increasing evidence over time that the foursome studied was a genuine group. Doreian and Mrvar used a block-modeling approach optimizing structural balance, a measure of cohesion.
Kavrut Ozturk, Nilgun; Kavakli, Ali Sait
2017-08-01
This prospective randomized study compared the coracoid and retroclavicular approaches to ultrasound-guided infraclavicular brachial plexus block (IBPB) in terms of needle tip and shaft visibility and quality of block. We hypothesized that the retroclavicular approach would increase needle tip and shaft visibility and decrease the number of needle passes compared to the coracoid approach. A total of 100 adult patients who received IBPB for upper limb surgery were randomized into two groups: a coracoid approach group (group C) and a retroclavicular approach group (group R). In group C, the needle was inserted 2 cm medial and 2 cm inferior to the coracoid process and directed from ventral to dorsal. In group R, the needle insertion point was posterior to the clavicle and the needle was advanced from cephalad to caudal. All ultrasound images were digitally stored for analysis. The primary aim of the present study was to compare needle tip and shaft visibility between the coracoid and retroclavicular approaches in patients undergoing upper limb surgery. The secondary aim was to investigate differences between the two groups in the number of needle passes, sensory and motor block success rates, surgical success rate, block performance time, block performance-related pain, patient satisfaction, use of supplemental local anesthetic and analgesic, and complications. Needle tip visibility and needle shaft visibility were significantly better in group R (p = 0.040 and p = 0.032, respectively). Block performance time and anesthesia-related time were significantly shorter in group R (p = 0.022 and p = 0.038, respectively). The number of needle passes was significantly lower in group R (p = 0.044). Paresthesia during block performance was significantly more frequent in group C (p = 0.045). There were no statistically significant differences between the two groups in terms of sensory or motor block success, surgical success, block-related pain, or patient satisfaction. The retroclavicular approach is associated with better needle tip and shaft visibility, reduced performance time and anesthesia-related time, less paresthesia during block performance, and fewer needle passes than the coracoid approach. Trial registry number: ClinicalTrials.gov NCT02673086.
NASA Astrophysics Data System (ADS)
Jerbi, Chahir; Fourno, André; Noetinger, Benoit; Delay, Frederick
2017-05-01
Single and multiphase flows in fractured porous media at the scale of natural reservoirs are often handled by resorting to homogenized models that avoid the heavy computations associated with a complete discretization of both fractures and matrix blocks. For example, the two overlapping continua (fractures and matrix) of a dual porosity system are coupled by way of fluid flux exchanges that deeply condition flow at the large scale. This characteristic is a key to realistic flow simulations, especially for multiphase flow, as capillary forces and contrasts of fluid mobility compete in the extraction of a fluid from a capacitive matrix then conveyed through the fractures. The exchange rate between fractures and matrix is conditioned by the so-called mean matrix block size, which can be viewed as the size of a single matrix block neighboring a single fracture within a mesh of a dual porosity model. We propose a new evaluation of this matrix block size based on the analysis of discrete fracture networks. The method rests on establishing, at the scale of a fractured block, an equivalence between the actual fracture network and a Warren-and-Root network made of only three regularly spaced fracture families parallel to the facets of the fractured block. The resulting matrix block sizes are then compared, via geometrical considerations and two-phase flow simulations, to the few other available methods. It is shown that the new method is stable in the sense that it provides accurate sizes irrespective of the type of fracture network investigated. The method also yields two-phase flow simulations from dual porosity models very close to reference simulations calculated in finely discretized networks. Finally, calculations of matrix block sizes by this new technique prove very fast, which opens the way to demanding applications such as preconditioning a dual porosity approach applied to regional fractured reservoirs.
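For a Warren-and-Root network with three orthogonal, regularly spaced fracture families, one common convention collapses the three mean spacings into a single equivalent block size via a harmonic-type mean. The sketch below uses that convention as an assumption; it is not the paper's DFN-based estimator.

```python
def mean_matrix_block_size(spacings):
    """Equivalent block size from the mean spacings of orthogonal fracture
    families (harmonic-mean convention; an assumption for illustration)."""
    return len(spacings) / sum(1.0 / s for s in spacings)

# spacings of the three fracture families, in metres (illustrative)
a = mean_matrix_block_size([1.2, 0.8, 2.0])
print(f"equivalent matrix block size: {a:.2f} m")
```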
Nonparametric weighted stochastic block models
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
2018-01-01
We present a Bayesian formulation of weighted stochastic block models that can be used to infer the large-scale modular structure of weighted networks, including their hierarchical organization. Our method is nonparametric, and thus does not require the prior knowledge of the number of groups or other dimensions of the model, which are instead inferred from data. We give a comprehensive treatment of different kinds of edge weights (i.e., continuous or discrete, signed or unsigned, bounded or unbounded), as well as arbitrary weight transformations, and describe an unsupervised model selection approach to choose the best network description. We illustrate the application of our method to a variety of empirical weighted networks, such as global migrations, voting patterns in congress, and neural connections in the human brain.
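The method is implemented in the graph-tool Python library; a minimal sketch of fitting a nested weighted SBM to a bundled weighted network follows. The choice of the discrete-geometric weight model matches the integer edge weights of this data set and is an assumption for the example.

```python
import graph_tool.all as gt

g = gt.collection.data["celegansneural"]        # weighted neural network
state = gt.minimize_nested_blockmodel_dl(
    g,
    state_args=dict(recs=[g.ep.value],          # edge weights as covariates
                    rec_types=["discrete-geometric"]))
print("description length:", state.entropy())   # model selection criterion
state.draw(output="celegans_nested.pdf")        # hierarchical partition
```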
A Model for Risk Analysis of Oil Tankers
NASA Astrophysics Data System (ADS)
Montewka, Jakub; Krata, Przemysław; Goerland, Floris; Kujala, Pentti
2010-01-01
The paper presents a model for risk analysis of marine traffic, with emphasis on the two most common types of marine accidents: collision and grounding. The focus is on oil tankers, as these pose the highest environmental risk. A case study in selected areas of the Gulf of Finland in ice-free conditions is presented. The model utilizes a well-founded formula for risk calculation, which combines the probability of an unwanted event with its consequences. The model is thus regarded as a block-type model, consisting of blocks for estimating the probabilities of collision and grounding, respectively, as well as blocks for modelling the consequences of an accident. The probability of a collision is assessed by means of a Minimum Distance To Collision (MDTC) based model. The model defines the collision zone in a novel way, using a mathematical ship motion model, and recognizes traffic flow as a non-homogeneous process. The presented calculations address the waterway crossing between Helsinki and Tallinn, where dense cross traffic is observed during certain hours. For the assessment of grounding probability, a new approach is proposed, utilizing a newly developed model in which spatial interactions between objects in different locations are recognized. A ship under way and navigational obstructions may be perceived as interacting objects, and their repulsion may be modelled by a deterministic formulation. The risk due to tankers running aground addresses an approach fairway to an oil terminal in Sköldvik, near Helsinki. The consequences of an accident are expressed in monetary terms and concern the costs of an oil spill, based on statistics of compensations claimed from the International Oil Pollution Compensation Funds (IOPC Funds) by involved parties.
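At its core, the block structure reduces to combining per-accident probabilities with monetized consequences. A minimal sketch follows; all numbers are illustrative placeholders, not the paper's estimates.

```python
# risk = sum over accident types of P(event) x expected consequence
p_collision, p_grounding = 1.2e-4, 0.8e-4     # illustrative annual probabilities
cost_collision, cost_grounding = 40e6, 25e6   # EUR, illustrative spill costs

expected_annual_loss = (p_collision * cost_collision
                        + p_grounding * cost_grounding)
print(f"expected annual loss: EUR {expected_annual_loss:,.0f}")
```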
3D Printed Block Copolymer Nanostructures
ERIC Educational Resources Information Center
Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.
2015-01-01
The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…
Lexical Retrieval is not by Competition: Evidence from the Blocked Naming Paradigm
Navarrete, Eduardo; Del Prato, Paul; Peressotti, Francesca; Mahon, Bradford Z.
2014-01-01
A central issue in research on speech production is whether or not the retrieval of words from the mental lexicon is a competitive process. An important experimental paradigm to study the dynamics of lexical retrieval is the blocked naming paradigm, in which participants name pictures of objects that are grouped by semantic category (‘homogenous’ or ‘related’ blocks) or not grouped by semantic category (‘heterogeneous’ or ‘unrelated’ blocks). Typically, pictures are repeated multiple times (or cycles) within both related and unrelated blocks. It is known that participants are slower in related than in unrelated blocks when the data are collapsed over all within-block repetitions. This semantic interference effect, as observed in the blocked naming task, is the strongest empirical evidence for the hypothesis of lexical selection by competition. Here we show, contrary to the accepted view, that the default polarity of semantic context effects in the blocked naming paradigm is facilitation, rather than interference. In a series of experiments we find that interference arises only when items repeat within a block, and only because of that repetition: What looks to be ‘semantic interference’ in the blocked naming paradigm is actually less repetition priming in related compared to unrelated blocks. These data undermine the theory of lexical selection by competition and indicate a model in which the most highly activated word is retrieved, regardless of the activation levels of nontarget words. We conclude that the theory of lexical selection by competition, and by extension the important psycholinguistic models based on that assumption, are no longer viable, and frame a new way to approach the question of how words are retrieved in spoken language production. PMID:25284954
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahalik, Jyoti P.; Dugger, Jason W.; Sides, Scott W.
2018-04-10
Mixtures of block copolymers and nanoparticles (block copolymer nanocomposites) are known to microphase separate into a plethora of microstructures, depending on the composition, length scale and nature of interactions among their different constituents. Theoretical and experimental works on this class of nanocomposites have already highlighted intricate relations among chemical details of the polymers, nanoparticles, and various microstructures. Confining these nanocomposites in thin films yields an even larger array of structures, which are not normally observed in the bulk. In contrast to the bulk, exploring various microstructures in thin films by the experimental route remains a challenging task. Here, we construct a model for thin films of lamellar-forming diblock copolymers containing spherical nanoparticles based on a hybrid particle-field approach. The model is benchmarked by comparison with the depth profiles obtained from neutron reflectivity experiments for symmetric poly(deuterated styrene-b-n butyl methacrylate) copolymers blended with spherical magnetite nanoparticles covered with a hydrogenated poly(styrene) corona. We show that the model based on a hybrid particle-field approach provides details of the underlying microphase separation in the presence of the nanoparticles through a direct comparison to the neutron reflectivity data. This work benchmarks the application of the hybrid particle-field model to extract the interaction parameters for exploring different microstructures in thin films containing block copolymers and nanoparticles.
Data update in a land information network
NASA Astrophysics Data System (ADS)
Mullin, Robin C.
1988-01-01
The on-going update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and of the data update process has been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating: bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks; when an update occurs, only the affected blocks need to be transferred. An incremental update approach records each feature that is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred; it can be placed in a file to warn users that, if they are active in an area containing markers, updated data are available. The four approaches have been tested using a cadastral data set.
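The four update styles can be contrasted in a few lines of code. The feature IDs, block names, and dirty-block flags below are illustrative, not drawn from the ARC/INFO prototype.

```python
master = {"A", "B", "C", "D"}
local = {"A", "B", "C"}

# bulk: ship the whole file again
local = set(master)

# incremental: transmit only the features added or deleted since last sync
adds, dels = {"E"}, {"B"}
local = (local - dels) | adds

# block: partition the data set and resend only the blocks that changed
master_blocks = {"b1": {"A", "E"}, "b2": {"C", "D"}}
local_blocks = {"b1": {"A", "B"}, "b2": {"C", "D"}}
for name in ("b1",):                 # only b1 is flagged as changed
    local_blocks[name] = set(master_blocks[name])

# alert: a marker warns the user that fresher data exists for an area
alerts = {"b1"}
active_block = "b1"
if active_block in alerts:
    print("updated data available for", active_block)
```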
Generation of 3D synthetic breast tissue
NASA Astrophysics Data System (ADS)
Elangovan, Premkumar; Dance, David R.; Young, Kenneth C.; Wells, Kevin
2016-03-01
Virtual clinical trials are an emergent approach for the rapid evaluation and comparison of various breast imaging technologies and techniques using computer-based modeling tools. A fundamental requirement of this approach for mammography is the use of realistic-looking breast anatomy in the studies to produce clinically relevant results. In this work, a biologically inspired approach has been used to simulate realistic synthetic breast phantom blocks for use in virtual clinical trials. A variety of high- and low-frequency features (including Cooper's ligaments, blood vessels and glandular tissue) have been extracted from clinical digital breast tomosynthesis images and used to simulate synthetic breast blocks. The appearance of the phantom blocks was validated by presenting a selection of simulated 2D and DBT images interleaved with real images to a team of experienced readers for rating using an ROC paradigm. The average areas under the curve for 2D and DBT images were 0.53 ± 0.04 and 0.55 ± 0.07 respectively; errors are the standard errors of the mean. The values indicate that the observers had difficulty in differentiating the real images from simulated images. The statistical properties of simulated images of the phantom blocks were evaluated by means of power spectrum analysis. The power spectrum curves for real and simulated images closely match and overlap, indicating good agreement.
Ultrasound description of Pecs II (modified Pecs I): a novel approach to breast surgery.
Blanco, R; Fajardo, M; Parras Maldonado, T
2012-11-01
The Pecs block (pectoral nerves block) is an easy and reliable superficial block inspired by the infraclavicular block approach and the transversus abdominis plane blocks. Once the pectoralis muscles are located under the clavicle, the space between the two muscles is dissected to reach the lateral pectoral and the medial pectoral nerves. The main indications are breast expanders and subpectoral prostheses, where the distension of these muscles is extremely painful. A second version of the Pecs block is described, called the "modified Pecs block" or Pecs block type II. This novel approach aims to block at least the pectoral nerves, the intercostobrachial nerve, intercostal nerves III-VI, and the long thoracic nerve. These nerves need to be blocked to provide complete analgesia during breast surgery, and the block serves as an alternative or a rescue block if paravertebral blocks and thoracic epidurals fail. This block has been used in our unit in the past year for the Pecs I indications described and, in addition, for tumorectomies, wide excisions, and axillary clearances. The ultrasound sequence to perform this block is shown, together with plain X-ray dye images and gadolinium MRI images to understand the spread and pathways that can explain the benefit of this novel approach.
Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures
Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean; Cheung, Kenneth C
2017-03-01
We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures.
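The synthesis step, in which calibrated per-block models are composed into a global property, can be caricatured with series/parallel spring composition. The stiffness values and topology below are illustrative, not the wing's calibrated model.

```python
import numpy as np

def series(ks):    # blocks stacked along the load path
    return 1.0 / np.sum(1.0 / np.asarray(ks, dtype=float))

def parallel(ks):  # blocks side by side sharing the load
    return float(np.sum(ks))

stiff, soft = 50.0, 5.0                        # per-block stiffnesses, arbitrary units
column = series([stiff, soft, stiff, soft])    # sub-part ordering tunes the column
wing_section = parallel([column] * 8)          # eight columns acting together
print(f"column: {column:.2f}, section: {wing_section:.2f}")
```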
Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach
Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco
2015-01-01
Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose of detecting recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected-modules approach and was then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel led to the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) inconsistencies present in a metabolic database can propagate to the reconstructed models; 3) there are reactions not manifested as blocked that are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.
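A lumped fracture/matrix exchange of this kind can be sketched as a single nonlinear source term stepped in time. The functional form and all coefficients below are hypothetical stand-ins for the paper's interflow equation, chosen only to show the structure.

```python
import numpy as np

p_f, p_m = 2.0e5, 1.0e5          # Pa, fracture and matrix-block pressures
c, n_exp, dt = 1e-4, 1.5, 1e-3   # illustrative coefficient, exponent, time step

def dpm_dt(p_frac, p_mat):
    """Imbibition-driven equilibration rate, nonlinear in the pressure gap."""
    dp = p_frac - p_mat
    return c * np.sign(dp) * abs(dp) ** n_exp

for _ in range(5000):            # explicit Euler stepping of one gridblock
    p_m += dt * dpm_dt(p_f, p_m)
print(f"matrix pressure after 5 time units: {p_m:.0f} Pa")
```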
SOSlope: a new slope stability model for vegetated hillslopes
NASA Astrophysics Data System (ADS)
Cohen, D.; Schwarz, M.
2016-12-01
Roots increase soil strength, but the forces mobilized by roots depend on relative soil displacement, an effect not included in existing slope stability models. Here we present a new numerical model of shallow landslides for vegetated hillslopes that uses a strain-step loading approach for force redistribution within a soil mass, including the effects of root strength in both tension and compression. The hillslope is discretized into a two-dimensional array of blocks connected by bonds. During a rainfall event the blocks' mass increases and the soil shear strength decreases. At each time step, we compute a factor of safety for each block. If the factor of safety of one or more blocks is less than one, those blocks are moved in the direction of the local active force by a predefined amount and the factor of safety is recalculated for all blocks. Because of the relative motion between blocks that have moved and those that remain stationary, the mechanical bond forces between blocks, which depend on relative displacement, change, modifying the force balance. This relative motion triggers instantaneous force redistributions across the entire hillslope, similar to a self-organized critical system. Looping over blocks and moving those that are unstable is repeated until all blocks are stable and the system reaches a new equilibrium, or some blocks have failed, causing a landslide. Spatial heterogeneity of vegetation is included by computing the root density and distribution as a function of distance from trees. A simple subsurface hydrological model based on dual permeability concepts is used to compute the temporal evolution of water content, pore-water pressure, suction stress, and soil shear strength. Simulations for a conceptual slope indicate that forces mobilized in tension and compression both contribute to the stability of the slope. However, the maximum tensional and compressional forces imparted by roots do not contribute simultaneously to the stability of the soil mass, in contrast to what is commonly assumed in models. Simulations with different tree sizes (different magnitudes of root reinforcement) indicate that there is a threshold in tree spacing (or tree diameter) above (or below) which root density and root sizes no longer provide sufficient reinforcement to keep the slope stable during a rainfall event.
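A minimal one-dimensional caricature of the relaxation scheme: sweep over blocks, displace any with a factor of safety below one, shed load to neighbours, and repeat until equilibrium. All forces and increments are illustrative, not SOSlope's constitutive laws.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
resisting = 1.0 + 0.3 * rng.standard_normal(n)   # soil + root strength per block
driving = np.full(n, 0.9)                        # destabilising load after rainfall

for sweep in range(200):
    unstable = np.where(resisting / driving < 1.0)[0]
    if unstable.size == 0:
        break                                    # new equilibrium reached
    for i in unstable:
        resisting[i] += 0.05                     # bond forces mobilised by the step
        for j in (i - 1, i + 1):                 # neighbours pick up shed load
            if 0 <= j < n:
                driving[j] += 0.01
print("blocks still unstable:", int((resisting / driving < 1.0).sum()))
```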
PERTS: A Prototyping Environment for Real-Time Systems
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.
1993-01-01
PERTS is a prototyping environment for real-time systems. It is being built incrementally and will contain basic building blocks of operating systems for time-critical applications; tools and performance models for the analysis, evaluation, and measurement of real-time systems; and a simulation/emulation environment. It is designed to support the use and evaluation of new design approaches, experimentation with alternative system building blocks, and the analysis and performance profiling of prototype real-time systems.
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
obtained by using the block Gauss-Seidel iterative method. To show the convergence of the iterative method, we define the error between two...models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss-Seidel iterative method depends on the...variational approach. A block Gauss-Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where
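For reference, block Gauss-Seidel on a 2x2 block linear system, using the error between two consecutive iterates as the stopping test, as the excerpt describes. The blocks here are arbitrary toy matrices, not the cavity operators.

```python
import numpy as np

A11 = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonal blocks (toy, well-conditioned)
A22 = np.array([[5.0, 2.0], [2.0, 4.0]])
A12 = 0.5 * np.ones((2, 2))                # coupling blocks
A21 = A12.T
b1, b2 = np.ones(2), np.zeros(2)

x1, x2 = np.zeros(2), np.zeros(2)
for k in range(100):
    x1_new = np.linalg.solve(A11, b1 - A12 @ x2)       # solve block 1 with x2 frozen
    x2_new = np.linalg.solve(A22, b2 - A21 @ x1_new)   # then block 2 with fresh x1
    err = max(np.linalg.norm(x1_new - x1), np.linalg.norm(x2_new - x2))
    x1, x2 = x1_new, x2_new
    if err < 1e-10:
        break
print(f"converged in {k + 1} iterations")
```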
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Li, G., E-mail: gli@clemson.edu
2014-08-28
An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. A finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach, and the associated computational cost as a function of system degrees of freedom, are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-06-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits us to prove, given the correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since it is the first technique that permits mathematically verifying, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.
Schnepper, Gregory D; Kightlinger, Benjamin I; Jiang, Yunyun; Wolf, Bethany J; Bolin, Eric D; Wilson, Sylvia H
2017-09-23
Examination of the effectiveness of perineural dexamethasone administered in very low and low doses on ropivacaine brachial plexus block duration. Retrospective evaluation of brachial plexus block duration in a large cohort of patients receiving peripheral nerve blocks with and without perineural dexamethasone in a prospectively collected quality assurance database. A single academic medical center. A total of 1,942 brachial plexus blocks placed over a 16-month period were reviewed. Demographics, nerve block location, and perineural dexamethasone utilization and dose were examined in relation to block duration. Perineural dexamethasone was examined as none (0 mg), very low dose (2 mg or less), and low dose (greater than 2 mg to 4 mg). Continuous catheter techniques, local anesthetics other than ropivacaine, and block locations with fewer than 15 subjects were excluded. Associations between block duration and predictors of interest were examined using multivariable regression models. A subgroup analysis of the impact of receiving dexamethasone on block duration within each block type was also conducted using a univariate linear regression approach. A total of 1,027 subjects were evaluated. More than 90% of brachial plexus blocks contained perineural dexamethasone (≤4 mg), with a median dose of 2 mg. Increased block duration was associated with receiving any dose of perineural dexamethasone (P < 0.0001), female gender (P = 0.022), increased age (P = 0.048), and increased local anesthetic dose (P = 0.01). In a multivariable model, block duration did not differ with very low- or low-dose perineural dexamethasone after controlling for other factors (P = 0.420). Perineural dexamethasone prolonged block duration compared with ropivacaine alone; however, duration was not greater with low-dose compared with very low-dose perineural dexamethasone.
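Analyses of this type can be reproduced with a standard multivariable regression. A hedged sketch with statsmodels follows; the file name and column names are hypothetical, not the study's database schema.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical export of the QA database, one row per block
df = pd.read_csv("blocks.csv")

# block duration regressed on dexamethasone group and the other predictors
model = smf.ols(
    "duration_h ~ C(dex_group) + C(sex) + age + la_dose_mg + C(block_site)",
    data=df).fit()
print(model.summary())
```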
NASA Astrophysics Data System (ADS)
Richling, Andy; Rust, Henning W.; Bissolli, Peter; Ulbrich, Uwe
2017-04-01
Atmospheric blocking plays a crucial role in climate variability in the mid-latitudes. Meteorological extremes like heatwaves, cold spells and droughts in particular are often related to persistent and stationary blocking events. For climate monitoring it is important to identify and characterise such blocking events as well as to analyse the relationship between blocking and meteorological extremes in a quantitative way. In this study we identify atmospheric blocking events and analyse their influence on temperature and precipitation extremes with statistical models. For the detection of atmospheric blocking events, we apply modified 2-dimensional versions of the commonly used blocking indices suggested by Tibaldi and Molteni (1990) and Masato et al. (2013) to daily fields of 500 hPa geopotential heights from the ERA-Interim reanalysis dataset. The result is a list of blocking events, with a multidimensional index characterising area, intensity, location and duration, and maps of these parameters, which are intended to be used operationally for regular climate diagnostics at the German Meteorological Service. In addition, relationships between grid-point-based blocking frequency, intensity and location parameters and the number of daily temperature/precipitation extremes based on the E-OBS gridded dataset are investigated using general linear models on a monthly time scale. The number of counts as well as probabilities of occurrence of daily extremes within a certain calendar month are analysed in this framework. References: G. Masato, B. J. Hoskins, and T. Woollings. Winter and Summer Northern Hemisphere Blocking in CMIP5 Models. J. Climate, 26:7044-7059, 2013a. doi:10.1175/JCLI-D-12-00466.1. G. Masato, B. J. Hoskins, and T. Woollings. Wave-Breaking Characteristics of Northern Hemisphere Winter Blocking: A Two-Dimensional Approach. J. Climate, 26:4535-4549, 2013b. doi:10.1175/JCLI-D-12-00240.1. S. Tibaldi and F. Molteni. On the operational predictability of blocking. Tellus, 42A:343-365, 1990. doi:10.1034/j.1600-0870.1990.t01-2-00003.x.
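The core of a Tibaldi-Molteni-type test is a meridional reversal of the 500 hPa geopotential height gradient. The one-longitude sketch below uses the thresholds of the common 1-D formulation; the study itself applies modified 2-dimensional versions, and the synthetic profile is illustrative.

```python
import numpy as np

def tm_blocked(z500, lats, lat0=60.0, lat_s=40.0, lat_n=80.0):
    """Blocking test at one longitude: reversed southern gradient (GHGS > 0)
    and a strongly negative northern gradient (GHGN < -10 m/deg)."""
    iz = lambda lat: int(np.argmin(np.abs(np.asarray(lats) - lat)))
    ghgs = (z500[iz(lat0)] - z500[iz(lat_s)]) / (lat0 - lat_s)
    ghgn = (z500[iz(lat_n)] - z500[iz(lat0)]) / (lat_n - lat0)
    return ghgs > 0.0 and ghgn < -10.0

lats = np.arange(30.0, 87.5, 2.5)
z500 = 5500.0 + 250.0 * np.exp(-((lats - 60.0) / 8.0) ** 2)  # synthetic ridge
print(tm_blocked(z500, lats))  # True: the ridge reverses the gradient
```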
NASA Astrophysics Data System (ADS)
Erlyana, Yana; Hartono, Henny
2017-12-01
The advancement of technology has had a huge impact on the world of commerce, especially in the marketplace, which has shifted from brick-and-mortar to digital/online channels. Grasping the opportunity, ABC formed a joint venture with DEF to create a new online venture, XYZ Online Shop, an e-commerce website serving a broad range of segments. The objective of this research is to analyze the business model of XYZ Online Shop using the Business Model Canvas framework and SWOT analysis. The results show that XYZ Online Shop excels in the customer relationships block and still needs to improve the key partners and key activities blocks. The Business Model Canvas, together with the SWOT analysis, describes how XYZ Online Shop creates, delivers, and captures value based on its internal and external environments.
Computer model of cardiovascular control system responses to exercise
NASA Technical Reports Server (NTRS)
Croston, R. C.; Rummel, J. A.; Kay, F. J.
1973-01-01
Systems analysis, mathematical modeling, and computer simulation techniques are applied to the cardiovascular system in order to simulate its dynamic responses to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed, together with model test results.
NASA Astrophysics Data System (ADS)
Hedman, Mojdeh Khorsand
After a major disturbance, the power system response is highly dependent on protection schemes and system dynamics. Improving power systems situational awareness requires proper and simultaneous modeling of both protection schemes and dynamic characteristics in power systems analysis tools. Historical information and ex-post analysis of blackouts reaffirm the critical role of protective devices in cascading events, thereby confirming the necessity to represent protective functions in transient stability studies. This dissertation is aimed at studying the importance of representing protective relays in power system dynamic studies. Although modeling all of the protective relays within transient stability studies may result in a better estimation of system behavior, representing, updating, and maintaining the protection system data becomes an insurmountable task. Inappropriate or outdated representation of the relays may result in incorrect assessment of the system behavior. This dissertation presents a systematic method to determine essential relays to be modeled in transient stability studies. The desired approach should identify protective relays that are critical for various operating conditions and contingencies. The results of the transient stability studies confirm that modeling only the identified critical protective relays is sufficient to capture system behavior for various operating conditions and precludes the need to model all of the protective relays. Moreover, this dissertation proposes a method that can be implemented to determine the appropriate location of out-of-step blocking relays. During unstable power swings, a generator or group of generators may accelerate or decelerate leading to voltage depression at the electrical center along with generator tripping. This voltage depression may cause protective relay mis-operation and unintentional separation of the system. In order to avoid unintentional islanding, the potentially mis-operating relays should be blocked from tripping with the use of out-of-step blocking schemes. Blocking these mis-operating relays, combined with an appropriate islanding scheme, help avoid a system wide collapse. The proposed method is tested on data from the Western Electricity Coordinating Council. A triple line outage of the California-Oregon Intertie is studied. The results show that the proposed method is able to successfully identify proper locations of out-of-step blocking scheme.
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where, within the considered error model, we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
CD47-blocking immunotherapies stimulate macrophage-mediated destruction of small-cell lung cancer
Weiskopf, Kipp; Jahchan, Nadine S.; Schnorr, Peter J.; Ring, Aaron M.; Maute, Roy L.; Volkmer, Anne K.; Volkmer, Jens-Peter; Liu, Jie; Lim, Jing Shan; Yang, Dian; Seitz, Garrett; Nguyen, Thuyen; Wu, Di; Guerston, Heather; Trapani, Francesca; George, Julie; Poirier, John T.; Gardner, Eric E.; Miles, Linde A.; de Stanchina, Elisa; Lofgren, Shane M.; Vogel, Hannes; Winslow, Monte M.; Dive, Caroline; Thomas, Roman K.; Rudin, Charles M.; van de Rijn, Matt; Majeti, Ravindra; Garcia, K. Christopher; Weissman, Irving L.
2016-01-01
Small-cell lung cancer (SCLC) is a highly aggressive subtype of lung cancer with limited treatment options. CD47 is a cell-surface molecule that promotes immune evasion by engaging signal-regulatory protein alpha (SIRPα), which serves as an inhibitory receptor on macrophages. Here, we found that CD47 is highly expressed on the surface of human SCLC cells; therefore, we investigated CD47-blocking immunotherapies as a potential approach for SCLC treatment. Disruption of the interaction of CD47 with SIRPα using anti-CD47 antibodies induced macrophage-mediated phagocytosis of human SCLC patient cells in culture. In a murine model, administration of CD47-blocking antibodies or targeted inactivation of the Cd47 gene markedly inhibited SCLC tumor growth. Furthermore, using comprehensive antibody arrays, we identified several possible therapeutic targets on the surface of SCLC cells. Antibodies to these targets, including CD56/neural cell adhesion molecule (NCAM), promoted phagocytosis in human SCLC cell lines that was enhanced when combined with CD47-blocking therapies. In light of recent clinical trials for CD47-blocking therapies in cancer treatment, these findings identify disruption of the CD47/SIRPα axis as a potential immunotherapeutic strategy for SCLC. This approach could enable personalized immunotherapeutic regimens in patients with SCLC and other cancers. PMID:27294525
Kumar, Abhishek; Clement, Shibu; Agrawal, V P
2010-07-15
An attempt is made to address a few ecological and environmental issues by developing different structural models of an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage and retrieval, and commercial off-the-shelf purchase of different subsystems. This is achieved by developing a graph-theoretic model, matrix models, and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree, and block diagram methods. Storage and retrieval are done using matrix models. The methodology is illustrated with the help of an example, and benefits to electroplaters/end users are identified.
Toolan, Daniel T W; Adlington, Kevin; Isakova, Anna; Kalamiotis, Alexis; Mokarian-Tabari, Parvaneh; Dimitrakis, Georgios; Dodds, Christopher; Arnold, Thomas; Terrill, Nick J; Bras, Wim; Hermida Merino, Daniel; Topham, Paul D; Irvine, Derek J; Howse, Jonathan R
2017-08-09
Microwave annealing has emerged as an alternative to traditional thermal annealing approaches for optimising block copolymer self-assembly. A novel sample environment enabling small angle X-ray scattering to be performed in situ during microwave annealing is demonstrated, which has enabled, for the first time, the direct study of the effects of microwave annealing upon the self-assembly behavior of a model, commercial triblock copolymer system [polystyrene-block-poly(ethylene-co-butylene)-block-polystyrene]. Results show that the block copolymer is a poor microwave absorber, resulting in no change in the block copolymer morphology upon application of microwave energy. The block copolymer species may only indirectly interact with the microwave energy when a small molecule microwave-interactive species [diethylene glycol dibenzoate (DEGDB)] is incorporated directly into the polymer matrix. Then significant morphological development is observed at DEGDB loadings ≥6 wt%. Through spatial localisation of the microwave-interactive species, we demonstrate targeted annealing of specific regions of a multi-component system, opening routes for the development of "smart" manufacturing methodologies.
NASA Astrophysics Data System (ADS)
Ito, T.; Mora-Páez, H.; Peláez-Gaviria, J. R.; Kimura, H.; Sagiya, T.
2017-12-01
Introduction: The Ecuador-Colombia trench is located at the boundary between the South American, Nazca, and Caribbean plates. The region is very complex, with subduction of the Caribbean and Nazca plates and collision between Panama and the northern part of the Andes. Previous large earthquakes occurred along the Nazca subduction boundary, such as in 1906 (M8.8) and 1979 (M8.2), and earthquakes have also occurred inland. It is therefore important to evaluate earthquake potential in preparation for possible major damage from a large earthquake in the near future. GNSS observation: In the last decade, a GNSS network was established in Colombia. This network, called GEORED, is operated by the Servicio Geológico Colombiano for research into crustal deformation. GEORED consisted of 60 continuous GNSS sites as of 2017 (Mora et al., 2017). The sampling interval of most GNSS sites is 30 seconds. The data were processed by precise point positioning (PPP) using the GIPSY-OASIS II software. GEORED thus provides a detailed crustal deformation map for the whole of Colombia. In addition, we use data from 100 GNSS sites in the Ecuador-Peru region (Nocquet et al. 2014). Method: We developed a crustal block movement model based on the crustal deformation derived from GNSS observation. Our model considers block motion, described by pole location and angular velocity, and interplate coupling between block boundaries, including subduction between the South American and Nazca plates. The estimation of crustal block motion and interplate coupling coefficients is based on an MCMC method, and each estimated parameter is obtained as a probability density function (PDF). Result: We tested 11 crustal block configurations based on geological data, such as active fault traces at the surface. The optimal number of crustal blocks, based on geological and geodetic data and selected using AIC, is 11. Using the optimal block motion model, we estimate interplate coupling along the plate interface and rigid block motion, which allows us to evaluate the contributions of elastic deformation and rigid motion. Weak plate coupling was found north of 3 degrees latitude. Most of the crustal deformation is explained by rigid block motion.
Dedkov, V S
2009-01-01
The specificity of the DNA methyltransferase M.Bsc4I was determined in cell lysate of Bacillus schlegelii 4. For this purpose, we used the methylation sensitivity of restriction endonucleases as well as modeling of methylation. The modeling consisted of editing DNA sequences by replacing methylated bases and their complementary bases. Substrate DNAs processed by M.Bsc4I were also used for studying the sensitivity of several restriction endonucleases to methylation. It was thus shown that M.Bsc4I methylates 5'-Cm4CNNNNNNNGG-3' and that overlapping dcm methylation blocks its activity. The proposed approach may prove sufficiently universal and simple for determining the specificity of DNA methyltransferases.
Nonparametric Bayesian inference of the microcanonical stochastic block model
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
2017-01-01
A principled approach to characterize the hidden modular structure of networks is to formulate generative models and then infer their parameters from data. When the desired structure is composed of modules or "communities," a suitable choice for this task is the stochastic block model (SBM), where nodes are divided into groups, and the placement of edges is conditioned on the group memberships. Here, we present a nonparametric Bayesian method to infer the modular structure of empirical networks, including the number of modules and their hierarchical organization. We focus on a microcanonical variant of the SBM, where the structure is imposed via hard constraints, i.e., the generated networks are not allowed to violate the patterns imposed by the model. We show how this simple model variation allows simultaneously for two important improvements over more traditional inference approaches: (1) deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, which not only remove limitations that seriously degrade the inference on large networks but also reveal structures at multiple scales; (2) a very efficient inference algorithm that scales well not only for networks with a large number of nodes and edges but also with an unlimited number of modules. We show also how this approach can be used to sample modular hierarchies from the posterior distribution, as well as to perform model selection. We discuss and analyze the differences between sampling from the posterior and simply finding the single parameter estimate that maximizes it. Furthermore, we expose a direct equivalence between our microcanonical approach and alternative derivations based on the canonical SBM.
Synergistic Anti-arrhythmic Effects in Human Atria with Combined Use of Sodium Blockers and Acacetin
Ni, Haibo; Whittaker, Dominic G.; Wang, Wei; Giles, Wayne R.; Narayan, Sanjiv M.; Zhang, Henggui
2017-01-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. Developing effective and safe anti-AF drugs remains an unmet challenge. Simultaneous block of both the atrial-specific ultra-rapid delayed rectifier potassium (K+) current (IKur) and the Na+ current (INa) has been hypothesized to be anti-AF, without inducing significant QT prolongation and ventricular side effects. However, the antiarrhythmic advantage of simultaneously blocking these two channels vs. individual block in the setting of AF-induced electrical remodeling remains to be documented. Furthermore, many IKur blockers such as acacetin and AVE0118 partially inhibit other K+ currents in the atria. Whether this multi-K+-block produces greater anti-AF effects compared with selective IKur-block has not been fully understood. The aim of this study was to use computer models to (i) assess the impact of multi-K+-block as exhibited by many IKur blockers, and (ii) evaluate the antiarrhythmic effect of blocking IKur and INa, either alone or in combination, on atrial and ventricular electrical excitation and recovery in the setting of AF-induced electrical remodeling. Contemporary mathematical models of human atrial and ventricular cells were modified to incorporate dose-dependent actions of acacetin (a multichannel blocker primarily inhibiting IKur while less potently blocking Ito, IKr, and IKs). Rate- and atrial-selective inhibition of INa was also incorporated into the models. These single myocyte models were then incorporated into multicellular two-dimensional (2D) and three-dimensional (3D) anatomical models of the human atria. As expected, application of IKur blocker produced pronounced action potential duration (APD) prolongation in atrial myocytes. Furthermore, combined multiple K+-channel block that mimicked the effects of acacetin exhibited synergistic APD prolongations. Synergistic anti-AF effects following inhibition of INa and combined IKur/K+-channels were also observed. The attainable maximal AF-selectivity of INa inhibition was greatly augmented by blocking IKur or multiple K+-currents in the atrial myocytes. These enhanced anti-arrhythmic effects of combined block of Na+ and K+ channels were also seen in 2D and 3D simulations; specifically, there was an enhanced efficacy in terminating re-entrant excitation waves, yielding improved antiarrhythmic effects in the human atria as compared to a single-channel block. However, in the human ventricular myocytes and tissue, cellular repolarization and computed QT intervals were modestly affected in the presence of actions of acacetin and INa blockers (either alone or in combination). In conclusion, this study demonstrates synergistic antiarrhythmic benefits of combined block of IKur and INa, as well as those of INa and combined multi K+-current block of acacetin, without significant alterations of ventricular repolarization and QT intervals. This approach may be a valuable strategy for the treatment of AF. PMID:29218016
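Dose-dependent channel block of this kind is commonly imposed in cell models by scaling the channel conductance with a Hill-type pore-block factor. A minimal sketch follows; the IC50, Hill coefficient, and baseline conductance are illustrative, not the acacetin parameters used in the study.

```python
def blocked_conductance(g0, conc, ic50, hill=1.0):
    """Hill-type pore block: baseline conductance times the unblocked fraction."""
    return g0 / (1.0 + (conc / ic50) ** hill)

g_kur = 0.045                       # nS/pF, illustrative baseline IKur conductance
for conc in (0.32, 3.2, 32.0):      # umol/L drug concentrations, illustrative
    print(conc, round(blocked_conductance(g_kur, conc, ic50=3.2), 4))
```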
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
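The single-trial analysis rests on fitting a negative exponential performance curve; a minimal single-subject sketch with synthetic data (the paper's multilevel version would add participant-level random effects) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, gain, rate):
    """Negative exponential learning curve: rises toward an asymptote."""
    return asymptote - gain * np.exp(-rate * trial)

trials = np.arange(1, 81)                      # 80 pursuit rotor trials
perf = 20 - 15 * np.exp(-0.05 * trials)        # synthetic true curve
perf += np.random.default_rng(1).normal(0, 0.5, trials.size)  # noise

params, _ = curve_fit(neg_exp, trials, perf, p0=(20, 15, 0.05))
asymptote, gain, rate = params  # the learning parameters of interest
```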
NASA Astrophysics Data System (ADS)
Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael
2007-07-01
One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.
Upscaling of Hydraulic Conductivity using the Double Constraint Method
NASA Astrophysics Data System (ADS)
El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke
2013-04-01
The mathematics and modeling of flow through porous media plays an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured on the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity to a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify the coarse-scale conductivities and thus decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. The approach is exemplified for the Kleine Nete catchment, Belgium. As a result we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run costs less computation time and requires less memory. In addition, ranking of models was investigated.
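As a one-dimensional caricature of the upscaling target (not the DCM algorithm itself, which iterates between flux-constrained and head-constrained forward runs), the effective conductivity of a coarse block can be backed out of Darcy's law once the fine-scale model supplies the block-scale flux and head drop; all numbers below are hypothetical.

```python
def coarse_conductivity(q, h_in, h_out, length):
    """Back out an effective block conductivity from Darcy's law,
    K = q * L / (h_in - h_out), given the net flux through the block."""
    return q * length / (h_in - h_out)

# Hypothetical coarse block: the fine-scale model yields a net flux of
# 0.02 m/d through a 100 m block that drops 0.5 m of head.
K_eff = coarse_conductivity(q=0.02, h_in=10.0, h_out=9.5, length=100.0)
```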
De Lisi, Rosario; Milioto, Stefania; Muratore, Nicola
2009-01-01
The thermodynamics of conventional surfactants, block copolymers and their mixtures in water was described in the light of the enthalpy function. The two methodologies, i.e. the van't Hoff approach and isothermal calorimetry, used to determine the enthalpy of micellization of pure surfactants and block copolymers were described. The van't Hoff method was critically discussed. The aqueous copolymer+surfactant mixtures were analyzed by means of isothermal titration calorimetry and the enthalpy of transfer of the copolymer from water to the aqueous surfactant solutions. Thermodynamic models were presented to show the procedure for extracting straightforward molecular insights from the bulk properties. PMID:19742173
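For readers unfamiliar with the van't Hoff approach, the enthalpy follows from the slope of ln K against 1/T; a minimal sketch with invented equilibrium constants, under the usual assumption that the enthalpy is temperature-independent over the fit range (precisely the kind of assumption the review scrutinizes):

```python
import numpy as np

# van 't Hoff: ln K = -dH/(R T) + dS/R, so the slope of ln K vs 1/T
# equals -dH/R. The equilibrium constants below are hypothetical.
R = 8.314  # J/(mol K)
T = np.array([288.0, 298.0, 308.0, 318.0])   # temperatures, K
lnK = np.array([9.8, 10.4, 10.9, 11.3])      # invented ln K values

slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH = -R * slope  # van't Hoff enthalpy of micellization, J/mol
```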
Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A
2011-01-01
We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.
Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor
NASA Astrophysics Data System (ADS)
Pustovetov, M. Yu
2018-03-01
This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by special semiconductor converter circuitry.
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler, based on the PTD representation, allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
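For orientation, the index arithmetic behind a 1-D block-cyclic distribution (a sketch of the standard mapping, not PARADIGM's internal representation) is:

```python
def block_cyclic_map(i, b, P):
    """Map a global array index to (processor, local block, offset)
    under a 1-D block-cyclic distribution with block size b over P
    processors."""
    blk = i // b           # global block number
    proc = blk % P         # owning processor
    local_blk = blk // P   # block number on that processor
    offset = i % b         # position within the block
    return proc, local_blk, offset

# Block size 4 over 3 processors: global index 10 lies in global block 2,
# i.e., on processor 2, local block 0, offset 2.
assert block_cyclic_map(10, 4, 3) == (2, 0, 2)
```

A pure cyclic distribution is the special case b = 1, and a pure block distribution is b = ceil(N / P); handling all three through one symbolic formula is what the uniform framework buys.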
Complete-block scheduling for advanced pharmacy practice experiences.
Hatton, Randy C; Weitzel, Kristin W
2013-12-01
An innovative approach to meeting increased student demand for advanced pharmacy practice experiences (APPEs) is described, including lessons learned during a two-year pilot project. To achieve more efficient allocation of preceptor resources, the University of Florida College of Pharmacy (UFCOP) adopted a new APPE rotation model in which 20 pharmacy students per year complete all required and elective APPEs at one practice site, an affiliated academic medical center. Relative to the prevailing model of experiential training for Pharm.D. students, the "complete-block scheduling" model offers a number of potential benefits to students, preceptors, and the pharmacy school. In addition to potentially reduced student housing expenses and associated conveniences, complete-block scheduling may enable (1) more efficient use of teaching resources, (2) increased collaboration among preceptors, (3) greater continuity and standardization of educational experiences, and (4) enhanced opportunities for students to engage in longer and more complex research projects. The single-site APPE rotation model also can provide value to the training site by enabling the extension of clinical pharmacy services; for example, UFCOP students perform anticoagulation monitoring and discharge medication counseling at the host institution. Despite logistical and other challenges encountered during pilot testing of the new scheduling model, the program has been well received by students and preceptors alike. Complete-block APPE scheduling is a viable model for some health systems to consider as a means of streamlining experiential education practices and helping to ensure high-quality clinical rotations for Pharm.D. students.
Experimental design matters for statistical analysis: how to handle blocking.
Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian
2018-03-01
Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
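A minimal sketch of the contrast the authors draw, using synthetic randomized-complete-block data and the statsmodels formula API (variable names and effect sizes are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical randomized complete block design: 4 adjuvants x 6 blocks,
# with a random block effect on maize height.
adjuvants = np.tile(["A", "B", "C", "D"], 6)
blocks = np.repeat(np.arange(6), 4)
treatment_effect = pd.Series(adjuvants).map(
    {"A": 0, "B": 2, "C": 4, "D": 6}).to_numpy()
height = (100 + treatment_effect
          + rng.normal(0, 3, 6)[blocks]      # shared block effect
          + rng.normal(0, 1, 24))            # residual error
df = pd.DataFrame({"height": height, "adjuvant": adjuvants, "block": blocks})

# Suboptimal: ordinary ANOVA that ignores the blocking structure.
ols_fit = smf.ols("height ~ adjuvant", data=df).fit()

# Appropriate: linear mixed model with block as a random effect.
lmm_fit = smf.mixedlm("height ~ adjuvant", data=df, groups=df["block"]).fit()
print(lmm_fit.summary())
```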
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, David M.; Belev, Gueorgi; DeCrescenzo, Giovanni
2007-08-15
Blocking layers are used to reduce leakage current in amorphous selenium detectors. The effect of the thickness of the blocking layer on the presampling modulation transfer function (MTF) and on dark current was experimentally determined in prototype single-line CCD-based amorphous selenium (a-Se) x-ray detectors. The sampling pitch of the detectors evaluated was 25 μm and the blocking layer thicknesses varied from 1 to 51 μm. The blocking layers resided on the signal collection electrodes which, in this configuration, were used to collect electrons. The combined thickness of the blocking layer and a-Se bulk in each detector was ~200 μm. As expected, the dark current increased monotonically as the thickness of the blocking layer was decreased. It was found that if the blocking layer thickness was small compared to the sampling pitch, it caused a negligible reduction in MTF. However, the MTF was observed to decrease dramatically at spatial frequencies near the Nyquist frequency as the blocking layer thickness approached or exceeded the electrode sampling pitch. This observed reduction in MTF is shown to be consistent with predictions of an electrostatic model wherein the image charge from the a-Se is trapped at a characteristic depth within the blocking layer, generally near the interface between the blocking layer and the a-Se bulk.
NASA Astrophysics Data System (ADS)
Lee, K.; Buscheck, T. A.; Glascoe, L. G.; Gansemer, J.; Sun, Y.
2002-12-01
In support of the characterization of Yucca Mountain as a potential site for a geologic repository for high-level nuclear waste, the US Department of Energy conducted the Large Block Test (LBT) at nearby Fran Ridge. The LBT was conducted in an excavated 3 × 3 × 4.5 m block of partially saturated, fractured nonlithophysal Topopah Spring tuff, which is one of the host-rock units for the potential repository at Yucca Mountain. The LBT was one of a series of field-scale thermohydrologic tests conducted in the repository host-rock units. The LBT was heated by line heaters installed in five boreholes lying in a horizontal plane 2.75 m below the upper surface of the block. The field-scale thermal tests were designed to help investigators better understand the coupled thermohydrologic-mechanical-chemical processes that would occur in the host rock in response to the radioactive heat of decay from emplaced waste packages. The tests also provide data for the calibration and validation of numerical models used to analyze the thermohydrologic response of the near-field host rock and Engineered Barrier System (EBS). Using the NUFT code and the dual-permeability approach to representing fracture-matrix interaction, we simulated the thermohydrologic response of the block to a heating and cooling cycle. The primary goals of the analysis were to study the heat-flow mechanisms and water redistribution patterns in the boiling and sub-boiling zones, and to compare model results with measured temperature and liquid saturation data, and thereby evaluate two rock property data sets available for modeling thermohydrologic behavior in the rock. Model results were also used for model calibration and validation. We obtained a good to excellent match between model and observed temperatures, and found that the distinct dryout and condensation zones modeled above and below the heater level agreed fairly well with the liquid-saturation measurements. We identified the best-fit data set by using a statistical analysis to compare model and field temperatures, and found that heat flow in the block was dominated by conduction.
NASA Astrophysics Data System (ADS)
Chen, Y.; Gu, Y. J.; Hung, S. H.
2014-12-01
Based on finite-frequency theory and cross-correlation teleseismic relative traveltime data from the USArray, Canadian National Seismograph Network (CNSN) and Canadian Rockies and Alberta Network (CRANE), we present a new tomographic model of P-wave velocity perturbations for the lithosphere and upper mantle beneath the Cordillera-craton transition region in southwestern Canada. The inversion procedure properly accounts for the finite-volume sensitivities of measured traveltime residuals, and the resulting model resolves upper-mantle velocity heterogeneity beneath the study area better than earlier models based on classical ray theory. Our model reveals a lateral change of P velocities from -0.5% to 0.5% down to ~200 km depth in a 50-km-wide zone between the Alberta Basin and the foothills of the Rocky Mountains, which suggests a sharp structural gradient along the Cordillera deformation front. The stable cratonic lithosphere, delineated by positive P-velocity perturbations of 0.5% and greater, extends down to a maximum depth of ~180 km beneath the Archean Loverna Block (LB). In comparison, the mantle beneath the controversial Medicine Hat Block (MHB) exhibits significantly higher velocities in the uppermost mantle and a shallower (130-150 km deep) root, generally consistent with the average depth of the lithosphere-asthenosphere boundary beneath the southwestern Western Canada Sedimentary Basin (WCSB). The complex shape of the lithospheric velocities under the MHB may be evidence of extensive erosion or a partial detachment of the Precambrian lithospheric root. Furthermore, distinct high-velocity anomalies in the LB and MHB, which are separated by a 'normal' mantle block beneath the Vulcan structure (VS), suggest different Archean assembly and collision histories for these two tectonic blocks.
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K.
2018-05-01
Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, though the emphasis here is on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, which is a particular way to compute the dispersion coefficients utilizing dispersivity coefficients. Although this parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time or memory of the system; changes in time with periods larger than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires the solution of a steady-state advection-dispersion equation, and thus is computationally efficient, and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparison with other approaches such as moment analysis and the first-order perturbation method. We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient and then discuss the role of local dispersion and mass transfer limitations, i.e., the exchange of mass between the permeable matrix and the low-permeability inclusions. We illustrate the physical meaning of the method and show how the block longitudinal dispersivity approaches, under certain conditions, the Scheidegger limit at large Péclet numbers. Lastly, we discuss the potential and limitations of the method to accurately describe dispersion in solute transport applications in heterogeneous aquifers.
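For reference, the Scheidegger-type longitudinal coefficient under scrutiny combines a dispersivity-velocity product with a local (molecular) term; a trivial sketch with hypothetical values:

```python
def scheidegger_longitudinal(alpha_L, velocity, D_molecular):
    """Scheidegger-type parameterization of the longitudinal
    dispersion coefficient: D_L = alpha_L * |v| + D_m."""
    return alpha_L * abs(velocity) + D_molecular

# Hypothetical block: 0.5 m dispersivity, 1 m/d seepage velocity,
# and a small local-diffusion contribution.
D_L = scheidegger_longitudinal(alpha_L=0.5, velocity=1.0, D_molecular=1e-4)
```

The paper's question is when a single block-scale alpha_L of this form actually exists, i.e., when the block has relaxed to local equilibrium and its dispersive flux is genuinely Fickian.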
Toward best practices for developing regional connectivity maps.
Beier, Paul; Spencer, Wayne; Baldwin, Robert F; McRae, Brad H
2011-10-01
To conserve ecological connectivity (the ability to support animal movement, gene flow, range shifts, and other ecological and evolutionary processes that require large areas), conservation professionals need coarse-grained maps to serve as decision-support tools or vision statements and fine-grained maps to prescribe site-specific interventions. To date, research has focused primarily on fine-grained maps (linkage designs) covering small areas. In contrast, we devised 7 steps to coarsely map dozens to hundreds of linkages over a large area, such as a nation, province, or ecoregion. We provide recommendations on how to perform each step on the basis of our experiences with 6 projects: California Missing Linkages (2001), Arizona Wildlife Linkage Assessment (2006), California Essential Habitat Connectivity (2010), Two Countries, One Forest (northeastern United States and southeastern Canada) (2010), Washington State Connected Landscapes (2010), and the Bhutan Biological Corridor Complex (2010). The 2 most difficult steps are mapping natural landscape blocks (areas whose conservation value derives from the species and ecological processes within them) and determining which pairs of blocks can feasibly be connected in a way that promotes conservation. Decision rules for mapping natural landscape blocks and determining which pairs of blocks to connect must reflect not only technical criteria, but also the values and priorities of stakeholders. We recommend blocks be mapped on the basis of a combination of naturalness, protection status, linear barriers, and habitat quality for selected species. We describe manual and automated procedures to identify currently functioning or restorable linkages. Once pairs of blocks have been identified, linkage polygons can be mapped by least-cost modeling, other approaches from graph theory, or individual-based movement models. The approaches we outline make assumptions explicit, have outputs that can be improved as underlying data are improved, and help implementers focus strictly on ecological connectivity. ©2011 Society for Conservation Biology.
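As a toy illustration of the least-cost modeling step (the resistance values and grid are invented, and real analyses run on large rasters with specialized tools), one can treat the landscape as a weighted graph:

```python
import networkx as nx

# Hypothetical resistance raster (low values = easy animal movement).
resistance = [[1, 1, 5],
              [4, 2, 5],
              [1, 1, 1]]

G = nx.grid_2d_graph(3, 3)
for u, v in G.edges():
    # Cost of a step = mean resistance of the two cells crossed.
    G[u][v]["weight"] = (resistance[u[0]][u[1]]
                         + resistance[v[0]][v[1]]) / 2.0

# Least-cost path between two natural landscape blocks anchored at
# opposite corners of the raster.
path = nx.shortest_path(G, (0, 0), (2, 2), weight="weight")
cost = nx.shortest_path_length(G, (0, 0), (2, 2), weight="weight")
```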
Paci, M; Hyttinen, J; Rodriguez, B
2015-01-01
Background and Purpose Two new technologies are likely to revolutionize cardiac safety and drug development: in vitro experiments on human‐induced pluripotent stem cell‐derived cardiomyocytes (hiPSC‐CMs) and in silico human adult ventricular cardiomyocyte (hAdultV‐CM) models. Their combination was recently proposed as a potential replacement for the present hERG‐based QT study for pharmacological safety assessments. Here, we systematically compared in silico the effects of selective ionic current block on hiPSC‐CM and hAdultV‐CM action potentials (APs), to identify similarities/differences and to illustrate the potential of computational models as supportive tools for evaluating new in vitro technologies. Experimental Approach In silico AP models of ventricular‐like and atrial‐like hiPSC‐CMs and hAdultV‐CM were used to simulate the main effects of four degrees of block of the main cardiac transmembrane currents. Key Results Qualitatively, hiPSC‐CM and hAdultV‐CM APs showed similar responses to current block, consistent with results from experiments. However, quantitatively, hiPSC‐CMs were more sensitive to block of (i) L‐type Ca2+ currents due to the overexpression of the Na+/Ca2+ exchanger (leading to shorter APs) and (ii) the inward rectifier K+ current due to reduced repolarization reserve (inducing diastolic potential depolarization and repolarization failure). Conclusions and Implications In silico hiPSC‐CMs and hAdultV‐CMs exhibit a similar response to selective current blocks. However, overall hiPSC‐CMs show greater sensitivity to block, which may facilitate in vitro identification of drug‐induced effects. Extrapolation of drug effects from hiPSC‐CM to hAdultV‐CM and pro‐arrhythmic risk assessment can be facilitated by in silico predictions using biophysically‐based computational models. PMID:26276951
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. The philosophy of model based design is to solve a problem a step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real-world sensor data, and the output from the simulated digital control system can then be compared to that of the old analog control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.
Improved cost-effectiveness of the block co-polymer anneal process for DSA
NASA Astrophysics Data System (ADS)
Pathangi, Hari; Stokhof, Maarten; Knaepen, Werner; Vaid, Varun; Mallik, Arindam; Chan, Boon Teik; Vandenbroeck, Nadia; Maes, Jan Willem; Gronheid, Roel
2016-04-01
This manuscript first presents a cost model to compare the cost of ownership of DSA and SAQP for a typical front end of line (FEoL) line patterning exercise. Then, we proceed to a feasibility study of using a vertical furnace to batch anneal the block co-polymer for DSA applications. We show that the defect performance of such a batch anneal process is comparable to the process of record anneal methods. This helps in increasing the cost benefit for DSA compared to the conventional multiple patterning approaches.
A building block for hardware belief networks.
Behin-Aein, Behtash; Diep, Vinh; Datta, Supriyo
2016-07-21
Belief networks represent a powerful approach to problems involving probabilistic inference, but much of the work in this area is software-based, running on standard deterministic hardware built from transistors, which provide the gain and directionality needed to interconnect billions of devices into useful networks. This paper proposes a transistor-like device that could provide an analogous building block for probabilistic networks. We present two proof-of-concept examples of belief networks, one reciprocal and one non-reciprocal, implemented using the proposed device, which is simulated using experimentally benchmarked models.
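One common abstraction of such a probabilistic building block (a sketch, not the device physics of the paper) is a binary stochastic neuron whose output probability is a sigmoidal function of its input:

```python
import numpy as np

rng = np.random.default_rng(7)

def p_bit(input_current):
    """Binary stochastic unit: outputs +1/-1 with probability given by
    a sigmoid of the input, an abstraction of a 'probabilistic
    transistor' building block."""
    return 1 if rng.random() < 0.5 * (1 + np.tanh(input_current)) else -1

# With zero input the unit fluctuates randomly; a strong input pins it,
# providing the gain needed to chain units into larger networks.
free = [p_bit(0.0) for _ in range(1000)]   # ~50/50 mix of +1 and -1
pinned = [p_bit(3.0) for _ in range(1000)] # almost all +1
```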
Exchangeability, extreme returns and Value-at-Risk forecasts
NASA Astrophysics Data System (ADS)
Huang, Chun-Kai; North, Delia; Zewotir, Temesgen
2017-07-01
In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to that of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
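For contrast with the proposed exchangeable-sequence generalization, the classical i.i.d. block maxima benchmark fits a generalized extreme value (GEV) distribution to block maxima and reads risk off a high quantile; a minimal sketch on synthetic losses:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=2500)   # synthetic heavy-tailed losses

# Classical (i.i.d.) block maxima: split the series into blocks and fit
# a GEV distribution to the block maxima.
block = 25
maxima = (losses[: losses.size // block * block]
          .reshape(-1, block).max(axis=1))
shape, loc, scale = genextreme.fit(maxima)

# 99% quantile of the block-maximum distribution as a VaR-style measure.
var_99 = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
```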
2012-09-30
oscillation (SAO) and quasi-biennial oscillation (QBO) of stratospheric equatorial winds in long-term (10-year) nature runs. The ability of these new schemes...to generate and maintain tropical SAO and QBO circulations in Navy models for the first time is an important breakthrough, since these circulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peryshkin, A. Yu., E-mail: alexb700@yandex.ru; Makarov, P. V., E-mail: bacardi@ispms.ru; Eremin, M. O., E-mail: bacardi@ispms.ru
An evolutionary approach proposed in [1, 2], combining the achievements of the traditional macroscopic theory of solid mechanics and basic ideas of nonlinear dynamics, is applied in a numerical simulation of present-day tectonic plate motion and the seismic process in Central Asia. Relative values of the strength parameters of rigid blocks with respect to the soft zones were characterized by the δ parameter, which was varied in the numerical experiments within δ = 1.1-1.8 for different groups of the zonal-block divisibility. In general, the numerical simulations of tectonic block motion and the accompanying seismic process in the model geomedium indicate that the numerical solutions of the solid mechanics equations characterize its deformation as the typical behavior of a nonlinear dynamic system under conditions of self-organized criticality.
PERTS: A Prototyping Environment for Real-Time Systems
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.
1991-01-01
We discuss an ongoing project to build a Prototyping Environment for Real-Time Systems, called PERTS. PERTS is a unique prototyping environment in that it has (1) tools and performance models for the analysis and evaluation of real-time prototype systems, (2) building blocks for flexible real-time programs and the support system software, (3) basic building blocks of distributed and intelligent real time applications, and (4) an execution environment. PERTS will make the recent and future theoretical advances in real-time system design and engineering readily usable to practitioners. In particular, it will provide an environment for the use and evaluation of new design approaches, for experimentation with alternative system building blocks and for the analysis and performance profiling of prototype real-time systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margulis, Katherine; Zhang, Xiangyi; Joubert, Lydia-Marie
2017-10-27
Template-free fabrication of non-spherical polymeric nanoparticles is desirable for various applications, but has had limited success owing to the thermodynamic favorability of sphere formation. Herein we present a simple way to prepare cubic nanoparticles of block copolymers by self-assembly from aqueous solutions at room temperature. Nanocubes with edges of 40-200 nm are formed spontaneously on different surfaces upon water evaporation from micellar solutions of triblock copolymers containing a central poly(ethylene oxide) block and terminal trimethylene carbonate/dithiolane blocks. These polymers self-assemble into 28±5 nm micelles in water. Upon drying, micelle aggregation and a kinetically controlled crystallization of the central blocks evidently induce solid cubic particle formation. An approach for preserving the structures of these cubes in water by thiol- or photo-induced crosslinking was developed. In conclusion, the ability to solubilize a model hydrophobic drug, curcumin, was also explored.
Development of Continuum-Atomistic Approach for Modeling Metal Irradiation by Heavy Ions
NASA Astrophysics Data System (ADS)
Batgerel, Balt; Dimova, Stefka; Puzynin, Igor; Puzynina, Taisia; Hristov, Ivan; Hristova, Radoslava; Tukhliev, Zafar; Sharipov, Zarif
2018-02-01
Over the last several decades, active research has been carried out in the field of materials irradiation by high-energy heavy ions. The experiments in this area are labor-consuming and expensive. Therefore, the improvement of existing mathematical models and the development of new ones based on experimental data on the interaction of high-energy heavy ions with materials are of interest. Presently, two approaches are used for studying these processes: the thermal spike model and molecular dynamics methods. The combination of these two approaches - the continuum-atomistic model - gives the opportunity to investigate more thoroughly the processes of irradiation of materials by high-energy heavy ions. To solve the equations of the continuum-atomistic model, a software package was developed, and its molecular dynamics block was tested on the heterogeneous cluster HybriLIT.
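The thermal spike ingredient of such a continuum-atomistic model is often written as two coupled heat equations for the electron and lattice temperatures; a zero-dimensional sketch with purely illustrative coefficients and units:

```python
import numpy as np

# Two-temperature (thermal spike) caricature: electrons absorb the ion
# energy pulse and couple to the lattice through g * (Te - Tl).
Ce, Cl, g = 0.03, 2.5, 5.0   # illustrative heat capacities and coupling
Te = Tl = 300.0              # initial temperatures
dt = 1e-3

for step in range(20000):
    t = step * dt
    source = 1e4 * np.exp(-((t - 0.5) / 0.05) ** 2)  # ion energy pulse
    dTe = (-g * (Te - Tl) + source) / Ce             # electron subsystem
    dTl = g * (Te - Tl) / Cl                         # lattice subsystem
    Te += dt * dTe
    Tl += dt * dTl
```

In the full model the lattice temperature field would instead feed the molecular dynamics block, which resolves the atomistic response.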
NASA Astrophysics Data System (ADS)
Yang, X.; von der Kammer, F.; Wiesner, M.; Yang, Y.; Hofmann, T.
2016-12-01
Humic acid (HA) is widespread in the environment and may interfere with nanoparticle transport in porous media. Quantification of the HA's influence is challenging due to the heterogeneous nature of these organic compounds. Through a series of laboratory and modeling studies, we explored (1) the differential mechanisms operated by sediment- and solution-phase HA in controlling particle transport; (2) the interplay of HA with several important environmental factors including solution pH, ionic strength (IS), flow rate, organic and particle concentrations, and particle size; (3) modeling tools to quantify the influential mechanisms identified above. Study results suggest that site blocking is the main effect imposed by sediment-phase HA on nanoparticle transport, while competitive deposition (with nanoparticles) and continuous site blocking occur simultaneously for solution-phase HA. Solution pH and IS jointly control the HA's blocking efficiency by varying the adsorbed organic conformation. Conversely, the effect of the adsorbed organic concentration appeared to be insignificant. In addition to the chemical parameters, physical parameters like particle size and flow rate also affect the organic blockage: the blocking efficiency was stronger on larger particles than on smaller ones, and increasing the flow rate magnified the HA's blocking efficiency on larger particles but had insignificant impact on smaller ones. These mechanistic investigations were supported by a quantification approach and a mathematical model developed in these studies. The results can improve the understanding of particle mobility in heterogeneous natural porous media.
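The site-blocking effect identified here is commonly represented by a Langmuirian blocking function that throttles the attachment rate as deposition sites fill; a minimal sketch with invented parameter values:

```python
def attachment_rate(k_att0, s, s_max):
    """Langmuirian blocking: the particle deposition rate declines
    linearly as retained particles (or adsorbed HA) fill the finite
    pool of available surface sites."""
    return k_att0 * max(0.0, 1.0 - s / s_max)

# As surface coverage s approaches the site capacity s_max, further
# nanoparticle deposition is progressively blocked.
rates = [attachment_rate(k_att0=1.0, s=s, s_max=0.1)
         for s in (0.0, 0.05, 0.1)]   # -> [1.0, 0.5, 0.0]
```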
Ecological Risk Assessment of Chemicals Migrated from a Recycled Plastic Product
Roh, Ji-Yeon; Kim, Min-Hyuck; Kim, Woo Il; Kang, Young-Yeul; Shin, Sun Kyoung; Kim, Jong-Guk
2013-01-01
Objectives Potential environmental risks caused by chemicals that could be released from a recycled plastic product were assessed using a screening risk assessment procedure for chemicals in recycled products. Methods Plastic slope protection blocks manufactured from recycled plastics were chosen as model recycled products. Ecological risks caused by four model chemicals - di-(2-ethylhexyl) phthalate (DEHP), diisononyl phthalate (DINP), cadmium (Cd), and lead (Pb) - were assessed. Two exposure models were built for the soil below the block and a hypothetical stream receiving runoff water. Based on the predicted no-effect concentrations for the selected chemicals and exposure scenarios, the allowable leaching rates from and the allowable contents in the recycled plastic blocks were also derived. Results Environmental risks posed by slope protection blocks were much higher in the soil compartment than in the hypothetical stream. The allowable concentrations in leachate were 1.0×10⁻⁴, 1.2×10⁻⁵, 9.5×10⁻³, and 5.3×10⁻³ mg/L for DEHP, DINP, Cd, and Pb, respectively. The allowable contents in the recycled products were 5.2×10⁻³, 6.0×10⁻⁴, 5.0×10⁻¹, and 2.7×10⁻¹ mg/kg for DEHP, DINP, Cd, and Pb, respectively. Conclusions A systematic ecological risk assessment approach for slope protection blocks would be useful for regulatory decisions setting the allowable emission rates of chemical contaminants, although the method needs refinement. PMID:24303349
Molenaar, Heike; Boehm, Robert; Piepho, Hans-Peter
2018-01-01
Robust phenotypic data allow adequate statistical analysis and are crucial for any breeding purpose. Such data is obtained from experiments laid out to best control local variation. Additionally, experiments frequently involve two phases, each contributing environmental sources of variation. For example, in a former experiment we conducted to evaluate production related traits in Pelargonium zonale, there were two consecutive phases, each performed in a different greenhouse. Phase one involved the propagation of the breeding strains to obtain the stem cutting count, and phase two involved the assessment of root formation. The evaluation of the former study raised questions regarding options for improving the experimental layout: (i) Is there a disadvantage to using exactly the same design in both phases? (ii) Instead of generating a separate layout for each phase, can the design be optimized across both phases, such that the mean variance of a pair-wise treatment difference (MVD) can be decreased? To answer these questions, alternative approaches were explored to generate two-phase designs either in phase-wise order (Option 1) or across phases (Option 2). In Option 1 we considered the scenarios (i) using in both phases the same experimental design and (ii) randomizing each phase separately. In Option 2, we considered the scenarios (iii) generating a single design with eight replicates and splitting these among the two phases, (iv) separating the block structure across phases by dummy coding, and (v) design generation with optimal alignment of block units in the two phases. In both options, we considered the same or different block structures in each phase. The designs were evaluated by the MVD obtained by the intra-block analysis and the joint inter-block–intra-block analysis. The smallest MVD was most frequently obtained for designs generated across phases rather than for each phase separately, in particular when both phases of the design were separated with a single pseudo-level. The joint optimization ensured that treatment concurrences were equally balanced across pairs, one of the prerequisites for an efficient design. The proposed alternative approaches can be implemented with any model-based design packages with facilities to formulate linear models for treatment and block structures. PMID:29354145
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
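A minimal sketch of the particle-filter half of the scheme (a generic bootstrap filter on an invented scalar system, not the paper's H-block implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, weights, u, z, f, h, q_std, r_std):
    """One predict/update/resample cycle of a bootstrap particle filter,
    the state-estimation half of an implicit dual-control scheme."""
    # Predict: propagate each particle through the dynamics with noise.
    particles = f(particles, u) + rng.normal(0, q_std, particles.shape)
    # Update: reweight by the Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Hypothetical scalar system with uncertain state: x' = 0.9 x + u.
f = lambda x, u: 0.9 * x + u
h = lambda x: x
particles = rng.normal(0, 1, 500)
weights = np.full(500, 1 / 500)
particles, weights = particle_filter_step(
    particles, weights, u=0.1, z=0.3, f=f, h=h, q_std=0.05, r_std=0.2)
```

In the dual-control setting, the resulting particle cloud (e.g., over an unknown gain sign) is what the forward dynamic program probes and exploits when choosing the next input.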
Wang, Cynthia X; Utech, Stefanie; Gopez, Jeffrey D; Mabesoone, Mathijs F J; Hawker, Craig J; Klinger, Daniel
2016-07-06
Well-defined microgel particles were prepared by combining coacervate-driven cross-linking of ionic triblock copolymers with the ability to control particle size and encapsulate functional cargos inherent in microfluidic devices. In this approach, the efficient assembly of PEO-based triblock copolymers with oppositely charged end-blocks allows for bioinspired cross-linking under mild conditions in dispersed aqueous droplets. This strategy enables the integration of charged cargos into the coacervate domains (e.g., the loading of anionic model compounds through electrostatic association with cationic end-blocks). Distinct release profiles can be realized by systematically varying the chemical nature of the payload and the microgel dimensions. This mild and noncovalent assembly method represents a promising new approach to tunable microgels as scaffolds for colloidal biomaterials in therapeutics and regenerative medicine.
NASA Technical Reports Server (NTRS)
Fatig, Curtis; Ochs, William; Johns, Alan; Seaton, Bonita; Adams, Cynthia; Wasiak, Francis; Jones, Ronald; Jackson, Wallace
2012-01-01
The James Webb Space Telescope (JWST) Project has an extended integration and test (I&T) phase due to long procurement and development times of various components as well as recent launch delays. The JWST Ground Segment and Operations group has developed a roadmap of the various ground and flight elements and their use in the various JWST I&T test programs. The JWST Project's building block approach to the eventual operational systems, while not new, is complex and challenging; a large-scale mission like JWST involves international partners, many vendors across the United States, and competing needs for the same systems. One of the challenges is resource balancing, so that simulators and flight products for various elements congeal into integrated systems used for I&T and flight operations activities. This building block approach to an incremental buildup provides for early problem identification with simulators and exercises the flight operations systems, products, and interfaces during the JWST I&T test programs. The JWST Project has completed some early I&T with the simulators, engineering models and some components of the operational ground system. The JWST Project is testing the various flight units as they are delivered and will continue to do so for the entire flight and operational system. The JWST Project has already reaped, and will continue to reap, the value of the building block approach on the road to launch and flight operations.
Aggarwal, Arun K; Gupta, Rakesh; Das, Dhritiman; Dhakar, Anar S; Sharma, Gourav; Anand, Himani; Kaur, Kamalpreet; Sheoran, Kiran; Dalpath, Suresh; Khatri, Jaidev; Gupta, Madhu
2018-01-01
"Integrated Management of Neonatal and Childhood Illnesses" (IMNCI) needs regular supportive supervision (SS). The aim of this study was to find suitable SS model for implementing IMNCI. This was a prospective interventional study in 10 high-focus districts of Haryana. Two methods of SS were used: (a) visit to subcenters and home visits (model 1) and (b) organization of IMNCI clinics/camps at primary health center (PHC) and community health center (CHC) (model 2). Skill scores were measured at different time points. Routine IMNCI data from study block and randomly selected control block of each district were retrieved for 4 months before and after the training and supervision. Change in percentage mean skill score difference and percentage difference in median number of children were assessed in two areas. Mean skill scores increased significantly from 2.1 (pretest) to 7.0 (posttest 1). Supportive supervisory visits sustained and improved skill scores. While model 2 of SS could positively involve health system officials, model 1 was not well received. Outcome indicator in terms of number of children assessed showed a significant improvement in intervention areas. SS in IMNCI clinics/camps at PHC/CHC level and innovative skill scoring method is a promising approach.
Technical Manual for the SAM Physical Trough Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, M. J.; Gilman, P.
2011-06-01
NREL, in conjunction with Sandia National Lab and the U.S. Department of Energy, developed the System Advisor Model (SAM) analysis tool for renewable energy system performance and economic analysis. This paper documents the technical background and engineering formulation for one of the two parabolic trough system models in SAM. The Physical Trough model calculates performance relationships based on physical first principles where possible, allowing the modeler to predict electricity production for a wider range of component geometries than is possible in the Empirical Trough model. This document describes the major parabolic trough plant subsystems in detail, including the solar field, power block, thermal storage, piping, auxiliary heating, and control systems. This model makes use of both existing subsystem performance modeling approaches and new approaches developed specifically for SAM.
Steady evolution of hillslopes in layered landscapes: self-organization of a numerical hogback
NASA Astrophysics Data System (ADS)
Glade, R.; Anderson, R. S.
2017-12-01
Landscapes developed in layered sedimentary or igneous rocks are common across Earth, as well as on other planets. Features such as hogbacks, exposed dikes, escarpments and mesas exhibit resistant rock layers in tilted, vertical, or horizontal orientations adjoining more erodible rock. Hillslopes developed in the erodible rock are typically characterized by steep, linear-to-concave slopes or "ramps" mantled with material derived from the resistant layers, often in the form of large blocks. Our previous work on hogbacks has shown that feedbacks between weathering and transport of the blocks and the underlying soft rock are fundamental to their formation; our numerical model incorporating these feedbacks explains the development of commonly observed concave-up slope profiles in the absence of rilling processes. Here we employ an analytic approach to describe the steady behavior of our model, in which hillslope form and erosion rates remain constant in the reference frame of the retreating feature. We first revisit a simple geometric analysis that relates structural dip to erosion rates. We then explore the mechanisms by which our numerical model of hogback evolution self-organizes to meet these geometric expectations. Autogenic adjustment of soil depth, slope and erosion rates enables efficient transport of resistant blocks; this allows erosion of the resistant layer to keep up with the base-level fall rate, leading to steady evolution of the feature. Analytic solutions relate easily measurable field quantities such as ramp length, slope, block size and resistant-layer dip angle to local incision rate, block velocity, and block weathering rate. These equations provide a framework for exploring the evolution of layered landscapes, and pinpoint the processes for which we require a more thorough understanding to predict the evolution of such signature landscapes over time.
Evolutionary Concepts for Decentralized Air Traffic Flow Management
NASA Technical Reports Server (NTRS)
Adams, Milton; Kolitz, Stephan; Milner, Joseph; Odoni, Amedeo
1997-01-01
Alternative concepts for modifying the policies and procedures under which the air traffic flow management system operates are described, and an approach to the evaluation of those concepts is discussed. Here, air traffic flow management includes all activities related to the management of the flow of aircraft and related system resources from 'block to block.' The alternative concepts represent stages in the evolution from the current system, in which air traffic management decision making is largely centralized within the FAA, to a more decentralized approach wherein the airlines and other airspace users collaborate in air traffic management decision making with the FAA. The emphasis in the discussion is on a viable medium-term partially decentralized scenario representing a phase of this evolution that is consistent with the decision-making approaches embodied in proposed Free Flight concepts for air traffic management. System-level metrics for analyzing and evaluating the various alternatives are defined, and a simulation testbed developed to generate values for those metrics is described. The fundamental issue of modeling airline behavior in decentralized environments is also raised, and an example of such a model, which deals with the preservation of flight bank integrity in hub airports, is presented.
Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.
Calvin, Nicholas T; J McDowell, J
2015-11-01
For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account of behavior. The unified theory of reinforcement states that operant and respondent learning occur via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this study was the first replication of the neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of their inability to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluating the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
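For context, the blocking effect these networks were claimed to reproduce falls out of the classic Rescorla-Wagner model, in which a summed prediction error lets prior learning about one stimulus block learning about a newly added one; a minimal sketch:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Classic Rescorla-Wagner update: with compound stimuli, the
    prediction error is computed on the *summed* associative strength,
    which is what produces the blocking effect."""
    V = {"A": 0.0, "B": 0.0}
    for stimuli in trials:
        error = lam - sum(V[s] for s in stimuli)
        for s in stimuli:
            V[s] += alpha * error
    return V

# Phase 1: stimulus A alone is conditioned; Phase 2: A+B compound.
V = rescorla_wagner([("A",)] * 20 + [("A", "B")] * 20)
# V["A"] is near 1.0 while V["B"] stays near zero: prior learning
# about A blocks conditioning to B.
```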
Methodology for finding and evaluating safe landing sites on small bodies
NASA Astrophysics Data System (ADS)
Rodgers, Douglas J.; Ernst, Carolyn M.; Barnouin, Olivier S.; Murchie, Scott L.; Chabot, Nancy L.
2016-12-01
Here we develop and demonstrate a three-step strategy for finding a safe landing ellipse for a legged spacecraft on a small body such as an asteroid or planetary satellite. The first step, acquisition of a high-resolution terrain model of a candidate landing region, is simulated using existing statistics on block abundances measured at Phobos, Eros, and Itokawa. The synthetic terrain model is generated by randomly placing hemispheric blocks with the empirically determined size-frequency distribution. The resulting terrain is much rockier than typical lunar or martian landing sites. The second step, locating a landing ellipse with minimal hazards, is demonstrated for an assumed approach to landing that uses Autonomous Landing and Hazard Avoidance Technology. The final step, determination of the probability distribution for the orientation of the landed spacecraft, is demonstrated for cases of differing regional slope. The strategy described here is both a prototype for finding a landing site during a flight mission and a set of tools for evaluating the design of small-body landers. We show that for bodies with Eros-like block distributions, there may be >99% probability of landing stably at a low tilt without blocks impinging on spacecraft structures so as to pose a survival hazard.
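The first step can be sketched by inverse-transform sampling of a power-law (Pareto) size-frequency distribution and scattering the resulting hemispheres over the candidate region; the parameters below are invented stand-ins for the empirical Eros/Phobos/Itokawa statistics:

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_block_diameters(n, d_min, exponent):
    """Draw block diameters from a power-law size-frequency
    distribution, N(>d) ~ d**(-exponent), by inverse-transform
    sampling of the corresponding Pareto CDF."""
    u = rng.random(n)
    return d_min * u ** (-1.0 / exponent)

# Hypothetical parameters: 2000 blocks >= 0.1 m in diameter with a
# cumulative slope of 2.5, scattered over a 500 m square region.
d = sample_block_diameters(2000, d_min=0.1, exponent=2.5)
xy = rng.uniform(0, 500, size=(2000, 2))  # random hemisphere centers
```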
Approaching the design of a failsafe turbine monitor with simple microcontroller blocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zapolin, R.E.
1995-12-31
The proper approach to early instrumentation design for tasks like failsafe turbine monitoring permits meeting requirements without resorting to traditional complex special-purpose electronics. Instead, a small network of basic microcontroller building blocks can split the effort, with each block optimized for its portion of the overall system. This paper discusses approaching design by partitioning intricate system specifications so that each block can be optimized to the safety level appropriate for its portion of the overall task, while retaining the production and reliability advantages of having common simple modules. It illustrates that approach with a modular microcontroller-based speed monitor which met user needs for the latest in power plant monitoring equipment.
Wave Current Interactions and Wave-blocking Predictions Using NHWAVE Model
2013-03-01
Navier-Stokes equation. In this approach, as with previous modeling techniques, there is difficulty in simulating the free surface that inhibits accurate...hydrostatic, free-surface, rotational flows in multiple dimensions. It is useful in predicting transformations of surface waves and rapidly varied...Stelling, G., and M. Zijlema, 2003: An accurate and efficient finite-differencing algorithm for non-hydrostatic free surface flow with application to
Data Policy Construction Set - Building Blocks from Childhood Constructions
NASA Astrophysics Data System (ADS)
Fleischer, Dirk; Paul-Stueve, Thilo; Jobmann, Alexandra; Farrenkopf, Stefan
2016-04-01
A complete construction set of building blocks usually comes with instructions, and these instructions include building stages. The products of these building stages, built from very general parts, become highly specialized building parts for very unique features of the whole construction model. This sounds very much like the construction or organization of an interdisciplinary research project, institution or association, doesn't it! The creation process of an overarching data policy for a project group or institution is exactly the combination of individual interests with the common goal of a collaborative data policy, and can be compared with the building stages and instructions of a construction set of building blocks. Keeping this in mind, we created the data policy construction set of textual building blocks. This construction set is subdivided into several building stages or parts, each containing multiple building blocks as text blocks. By combining building blocks from all subdivisions, a cascading data policy document can be created: cascading from the top level, which provides the construction set, to all lower levels such as projects, themes and work packages, or universities, faculties and institutes, down to the working level of working groups. The working groups pick from the provided construction set the building blocks suitable for their working procedures, to create a very specific policy from the construction set provided by the top-level community. Nevertheless, if a working group realizes that there are missing building blocks, or worse, missing building parts, it has the chance to add the missing pieces to the construction set for direct and future use. This cascading approach enables project- or institution-wide application of the encoded rules, from the textual level down to access to data storage infrastructure. The structured approach is flexible enough to allow for the fact that interdisciplinary research projects always bring together a very diverse set of working habits, methods and requirements, all of which need to be considered in the creation of the general document on data sharing and research data management. This approach follows the recommendation of the RDA practical policy working group to implement practical policies derived from the textual level. It therefore aims to move data policy creation and implementation towards the consortium or institution formation stage, with all the benefits of an existing data policy construction set already available during proposal creation and proposal review. Picking up the metaphor of real building blocks in the context of data policies also provides the insight that existing building blocks and building parts can be reused as they are, but can also be redesigned with very little change or a full overhaul.
A Mathematical Model for Railway Control Systems
NASA Technical Reports Server (NTRS)
Hoover, D. N.
1996-01-01
We present a general method for modeling safety aspects of railway control systems. Using our modeling method, one can progressively refine an abstract railway safety model, successively adding layers of detail about how a real system actually operates, while maintaining a safety property that refines the original abstract safety property. This method supports a top-down approach to the specification of railway control systems and to the proof of a variety of safety-related properties. We demonstrate our method by proving safety of the classical block control system.
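As a toy illustration of the classical block control discipline mentioned above, the following sketch (ours, not the paper's formal refinement model) enforces the invariant that a train may enter a block only if that block is unoccupied; the class and method names and the fixed-block simplification are assumptions made for illustration.

    # Minimal sketch of classical fixed-block control (illustrative only).
    # A train may move into the next block only if that block is unoccupied.

    class BlockLine:
        def __init__(self, n_blocks):
            self.occupant = [None] * n_blocks   # occupant[i] = train id or None

        def safe(self):
            # Safety property: no two trains ever share a block.
            trains = [t for t in self.occupant if t is not None]
            return len(trains) == len(set(trains))

        def advance(self, train, src):
            dst = src + 1
            if dst >= len(self.occupant) or self.occupant[dst] is not None:
                return False                    # movement authority refused
            assert self.occupant[src] == train
            self.occupant[dst], self.occupant[src] = train, None
            return True

    line = BlockLine(5)
    line.occupant[0], line.occupant[1] = "T1", "T2"
    assert not line.advance("T1", 0)            # refused: T2 occupies block 1
    assert line.advance("T2", 1) and line.safe()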
Emerging CFD technologies and aerospace vehicle design
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.
1995-01-01
With the recent focus on the needs of design and applications-oriented CFD, research groups have begun to address the traditional bottlenecks of grid generation and surface modeling. Now, a host of emerging technologies promise to shortcut or dramatically simplify the simulation process. This paper discusses the current status of these emerging technologies. It argues that some tools are already available which can have a positive impact on portions of the design cycle. However, in most cases, these tools need to be integrated into specific engineering systems and process cycles to be used effectively. The rapidly maturing status of unstructured and Cartesian approaches for inviscid simulations suggests the possibility of highly automated Euler-boundary layer simulations with application to loads estimation and even preliminary design. Similarly, technology is available to link block-structured mesh generation algorithms with topology libraries to avoid tedious re-meshing of topologically similar configurations. Work in algorithm-based auto-blocking suggests that domain decomposition and point placement operations in multi-block mesh generation may be properly posed as problems in computational geometry, and following this approach may lead to robust algorithmic processes for automatic mesh generation.
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Agliardi, Federico; Villa, Alberto; Battista Crosta, Giovanni; Rivolta, Carlo
2015-04-01
Rockfall risk analysis requires quantifying rockfall onset susceptibility and magnitude scenarios at source areas, as well as the expected rockfall trajectories and related dynamic quantities. Analysis efforts usually focus on the rockfall runout component, whereas rock mass characterization, block size distribution quantification, and the monitoring and analysis of unstable rock volumes are usually performed using simplified approaches, due to technological and site-specific issues. Nevertheless, proper quantification of rock slope stability and rockfall magnitude scenarios is key when dealing with high rock walls, where widespread rockfall sources and high variability of release mechanisms and block volumes can result in excessive modelling uncertainties and poorly constrained mitigation measures. We explored the potential of integrating field, remote sensing, structural analysis and stability modelling techniques to improve hazard assessment at the Gallivaggio sanctuary site, a sixteenth-century heritage site located along State Road 36 in the Spluga Valley (Italian Central Alps). The site is overlooked by a subvertical cliff up to 600 m high, made of granitic orthogneiss of the Truzzo granitic complex (Tambo Nappe, upper Penninic domain). The rock mass is cut by NNW- and NW-trending slope-scale structural lineaments and by 5-6 fracture sets with variable spatial distribution, spacing and persistence, which bound blocks up to tens of cubic meters and control the 3D slope morphology. The area is characterised by widespread rock slope instability, from rockfalls to massive failures. Although a 180 m long embankment was built to protect the site from rockfalls, concerns remain about potential large unstable rock volumes and about flyrocks projected by the widely observed impact fragmentation of stiff rock blocks. Thus, the authority in charge started a series of periodic GB-InSAR monitoring surveys using LiSALabTM technology (12 surveys in 2011-2014), which outlined the occurrence of unstable spots spread over the cliff, with cm-scale cumulative displacements over the observation period. To support the interpretation and analysis of these data, we carried out multitemporal TLS surveys (5 sessions between September 2012 and October 2014) using a Riegl VZ-1000 long-range laser scanner. We performed rock mass structural analyses on dense TLS point clouds using two different approaches: 1) manual discontinuity orientation and intensity measurement from digital outcrops; 2) automatic feature extraction and intensity evaluation through the development of an original Matlab tool, suited for multi-scale applications and optimized for parallel computing. Results were validated using field discontinuity measurements and compared to evaluate the advantages and limitations of the different approaches, and allowed: 1) outlining the precise location, geometry and kinematics of unstable blocks and block clusters corresponding to radar moving spots; 2) performing stability analyses; 3) quantifying rockwall changes over the observation period. Our analysis provided a robust spatial characterization of rockfall sources, block size distribution and onset susceptibility as input for 3D runout modelling and quantitative risk analysis.
Faiz, Seyed Hamid Reza; Alebouyeh, Mahmoud Reza; Derakhshan, Pooya; Imani, Farnad; Rahimzadeh, Poupak; Ghaderi Ashtiani, Maryam
2018-01-01
Due to the importance of pain control after abdominal surgery, several methods, such as the transversus abdominis plane (TAP) block, are used to reduce post-surgical pain. TAP blocks can be performed using various ultrasound-guided approaches; two important ones are the lateral and the posterior approach. This study aimed to compare ultrasound-guided lateral and posterior TAP blocks for pain control after cesarean section. In this double-blind clinical trial, 76 patients scheduled for elective cesarean section were selected, randomly divided into two groups of 38, and underwent spinal anesthesia. For pain management after the surgery, one group underwent a lateral TAP block and the other group a posterior TAP block, using 20 cc of ropivacaine 0.2% on both sides. Pain intensity was evaluated based on the Numerical Analog Scale (NAS) at rest and when coughing at 2, 4, 6, 12, 24 and 36 hours after surgery. Pain at rest in the posterior group was lower than in the lateral group at all hours after surgery, especially at 6, 12 and 24 hours, and the difference was statistically significant (p = 0.03, p < 0.004, p = 0.001). The results of this study show that the ultrasound-guided posterior TAP block was more effective than the lateral TAP block in controlling pain after cesarean section.
Fan, Wei; Shi, Wen; Zhang, Wenting; Jia, Yinnong; Zhou, Zhengyuan; Brusnahan, Susan K; Garrison, Jered C
2016-10-01
This work continues our efforts to improve the diagnostic and radiotherapeutic effectiveness of nanomedicine platforms by developing approaches to reduce the non-target accumulation of these agents. Herein, we developed multi-block HPMA copolymers with backbones that are susceptible to cleavage by cathepsin S, a protease that is abundantly expressed in tissues of the mononuclear phagocyte system (MPS). Specifically, a bis-thiol terminated HPMA telechelic copolymer containing 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) was synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization. Three maleimide modified linkers with different sequences, including cathepsin S degradable oligopeptide, scramble oligopeptide and oligo ethylene glycol, were subsequently synthesized and used for the extension of the HPMA copolymers by thiol-maleimide click chemistry. All multi-block HPMA copolymers could be labeled by (177)Lu with high labeling efficiency and exhibited high serum stability. In vitro cleavage studies demonstrated highly selective and efficient cathepsin S mediated cleavage of the cathepsin S-susceptible multi-block HPMA copolymer. A modified multi-block HPMA copolymer series capable of Förster Resonance Energy Transfer (FRET) was utilized to investigate the rate of cleavage of the multi-block HPMA copolymers in monocyte-derived macrophages. Confocal imaging and flow cytometry studies revealed substantially higher rates of cleavage for the multi-block HPMA copolymers containing the cathepsin S-susceptible linker. The efficacy of the cathepsin S-cleavable multi-block HPMA copolymer was further examined using an in vivo model of pancreatic ductal adenocarcinoma. Based on the biodistribution and SPECT/CT studies, the copolymer extended with the cathepsin S susceptible linker exhibited significantly faster clearance and lower non-target retention without compromising tumor targeting. Overall, these results indicate that exploitation of the cathepsin S activity in MPS tissues can be utilized to substantially lower non-target accumulation, suggesting this is a promising approach for the development of diagnostic and radiotherapeutic nanomedicine platforms.
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at reducing the unevenness of the coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings is proposed; it uses a rule-based knowledge model and allows one to reduce the unevenness of the product's thickness distribution. Block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that a fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
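To make the rule-based idea concrete, here is a minimal zero-order Sugeno-style sketch of fuzzy inference; the input variable, membership functions, and rule outputs are illustrative assumptions, not the authors' rule base.

    # Minimal fuzzy rule-based prediction sketch (hypothetical rules).
    # Input: anode-cathode distance in cm; output: predicted coating-thickness
    # unevenness in %. Two rules are blended by their membership degrees.

    def tri(x, a, b, c):
        # Triangular membership function with support (a, c) and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def predict_unevenness(distance_cm):
        mu_small = tri(distance_cm, 0.0, 5.0, 10.0)
        mu_large = tri(distance_cm, 5.0, 10.0, 15.0)
        rules = [(mu_small, 25.0),   # IF distance is small THEN unevenness is high
                 (mu_large, 10.0)]   # IF distance is large THEN unevenness is low
        num = sum(mu * out for mu, out in rules)
        den = sum(mu for mu, _ in rules)
        return num / den if den else None

    print(predict_unevenness(7.5))   # both rules fire at 0.5 -> 17.5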
Loehle, C.; Van Deusen, P.; Wigley, T.B.; Mitchell, M.S.; Rutzmoser, S.H.; Aggett, J.; Beebe, J.A.; Smith, M.L.
2006-01-01
Wildlife-habitat relationship models have sometimes been linked with forest simulators to aid in evaluating outcomes of forest management alternatives. However, linking wildlife-habitat models with harvest scheduling software would provide a more direct method for assessing economic and ecological implications of alternative harvest schedules in commercial forest operations. We demonstrate an approach for frontier analyses of wildlife benefits using the Habplan harvest scheduler and spatially explicit wildlife response models in the context of operational forest planning. We used the Habplan harvest scheduler to plan commercial forest management over a 40-year horizon at a landscape scale under five scenarios: unmanaged, an unlimited block-size option both with and without riparian buffers, three cases with different block-size restrictions, and a set-asides scenario in which older stands were withheld from cutting. The potential benefit to wildlife was projected based on spatial models of bird guild richness and species probability of detection. Harvested wood volume provided a measure of scenario costs, which provides an indication of management feasibility. Of nine species and guilds, none appeared to benefit from 50 m riparian buffers, response to an unmanaged scenario was mixed and expensive, and block-size restrictions (maximum harvest unit size) provided no apparent benefit and in some cases were possibly detrimental to bird richness. A set-aside regime, however, appeared to provide significant benefits to all species and groups, probably through increased landscape heterogeneity and increased availability of older forest. Our approach shows promise for evaluating costs and benefits of forest management guidelines in commercial forest enterprises and improves upon the state of the art by utilizing an optimizing harvest scheduler as in commercial forest management, multiple measures of biodiversity (models for multiple species and guilds), and spatially explicit wildlife response models.
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition-probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration, using either hydraulic heads or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, thereby reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
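For illustration, a minimal sketch of the underlying idea: drawing a facies sequence from a Markov chain transition matrix. The four facies names follow the abstract, but the transition probabilities and the 1D column setting are assumptions (the study uses transition-probability geostatistics in 3D).

    # Minimal 1D Markov-chain facies simulation sketch (hypothetical matrix).

    import random

    FACIES = ["sparsely_fractured", "moderately_fractured",
              "highly_fractured", "fault_zone"]
    # P[i][j]: probability that facies j follows facies i going up-column.
    P = [[0.80, 0.15, 0.04, 0.01],
         [0.20, 0.60, 0.15, 0.05],
         [0.05, 0.25, 0.60, 0.10],
         [0.02, 0.08, 0.30, 0.60]]

    def simulate_column(n_cells, start=0, seed=42):
        random.seed(seed)
        state, column = start, []
        for _ in range(n_cells):
            column.append(FACIES[state])
            state = random.choices(range(len(FACIES)), weights=P[state])[0]
        return column

    print(simulate_column(10))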
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
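A minimal sketch of the robust-fitting idea behind KPBG's first stage: estimate a linear (affine) transformation from putative keypoint matches while rejecting non-conforming matches by sample consensus. This pure-NumPy version is an illustrative stand-in for the paper's M-estimator sample consensus over SIFT features; all names and thresholds are assumptions.

    # Robust affine fit to keypoint matches via a simple RANSAC loop.

    import numpy as np

    def fit_affine(src, dst):
        # Least-squares solve of dst ~ [x, y, 1] @ A; src, dst are (N, 2).
        X = np.hstack([src, np.ones((len(src), 1))])
        A, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return A                                   # (3, 2) affine matrix

    def ransac_affine(src, dst, n_iter=500, tol=3.0, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        ones = np.ones((len(src), 1))
        for _ in range(n_iter):
            idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
            A = fit_affine(src[idx], dst[idx])
            pred = np.hstack([src, ones]) @ A
            inliers = np.linalg.norm(pred - dst, axis=1) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refit on the consensus set; matches outside it are treated as outliers.
        return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers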
Shang, Zhehai; Lee, Zhongping; Dong, Qiang; Wei, Jianwei
2017-09-01
Self-shading associated with a skylight-blocked approach (SBA) system for the measurement of water-leaving radiance (Lw) and its correction [Appl. Opt. 52, 1693 (2013), doi:10.1364/AO.52.001693] is characterized by Monte Carlo simulations; this error is found to lie in a range of ∼1%-20% under most water properties and solar positions. A model for estimating this shading error is further developed, and a scheme to correct the error based on the shaded measurements is proposed and evaluated. The shade-corrected value in the visible domain is found to be within 3% of the true value, indicating that both high-precision and high-accuracy Lw can be obtained in the field with the SBA scheme.
Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K
2017-09-15
Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, with the emphasis here on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, which is a particular way to compute the dispersion coefficients from dispersivity coefficients. Although this parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time, or memory, of the system; changes in time with periods larger than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires the solution of a steady-state advection-dispersion equation, and thus is computationally efficient and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparison with other approaches, such as moment analysis and the first-order perturbation method. We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient, and then discuss the role of local dispersion and mass transfer limitations, i.e., the exchange of mass between the permeable matrix and the low-permeability inclusions. We illustrate the physical meaning of the method and show how the block longitudinal dispersivity approaches, under certain conditions, the Scheidegger limit at large Péclet numbers. Lastly, we discuss the potential and limitations of the method to accurately describe dispersion in solute transport applications in heterogeneous aquifers.
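For reference, the Scheidegger-type parameterization discussed above can be written as D_L = D_m/tau + alpha_L*|v|. The sketch below implements this standard form with illustrative parameter values, which are not taken from the paper.

    # Scheidegger-type longitudinal dispersion coefficient (standard form):
    #   D_L = D_m / tau + alpha_L * |v|
    # D_m: molecular diffusion coefficient, tau: tortuosity,
    # alpha_L: longitudinal dispersivity, v: seepage velocity.

    def longitudinal_dispersion(v, alpha_L, D_m=1e-9, tau=1.4):
        """D_L in m^2/s for v in m/s and alpha_L in m (illustrative defaults)."""
        return D_m / tau + alpha_L * abs(v)

    # At large Peclet numbers the velocity term dominates: D_L ~ alpha_L * |v|.
    print(longitudinal_dispersion(v=1e-5, alpha_L=0.1))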
Arafat, Basel; Wojsz, Magdalena; Isreb, Abdullah; Forbes, Robert T; Isreb, Mohammad; Ahmed, Waqar; Arafat, Tawfiq; Alhnan, Mohamed A
2018-06-15
Fused deposition modelling (FDM) 3D printing has shown the most immediate potential for on-demand dose personalisation to suit a particular patient's needs. However, FDM 3D printing often involves employing a relatively large molecular weight thermoplastic polymer and results in an extended-release pattern. It is therefore essential to fast-track drug release from 3D printed objects. This work employed an innovative design approach for tablets with unique built-in gaps (Gaplets), with the aim of accelerating drug release. The novel tablet design is composed of 9 repeating units (blocks) connected with 3 bridges to allow the generation of 8 gaps. The impact of block size, the number of bridges, and the spacing between the blocks was investigated. Increasing the inter-block space reduced the mechanical resistance of the unit; however, the tablets continued to meet pharmacopeial standards for friability. Upon introduction into gastric medium, the gaplet with 1 mm spaces broke into mini-structures within 4 min and met the USP criteria for immediate-release products (86.7% drug release at 30 min). Real-time ultraviolet (UV) imaging indicated that the cellulosic matrix expanded due to swelling of hydroxypropyl cellulose (HPC) upon introduction to the dissolution medium. This was followed by a steady erosion of the polymeric matrix at a rate of 8 μm/min. The design approach was more efficient than the conventional formulation approach of adding disintegrants, used here as a comparison, at accelerating tablet disintegration and drug release. This work provides a novel example where computer-aided design was instrumental in modifying the performance of solid dosage forms. Such an example may serve as the foundation for a new generation of dosage forms with complicated geometric structures achieving functionality that is usually achieved by a sophisticated formulation approach.
NASA Astrophysics Data System (ADS)
Agliardi, Federico; Galletti, Laura; Riva, Federico; Zanchi, Andrea; Crosta, Giovanni B.
2017-04-01
An accurate characterization of the geometry and intensity of discontinuities in a rock mass is key to assessing block size distribution and kinematic degrees of freedom. These are the main controls on the magnitude and mechanisms of rock slope instabilities (structurally-controlled, step-path or mass failures) and on rock mass strength and deformability. Nevertheless, the use of over-simplified discontinuity characterization approaches, unable to capture the stochastic nature of discontinuity features, often hampers a correct identification of the dominant rock mass behaviour. Discrete Fracture Network (DFN) modelling tools have provided new opportunities to overcome these caveats. Nevertheless, their ability to provide a representative picture of reality strongly depends on the quality and scale of field data collection. Here we used DFN modelling with FracmanTM to investigate the influence of fracture intensity, characterized on different scales and with different techniques, on the geometry and size distribution of generated blocks, from a rock slope stability perspective. We focused on a test site near Lecco (Southern Alps, Italy), where 600 m high cliffs in thickly-bedded limestones, folded at the slope scale, overlook Lake Como. We characterized the 3D slope geometry by Structure-from-Motion photogrammetry (range: 150-1500 m; point cloud density > 50 pts/m²). Since the nature and attributes of discontinuities are controlled by brittle failure processes associated with large-scale folding, we performed a field characterization of meso-structural features (faults and related kinematics, vein and joint associations) in different fold domains. We characterized the discontinuity populations identified by structural geology on different spatial scales, ranging from outcrops (field surveys and photo-mapping) to large slope sectors (point cloud and photo-mapping). For each sampling domain, we characterized discontinuity orientation statistics and performed fracture mapping and circular window analyses in order to measure fracture intensity (P21) and persistence (trace length distributions). Then, we calibrated DFN models for different combinations of P21/P32 and trace length distributions, characteristic of data collected on different scales. Comparing fracture patterns and block size distributions obtained from the different models, we outline the strong influence of field data quality and scale on the rock mass behaviours predicted by DFN. We show that accounting for small-scale features (closely spaced but short fractures) results in smaller but more interconnected blocks, eventually characterized by low removability and partly supported by intact rock strength. On the other hand, DFN models based on data surveyed at the slope scale enhance the structural control of persistent fractures on the kinematic degrees of freedom of medium-sized blocks, with significant impacts on the selection and parametrization of rock slope stability modelling approaches.
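As an illustration of the window sampling step, the sketch below estimates areal fracture intensity P21 (total trace length per unit window area) for mapped traces in a circular window. The trace coordinates and the point-sampling approximation of in-window length are assumptions for illustration, not the authors' implementation.

    # Estimate P21 in a circular sampling window by dense point sampling.

    import numpy as np

    def p21_circular(traces, center, radius, n_samples=1000):
        cx, cy = center
        total = 0.0
        for (x1, y1), (x2, y2) in traces:
            t = np.linspace(0.0, 1.0, n_samples)
            xs, ys = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
            seg_len = np.hypot(x2 - x1, y2 - y1)
            total += seg_len * inside.mean()    # fraction of trace inside window
        return total / (np.pi * radius ** 2)    # P21 in 1/m if coords are in m

    traces = [((0, -3), (0, 3)), ((-2, 1), (4, 1))]   # hypothetical mapped traces
    print(p21_circular(traces, center=(0, 0), radius=2.0))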
Bergmann, Lars; Martini, Stefan; Kesselmeier, Miriam; Armbruster, Wolf; Notheisen, Thomas; Adamzik, Michael; Eichholz, Rüdiger
2016-07-29
Interscalene brachial plexus (ISB) block is often associated with phrenic nerve block and diaphragmatic paresis. The goal of our study was to test whether the anterior or the posterior ultrasound-guided approach to the ISB is associated with a lower incidence of phrenic nerve block and impaired lung function. This was a prospective, randomized and single-blinded study of 84 patients scheduled for elective shoulder surgery who fulfilled the inclusion and exclusion criteria. Patients were randomized into two groups to receive either the anterior (n = 42) or the posterior (n = 42) approach for ISB. Clinical data were recorded. In both groups patients received the ISB with a total injection volume of 15 ml of ropivacaine 1%. Spirometry was conducted at baseline (T0) and 30 min (T30) after accomplishing the block. Changes in spirometric variables between T0 and T30 were investigated by the Wilcoxon signed-rank test for each puncture approach. The differences between the posterior and the anterior puncture approach groups were analyzed by the Wilcoxon-Mann-Whitney test. The spirometric results showed a significant decrease in vital capacity, forced expiratory volume per second, and maximum nasal inspiratory breathing after the interscalene brachial plexus block, indicating a phrenic nerve block (p < 0.001, Wilcoxon signed-rank). A significant difference in the development of the spirometric parameters between the anterior and the posterior group could not be identified (Wilcoxon-Mann-Whitney test). Despite the changes in spirometry, no cases of dyspnea were reported. A different site of injection (anterior or posterior) did not show an effect in reducing the cervical spread of the local anesthetic or the incidence of phrenic nerve block during ultrasound-guided interscalene brachial plexus block. Clinical breathing effects of phrenic nerve block are, however, usually well compensated, and subjective dyspnea did not occur in our patients. German Clinical Trials Register (DRKS number 00009908, registered 26 January 2016).
Yin, Qinqin; Li, Jun; Zheng, Qingshan; Yang, Xiaolin; Lv, Rong; Ma, Longxiang; Liu, Jin; Zhu, Tao; Zhang, Wensheng
2017-01-01
The quaternary lidocaine derivative (QX-314) in combination with bupivacaine can produce long-lasting nerve blocks in vivo, indicating potential clinical application. The aim of the study was to investigate the efficacy, safety, and the optimal formulation of this combination. QX-314 and bupivacaine at different concentration ratios were injected in the vicinity of the sciatic nerve in rats; bupivacaine and saline served as controls (n = 6~10). Rats were inspected for durations of effective sensory and motor nerve blocks, systemic adverse effects, and histological changes of local tissues. Mathematical models were established to reveal drug-interaction, concentration-effect relationships, and the optimal ratio of QX-314 to bupivacaine. 0.2~1.5% QX-314 with 0.03~0.5% bupivacaine produced 5.8~23.8 h of effective nerve block; while 0.5% bupivacaine alone was effective for 4 h. No systemic side effects were observed; local tissue reactions were similar to those caused by 0.5% bupivacaine if QX-314 were used < 1.2%. The weighted modification model was successfully established, which revealed that QX-314 was the main active ingredient while bupivacaine was the synergist. The formulation, 0.9% QX-314 plus 0.5% bupivacaine, resulted in 10.1 ± 0.8 h of effective sensory and motor nerve blocks. The combination of QX-314 and bupivacaine facilitated prolonged sciatic nerve block in rats with a satisfactory safety profile, maximizing the duration of nerve block without clinically important systemic and local tissue toxicity. It may emerge as an alternative approach to post-operative pain treatment.
An integer programming approach to a real-world recyclable waste collection problem in Argentina.
Braier, Gustavo; Durán, Guillermo; Marenco, Javier; Wesner, Francisco
2017-05-01
This article reports on the use of mathematical programming techniques to optimise the routes of a recyclable waste collection system servicing Morón, a large municipality outside Buenos Aires, Argentina. The truck routing problem posed by the system is a particular case of the generalised directed open rural postman problem. An integer programming model is developed with a solving procedure built around a subtour-merging algorithm and the addition of subtour elimination constraints. The route solutions generated by the proposed methodology perform significantly better than the previously used, manually designed routes, the main improvement being that coverage of blocks within the municipality with the model solutions is 100% by construction, whereas with the manual routes as much as 16% of the blocks went unserviced. The model-generated routes were adopted by the municipality in 2014 and the national government is planning to introduce the methodology elsewhere in the country.
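To illustrate the subtour machinery mentioned above, the sketch below detects the disjoint subtours implied by a set of selected arcs; each detected subtour would then trigger a subtour-elimination constraint in the integer program. The arc data and the one-out-arc-per-node representation are illustrative assumptions, not the paper's formulation.

    # Find disjoint subtours in a selected-arc solution. Each subtour S found
    # yields an elimination constraint: sum of arcs leaving S must be >= 1.

    def find_subtours(arcs):
        succ = dict(arcs)            # arcs = [(i, j), ...], one out-arc per node
        tours, seen = [], set()
        for start in succ:
            if start in seen:
                continue
            tour, node = [], start
            while node not in seen:  # follow successors until the cycle closes
                seen.add(node)
                tour.append(node)
                node = succ[node]
            tours.append(tour)
        return tours

    # Two disjoint subtours -> two violated subtour-elimination constraints.
    print(find_subtours([(0, 1), (1, 0), (2, 3), (3, 4), (4, 2)]))
    # [[0, 1], [2, 3, 4]]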
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
Three-Dimensional Cellular Structures Enhanced By Shape Memory Alloys
NASA Technical Reports Server (NTRS)
Nathal, Michael V.; Krause, David L.; Wilmoth, Nathan G.; Bednarcyk, Brett A.; Baker, Eric H.
2014-01-01
This research effort explored lightweight structural concepts married with advanced smart materials to achieve a wide variety of benefits in airframe and engine components. Lattice block structures were cast from an aerospace structural titanium alloy Ti-6Al-4V and a NiTi shape memory alloy (SMA), and preliminary properties have been measured. A finite element-based modeling approach that can rapidly and accurately capture the deformation response of lattice architectures was developed. The Ti-6-4 and SMA material behavior was calibrated via experimental tests of ligaments machined from the lattice. Benchmark testing of complete lattice structures verified the main aspects of the model as well as demonstrated the advantages of the lattice structure. Shape memory behavior of a sample machined from a lattice block was also demonstrated.
Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment
NASA Astrophysics Data System (ADS)
Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen
2018-01-01
A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of the rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and Viking missions. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14° ± 0.03°, together with a control point network of 685 points.
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
Topology Optimization of Lightweight Lattice Structural Composites Inspired by Cuttlefish Bone
NASA Astrophysics Data System (ADS)
Hu, Zhong; Gadipudi, Varun Kumar; Salem, David R.
2018-03-01
Lattice structural composites are of great interest to various industries where lightweight multifunctionality is important, especially aerospace. However, strong coupling among the composition, microstructure, porous topology, and fabrication of such materials impedes conventional trial-and-error experimental development. In this work, a discontinuous carbon fiber reinforced polymer matrix composite was adopted for structural design. A reliable and robust design approach for developing lightweight multifunctional lattice structural composites was proposed, inspired by biomimetics and based on topology optimization. Three-dimensional periodic lattice blocks were initially designed, inspired by the cuttlefish bone microstructure. The topologies of the three-dimensional periodic blocks were further optimized by computer modeling, and the mechanical properties of the topology optimized lightweight lattice structures were characterized by computer modeling. The lattice structures with optimal performance were identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, S.I.; Bharati, S.; Glass, J.
1981-04-01
A 20-year-old man contracted Hodgkin's disease and was treated with mantle radiotherapy. Heart block developed 11 years later. Electrocardiograms revealed predominant atrioventricular (AV) block and occasional AV conduction. Intracardiac electrograms demonstrated that the site of AV block was above the level of the His bundle. A permanent transvenous pacemaker was implanted. Seven months later the patient died of complications from cryptococcal meningitis. Pathological study of the heart revealed marked arteriosclerosis with fibrosis of the epicardium, myocardium, and endocardium. Examination of the conduction system revealed extensive arteriolosclerosis of the sinoatrial node and its approaches. In addition, there was marked fibrosis of the approaches to the AV node, the AV bundle, and both bundle branches. There was no evidence of Hodgkin's disease. This case documents the rare occurrence of AV block due to tissue destruction by radiotherapy. There was a good correlation between block proximal to the His bundle recording site and fibrosis of the approaches to the AV node.
A 2d Block Model For Landslide Simulation: An Application To The 1963 Vajont Case
NASA Astrophysics Data System (ADS)
Tinti, S.; Zaniboni, F.; Manucci, A.; Bortolucci, E.
A 2D block model to study the motion of a sliding mass is presented. The slide is partitioned into a matrix of blocks whose bases are quadrilaterals. The blocks move on a specified sliding surface and follow a trajectory that is computed by the model. The forces acting on the blocks are gravity, basal friction, buoyancy in case of underwater motion, and the interaction with neighbouring blocks. At any time step, the position of the blocks on the sliding surface is determined in curvilinear (local) co-ordinates by computing the positions of the vertices of the quadrilaterals and the position of each block's centre of mass. Mathematically, the topology of the system is invariant during the motion, which means that the number of blocks is constant and that each block always has the same neighbours. Physically, this means that blocks are allowed to change form, but not to penetrate each other, coalesce, or split. The change of form is compensated by a change of height, under the computational assumption that the block volume is constant during motion: consequently, lateral expansion or contraction yields, respectively, a reduction or an increase in block height. This model is superior to the analogous 1D model, in which the mass is partitioned into a chain of interacting blocks: 1D models require the a priori specification of the sliding path, that is, of the trajectory of the blocks, which the 2D block model supplies as one of its outputs. In continuation of previous studies on the catastrophic slide of Vajont, which occurred in 1963 in northern Italy and caused more than 2000 victims, the 2D block model has been applied to the Vajont case. The results are compared to the outcome of the 1D model and, more importantly, to the observational data concerning the deposit position and morphology. The agreement between simulation and data is found to be quite good.
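As a minimal illustration of the per-block dynamics, the following sketch integrates a single rigid block on a planar incline under gravity and Coulomb basal friction. The full 2D model adds buoyancy, inter-block interaction and a curved sliding surface; the slope angle and friction coefficient here are illustrative assumptions.

    # Single sliding block on a planar incline with Coulomb friction,
    # integrated with explicit forward Euler steps.

    import math

    def slide(theta_deg=30.0, mu=0.4, dt=0.01, t_end=5.0, g=9.81):
        theta = math.radians(theta_deg)
        a_drive = g * math.sin(theta)        # downslope gravity component
        a_fric = mu * g * math.cos(theta)    # Coulomb friction deceleration
        v = s = t = 0.0
        while t < t_end:
            # Friction acts only if the block moves or gravity exceeds static hold.
            a = a_drive - a_fric if (v > 0.0 or a_drive > a_fric) else 0.0
            v = max(0.0, v + a * dt)         # friction cannot reverse the motion
            s += v * dt
            t += dt
        return s, v

    print(slide())   # travelled distance (m) and speed (m/s) after 5 s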
Inferior vena cava segmentation with parameter propagation and graph cut.
Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing
2017-09-01
The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared it to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm in our experiments. The proposed approach achieves sound performance with a relatively low computational cost and minimal user interaction. The proposed algorithm has high potential to be applied in clinical applications in the future.
Finely Resolved On-Road PM2.5 and Estimated Premature Mortality in Central North Carolina.
Chang, Shih Ying; Vizuete, William; Serre, Marc; Vennam, Lakshmi Pradeepa; Omary, Mohammad; Isakov, Vlad; Breen, Michael; Arunachalam, Saravanan
2017-12-01
To quantify on-road PM2.5-related premature mortality at a national scale, previous approaches that estimate concentrations at a 12-km × 12-km or larger grid cell resolution may not fully characterize the concentration hotspots that occur near roadways, and thus the areas of highest risk. Spatially resolved concentration estimates from on-road emissions that capture these hotspots may improve characterization of the associated risk, but are rarely used for estimating premature mortality. In this study, we compared on-road PM2.5-related premature mortality in central North Carolina using two different concentration estimation approaches: (i) using the Community Multiscale Air Quality (CMAQ) model at a coarser 36-km × 36-km grid resolution, and (ii) using a hybrid of a Gaussian dispersion model, CMAQ, and a space-time interpolation technique to provide annual average PM2.5 concentrations at the Census-block level (∼105,000 Census blocks). The hybrid modeling approach estimated 24% more on-road PM2.5-related premature mortality than CMAQ. The major difference is from primary on-road PM2.5, for which the hybrid approach estimated 2.5 times more premature mortality than CMAQ, due to predicted exposure hotspots near roadways that coincide with high-population areas. The results show that 72% of primary on-road PM2.5 premature mortality occurs within 1,000 m of roadways, where 50% of the total population resides, highlighting the importance of characterizing near-road primary PM2.5 and suggesting that previous studies may have underestimated premature mortality due to PM2.5 from traffic-related emissions.
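The mortality attribution step in such studies typically uses a log-linear concentration-response function of the form delta_M = y0 * (1 - exp(-beta * delta_C)) * Pop. The sketch below implements this standard form with a placeholder baseline mortality rate and beta; the study's actual epidemiological inputs are not given here.

    # Log-linear PM2.5 health impact function (illustrative parameter values).

    import math

    def pm25_mortality(delta_c_ugm3, population, y0=0.008, beta=0.005827):
        """Attributable deaths/yr. y0: baseline all-cause mortality rate;
        beta ~ ln(1.06)/10, i.e. a 6% risk increase per 10 ug/m3 PM2.5."""
        return y0 * (1.0 - math.exp(-beta * delta_c_ugm3)) * population

    # Census-block-level estimates can simply be summed over blocks:
    blocks = [(2.5, 1200), (0.8, 900), (6.1, 450)]   # (delta PM2.5, population)
    print(sum(pm25_mortality(dc, pop) for dc, pop in blocks))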
Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS
NASA Astrophysics Data System (ADS)
Zhao, Banghua
Motivated by a recently discovered concept, the Structure Genome (SG), defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp™, which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp™, a simple graphical user interface (GUI) based on the ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI, was developed. It provides a convenient way to create common or arbitrary customized SG models in ANSYS and invoke SwiftComp™ to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, predicting the global behavior and providing the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models, and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback on the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
Modeling Framework for Fracture in Multiscale Cement-Based Material Structures
Qian, Zhiwei; Schlangen, Erik; Ye, Guang; van Breugel, Klaas
2017-01-01
Multiscale modeling of cement-based materials, such as concrete, is a relatively young subject, but there are already a number of different approaches to study different aspects of these classical materials. In this paper, a parameter-passing multiscale modeling scheme is established and applied to the multiscale modeling problem for the integrated system of cement paste, mortar, and concrete. The block-by-block technique is employed to overcome the length-scale overlap between the mortar level (0.1–10 mm) and the concrete level (1–40 mm). The microstructures of cement paste are simulated by the HYMOSTRUC3D model, and the material structures of mortar and concrete are simulated by the Anm material model. Afterwards, the 3D lattice fracture model is used to evaluate their mechanical performance by simulating a uniaxial tensile test. The simulated output properties at a lower scale are passed to the next higher scale to serve as input local properties. A three-level multiscale lattice fracture analysis is demonstrated, including cement paste at the micrometer scale, mortar at the millimeter scale, and concrete at the centimeter scale. PMID:28772948
Synchronized Trajectories in a Climate "Supermodel"
NASA Astrophysics Data System (ADS)
Duane, Gregory; Schevenhoven, Francine; Selten, Frank
2017-04-01
Differences in climate projections among state-of-the-art models can be resolved by connecting the models at run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like the probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
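A minimal sketch of the run-time connection idea on a toy system: two imperfect Lorenz-63 "models" with perturbed parameters, coupled by linear nudging of corresponding variables. With sufficient coupling strength the trajectories synchronize, so the pair behaves like a single model rather than an averaged ensemble. All equations, parameter perturbations, and constants here are illustrative assumptions, not the SPEEDO configuration.

    # Two imperfect Lorenz-63 copies connected by mutual nudging.

    import numpy as np

    def lorenz(state, sigma, rho, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def run(k_nudge=20.0, dt=0.005, n_steps=20000):
        a = np.array([1.0, 1.0, 1.0])     # state of model A (sigma=9,  rho=28)
        b = np.array([1.5, -1.0, 2.0])    # state of model B (sigma=11, rho=29)
        for _ in range(n_steps):
            da = lorenz(a, 9.0, 28.0) + k_nudge * (b - a)   # nudge A toward B
            db = lorenz(b, 11.0, 29.0) + k_nudge * (a - b)  # nudge B toward A
            a, b = a + dt * da, b + dt * db                 # forward Euler step
        return np.linalg.norm(a - b)

    print(run())            # small residual: the two models track each other
    print(run(k_nudge=0.0)) # uncoupled copies wander onto distinct trajectories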
Gwadz, Robert W.; Carter, Richard; Green, Ira
1979-01-01
We have recently proposed an approach to malaria control based on immunization of the host against extracellular malarial gametes, the stage found in the mosquito gut, in order to block transmission by the mosquito vector. Our studies with avian and primate models have demonstrated that immunization of the host with extracellular gametes totally suppresses the infectivity of a subsequent blood meal to the mosquito. Gametocytes within the erythrocytes are unaffected by the immunity, since resuspending the gametocytes in serum from normal nonimmune animals restores their infectivity to mosquitoes. Immunity is mediated by antibodies that are ingested with the blood meal. These antibodies interact with extracellular gametes and prevent fertilization (the fusion of male and female gametes). Thus the infection in the mosquito is blocked, and in this way transmission is interrupted. PMID:317439
Nanoporous polymeric nanofibers based on selectively etched PS-b-PDMS block copolymers.
Demirel, Gokcen B; Buyukserin, Fatih; Morris, Michael A; Demirel, Gokhan
2012-01-01
One-dimensional nanoporous polymeric nanofibers have been fabricated within an anodic aluminum oxide (AAO) membrane by a facile approach based on selective etching of poly(dimethylsiloxane) (PDMS) domains in polystyrene-block-poly(dimethylsiloxane) (PS-b-PDMS) block copolymers that had been formed within the AAO template. It was observed that prior to etching, the well-ordered PS-b-PDMS nanofibers are solid and do not have any porosity. The postetched PS nanofibers, on the other hand, had a highly porous structure with pore sizes of about 20-50 nm. The nanoporous polymeric fibers were also employed as a drug carrier for native, continuous, and pulsatile drug release using Rhodamine B (RB) as a model drug. These studies showed that enhanced drug release and tunable drug dosage can be achieved by using ultrasound irradiation.
Oberbichler, S; Hackl, W O; Hörbst, A
2017-10-18
Long-term data collection is a challenging task in the domain of medical research. Many effects in medicine require long periods of time to become traceable, e.g. the development of secondary malignancies following radiotherapeutic treatment of a primary disease. Nevertheless, long-term studies often suffer from an initial lack of available information, which precludes a standardized approach to their approval by the ethics committee. This is due to several factors, such as the lack of existing case report forms, or an explorative research approach in which data elements may change over time. In connection with current medical research and the ongoing digitalization of medicine, Long-Term Medical Data Registries (MDR-LT) have become an important means of collecting and analyzing study data. As with any clinical study, ethical aspects must be taken into account when setting up such registries. This work addresses the problem of creating a valid, high-quality ethics committee proposal for medical registries by suggesting groups of tasks (building blocks), information sources and appropriate methods for collecting and analyzing the information, as well as a process model for compiling an ethics committee proposal (EsPRit). To derive the building blocks and associated methods, software and requirements engineering approaches were utilized. Furthermore, a process-oriented approach was chosen, as the information required in the creation process of an ethics committee proposal remains unknown at the beginning of planning an MDR-LT. Here, we derived the necessary steps from medical product certification, which itself communicates a process-oriented approach rather than merely focusing on content. A proposal was created to validate the building blocks and inspect their applicability. The proposed best practice was tested and refined within SEMPER (Secondary Malignoma - Prospective Evaluation of the Radiotherapeutic dose distribution as the cause for induction), serving as a case study. The proposed building blocks cover the topics of "Context Analysis", "Requirements Analysis", "Requirements Validation", "Electronic Case Report Form (eCRF) Design" and "Overall Concept Creation". Appropriate methods are attached to each topic, and the goals of each block can be met by applying those methods. The proposed methods are proven ones, applied in existing Medical Data Registry projects as well as in software and requirements engineering. Several building blocks and attached methods could be identified for the creation of a generic ethics committee proposal. Hence, an ethics committee can make informed decisions on the proposed study via these blocks, using the suggested methods, such as "Defining Clinical Questions" within the Context Analysis. Study creators have to confirm that they adhere to the proposed procedure within the ethics proposal statement. In addition, existing Medical Data Registry projects can be compared against EsPRit for conformity with the proposed procedure. This allows for the identification of gaps, which can lead to amendments requested by the ethics committee.
NASA Astrophysics Data System (ADS)
Sadovskii, Vladimir; Sadovskaya, Oxana
2017-04-01
A thermodynamically consistent approach to the description of linear and nonlinear wave processes in a blocky medium, which consists of a large number of elastic blocks interacting with each other via pliant interlayers, is proposed. The mechanical properties of the interlayers are defined by means of rheological schemes of different levels of complexity. Elastic interaction between the blocks is considered in the framework of linear elasticity theory [1]. The effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. A model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. On the basis of the Biot equations for a fluid-saturated porous medium, a new mathematical model of a blocky medium is worked out, in which the interlayers permit convective fluid motion under external perturbations. The collapse of pores is modeled within the generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth is the so-called rigid contact [2], which is used to describe the behavior of materials with different resistance to tension and compression. Thermodynamic consistency of the equations in the interlayers with the equations in the blocks guarantees fulfillment of the energy conservation law for the blocky medium as a whole, i.e. the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. As a result of discretization of the equations of the model, a robust computational algorithm is constructed, which is stable because of the thermodynamic consistency of the finite difference equations at the discrete level. The splitting method with respect to the spatial variables and Godunov's discontinuity-decay scheme are used in the blocks; the dissipationless Ivanov finite difference scheme is applied in the interlayers. A parallel program was designed using MPI. By means of this software, nonlinear wave processes are analyzed for an initial rotation of the central block in a rock mass as well as for a concentrated couple-stress load applied at the boundary of a rock mass. Results of computations on multiprocessor computer systems demonstrate the strong anisotropy of a blocky medium. This work was supported by the Complex Fundamental Research Program no. II.2P "Integration and Development" of the Siberian Branch of the Russian Academy of Sciences. References 1. Sadovskii V.M., Sadovskaya O.V. Modeling of Elastic Waves in a Blocky Medium Based on Equations of the Cosserat Continuum // Wave Motion. 2015. V. 52. P. 138-150. 2. Sadovskaya O., Sadovskii V. Mathematical Modeling in Mechanics of Granular Materials. Ser.: Advanced Structured Materials, V. 21. Heidelberg - New York - Dordrecht - London, Springer, 2012. 390 p.
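For readers unfamiliar with the scheme named above, a minimal sketch of the Poynting-Thomson (standard linear solid) constitutive relation, written here with generic moduli $E_1$, $E_2$ and viscosity $\eta$ rather than the authors' notation: a spring $E_1$ in parallel with a Maxwell branch (spring $E_2$ in series with a dashpot $\eta$) obeys

$$\sigma + \frac{\eta}{E_2}\,\dot{\sigma} = E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon},$$

which recovers a purely elastic interlayer as $\eta \to 0$ and a relaxing viscoelastic shear response otherwise.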
A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoemmen, Mark
2010-11-01
Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
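As an illustration of the idea (not of the Trilinos implementation itself), a minimal single-reduction-level TSQR in NumPy: each row block is factored independently, and only the small R factors are combined, which is what cuts both messages and CPU-memory traffic:

```python
import numpy as np

def tsqr(A, block_rows):
    """One reduction level of Tall Skinny QR (TSQR), a sketch.

    Each row block is factored independently; only the small n-by-n
    R factors are stacked and factored again, so the tall matrix is
    read once and little data moves between the local factorizations.
    """
    m, n = A.shape
    blocks = [A[i:i + block_rows] for i in range(0, m, block_rows)]
    qs, rs = zip(*(np.linalg.qr(b) for b in blocks))   # independent local QRs
    Q2, R = np.linalg.qr(np.vstack(rs))                # combine the R factors
    # Fold the second-level Q back into each block's Q.
    Q = np.vstack([q @ Q2[i * n:(i + 1) * n] for i, q in enumerate(qs)])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 8))                    # tall and skinny
Q, R = tsqr(A, 2500)
assert np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(8))
```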
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venghaus, Florian; Eisfeld, Wolfgang, E-mail: wolfgang.eisfeld@uni-bielefeld.de
2016-03-21
Robust diabatization techniques are key for the development of high-dimensional coupled potential energy surfaces (PESs) to be used in multi-state quantum dynamics simulations. In the present study we demonstrate that, besides the actual diabatization technique, common problems with the underlying electronic structure calculations can be the reason why a diabatization fails. After giving a short review of the theoretical background of diabatization, we propose a method based on block-diagonalization to analyse the electronic structure data. This analysis tool can be used in three different ways: First, it makes it possible to detect issues with the ab initio reference data and is used to optimize the setup of the electronic structure calculations. Second, the data from the block-diagonalization are utilized for the development of optimal parametrized diabatic model matrices by identifying the most significant couplings. Third, the block-diagonalization data are used to fit the parameters of the diabatic model, which yields an optimal initial guess for the non-linear fitting required by standard or more advanced energy-based diabatization methods. The new approach is demonstrated by the diabatization of 9 electronic states of the propargyl radical, yielding fully coupled full-dimensional (12D) PESs in closed form.
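A generic sketch of the block-diagonalization idea (illustrative only; the propargyl application above works from ab initio data rather than a model matrix, and the function and variable names here are ours): the adiabatic eigenvectors with the largest weight in a chosen reference block are rotated, via Loewdin orthonormalization of their overlap with that block, into diabatic states whose Hamiltonian reproduces the selected adiabatic energies while exposing smooth couplings:

```python
import numpy as np

def block_diabatize(H, ref):
    """Block-diagonalization sketch: rotate selected adiabatic states into
    diabatic states that maximally resemble the reference basis block.

    H   : Hermitian Hamiltonian matrix in an orthonormal basis.
    ref : list of basis indices spanning the diabatic reference block.
    """
    evals, vecs = np.linalg.eigh(H)
    # Select the eigenvectors with the largest weight inside the block.
    weight = (vecs[ref, :] ** 2).sum(axis=0)
    sel = np.sort(np.argsort(weight)[-len(ref):])
    P = vecs[np.ix_(ref, sel)]                 # overlaps <ref basis | eigvec>
    u, s, vt = np.linalg.svd(P)                # Loewdin: T = P (P^T P)^(-1/2)
    T = u @ vt
    return T @ np.diag(evals[sel]) @ T.T       # diabatic block with couplings
```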
Harmony of spinning conformal blocks
NASA Astrophysics Data System (ADS)
Schomerus, Volker; Sobko, Evgeny; Isachenkov, Mikhail
2017-03-01
Conformal blocks for correlation functions of tensor operators play an increasingly important role for the conformal bootstrap programme. We develop a universal approach to such spinning blocks through the harmonic analysis of certain bundles over a coset of the conformal group. The resulting Casimir equations are given by a matrix version of the Calogero-Sutherland Hamiltonian that describes the scattering of interacting spinning particles in a 1-dimensional external potential. The approach is illustrated in several examples including fermionic seed blocks in 3D CFT where they take a very simple form.
NASA Astrophysics Data System (ADS)
Panagiotopoulou, Antigoni; Bratsolis, Emmanuel; Charou, Eleni; Perantonis, Stavros
2017-10-01
The detailed three-dimensional modeling of buildings utilizing elevation data, such as those provided by light detection and ranging (LiDAR) airborne scanners, is increasingly demanded today. There are certain application requirements and available datasets to which any research effort has to be adapted. Our dataset includes aerial orthophotos with a spatial resolution of 20 cm and a digital surface model generated from LiDAR with a spatial resolution of 1 m and an elevation resolution of 20 cm, from an area of Athens, Greece. The aerial images are fused with LiDAR, and we classify these data with a multilayer feedforward neural network for building block extraction. The innovation of our approach lies in the preprocessing step, in which the original LiDAR data are super-resolution (SR) reconstructed by means of a stochastic regularized technique before their fusion with the aerial images takes place. The Lorentzian estimator combined with bilateral total variation regularization performs the SR reconstruction. We evaluate the performance of our approach against that of fusing unprocessed LiDAR data with aerial images. We present the classified images and the statistical measures: confusion matrix, kappa coefficient, and overall accuracy. The results demonstrate that our approach outperforms the fusion of unprocessed LiDAR data with aerial images.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level parallelization is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the range of accessible simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
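The mid-point insertion strategy is simple to state; a toy sketch (our own naming, with a hypothetical target rate) of how a temperature set could be refined wherever the measured exchange rate falls below a threshold:

```python
def refine_temperatures(temps, swap_rates, target=0.2):
    """Mid-point insertion sketch for Exchange (parallel tempering) MC.

    temps      : sorted temperature set, length n.
    swap_rates : measured exchange acceptance for each adjacent pair (n-1).
    Inserts the mid-point of every gap whose exchange rate falls below
    `target`, mimicking the bottleneck-removal strategy described above.
    """
    new = list(temps)
    for t_lo, t_hi, r in zip(temps[:-1], temps[1:], swap_rates):
        if r < target:
            new.append(0.5 * (t_lo + t_hi))
    return sorted(new)

print(refine_temperatures([1.0, 1.5, 2.0, 3.0], [0.4, 0.15, 0.05]))
# -> [1.0, 1.5, 1.75, 2.0, 2.5, 3.0]
```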
Lin, Shangchao; Ryu, Seunghwa; Tokareva, Olena; Gronau, Greta; Jacobsen, Matthew M; Huang, Wenwen; Rizzo, Daniel J; Li, David; Staii, Cristian; Pugno, Nicola M; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J
2015-05-28
Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified.
NASA Astrophysics Data System (ADS)
Ferrer, Gabriel; Sáez, Esteban; Ledezma, Christian
2018-01-01
Copper production is an essential component of the Chilean economy. During the extraction of copper, large quantities of waste materials (tailings) are produced, which are typically stored in large tailings ponds. Thickened Tailings Disposal (TTD) is an alternative to conventional tailings ponds. In TTD, a considerable amount of water is extracted from the tailings before their deposition. Once a thickened tailings layer is deposited, it loses water and shrinks, forming a relatively regular structure of tailings blocks with vertical cracks in between, which are then filled with "fresh" tailings once the next layer is deposited. The dynamic response of a representative column of this complex structure, made of tailings blocks with softer material in between, was analyzed using a periodic half-space finite element model. The tailings' behavior was modeled using an elasto-plastic multi-yield constitutive model, and Chilean earthquake records were used for the seismic analyses. Special attention was given to the liquefaction potential evaluation of TTD.
Modeling haplotype block variation using Markov chains.
Greenspan, G; Geiger, D
2006-04-01
Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity.
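To see why the two background models diverge under strong linkage disequilibrium, consider a toy two-block example (our own numbers): the Markov model carries the inter-block dependence through a transition matrix, while the independence model multiplies marginals and misprices the common haplotype combinations:

```python
import numpy as np

# Toy illustration: two haplotype blocks, each with two common haplotypes.
# p_a holds marginal haplotype frequencies for block A; T is the transition
# matrix of the Markov chain over blocks (rows: block-A hap, cols: block-B hap).
p_a = np.array([0.6, 0.4])
T = np.array([[0.9, 0.1],      # strong LD: A0 is usually followed by B0
              [0.2, 0.8]])
p_b = p_a @ T                  # implied marginal frequencies for block B

def prob_independent(i, j):
    return p_a[i] * p_b[j]     # blocks treated as independent

def prob_markov(i, j):
    return p_a[i] * T[i, j]    # dependence carried by the chain

for i in (0, 1):
    for j in (0, 1):
        print(i, j, round(prob_independent(i, j), 3), round(prob_markov(i, j), 3))
```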
Perspective: Evolutionary design of granular media and block copolymer patterns
NASA Astrophysics Data System (ADS)
Jaeger, Heinrich M.; de Pablo, Juan J.
2016-05-01
The creation of new materials "by design" is a process that starts from desired materials properties and proceeds to identify requirements for the constituent components. Such a process is challenging because it inverts the typical modeling approach, which starts from given micro-level components to predict macro-level properties. We describe how to tackle this inverse problem using concepts from evolutionary computation. These concepts have widespread applicability and open up new opportunities for design as well as discovery. Here we apply them to design tasks involving two very different classes of soft materials: shape-optimized granular media and nanopatterned block copolymer thin films.
Influence of gravity on deformation of blocks in Earth's crust
NASA Astrophysics Data System (ADS)
Tataurova, A. A.; Stefanov, Yu. P.; Bakeev, R. A.
2017-12-01
The article presents the results of numerical calculations of deformation for a fragment of an Earth's crust model under the influence of gravitational force. It is shown that plastic deformation in low-strength blocks changes the stress-strain state of the medium and produces a surface deflection hundreds of meters deep. The deflection is defined by the properties of the medium, its extent, and the conditions at the lateral boundaries. The order in which loads are applied beyond the elastic limit affects the development of deformation, which should be taken into account when formulating problems and performing numerical simulations. The problem has been solved using a two-dimensional elastoplastic approach.
Inferring Recent Demography from Isolation by Distance of Long Shared Sequence Blocks
Ringbauer, Harald; Coop, Graham
2017-01-01
Recently it has become feasible to detect long blocks of nearly identical sequence shared between pairs of genomes. These identity-by-descent (IBD) blocks are direct traces of recent coalescence events and, as such, contain ample signal to infer recent demography. Here, we examine sharing of such blocks in two-dimensional populations with local migration. Using a diffusion approximation to trace genetic ancestry, we derive analytical formulas for patterns of isolation by distance of IBD blocks, which can also incorporate recent population density changes. We introduce an inference scheme that uses a composite-likelihood approach to fit these formulas. We then extensively evaluate our theory and inference method on a range of scenarios using simulated data. We first validate the diffusion approximation by showing that the theoretical results closely match the simulated block-sharing patterns. We then demonstrate that our inference scheme can accurately and robustly infer dispersal rate and effective density, as well as bounds on recent dynamics of population density. To demonstrate an application, we use our estimation scheme to explore the fit of a diffusion model to Eastern European samples in the Population Reference Sample data set. We show that ancestry diffusing with a rate of σ ≈ 50-100 km/gen during the last centuries, combined with accelerating population growth, can explain the observed exponential decay of block sharing with increasing pairwise sample distance. PMID:28108588
NASA Astrophysics Data System (ADS)
Gün, E.; Gogus, O.; Pysklywec, R.; Topuz, G.; Bodur, O. F.
2017-12-01
The Tethyan belt in the eastern Mediterranean region is characterized by the accretion of several micro-continental blocks (e.g. the Anatolide-Tauride, Sakarya and Istanbul terranes). The accretion of a micro-continental block to an active continental margin and the subsequent initiation of a new subduction zone are of crucial importance in understanding the geodynamic evolution of the region. Numerical geodynamic experiments are designed to investigate how micro-continental blocks in an ocean-continent subduction system develop such subduction initiation, back-arc extension, surface uplift and ophiolite emplacement in the eastern Mediterranean since the Late Cretaceous. In a series of experiments, we test various sizes of micro-continental blocks (ranging from 50 to 300 km), different rheological properties (e.g. dry-wet olivine mantle) and imposed plate convergence velocities (0 to 4 cm/year). As a prime present-day analogue of micro-continental block collision-accretion, model predictions are compared against the collision between the Eratosthenes Seamount and Cyprus. Preliminary results show that slab break-off occurs directly after the collision when the plate convergence velocities are less than 2 cm/yr and the mantle lithosphere of the continental block has a viscoplastic rheology. On the other hand, there is no relationship between convergence rate and the break-off event when the lithospheric mantle rheology is chosen to be plastic. Furthermore, the micro-continental block undergoes considerable extension before continental collision due to the slab pull force, if a viscoplastic rheology is assumed for the mantle lithosphere.
Sun, Hongwei; Li, Guiying; Nie, Xin; Shi, Huixian; Wong, Po-Keung; Zhao, Huijun; An, Taicheng
2014-08-19
A systematic approach was developed to understand, in depth, the mechanisms involved in the inactivation of bacterial cells using photoelectrocatalytic (PEC) processes, with Escherichia coli K-12 as the model microorganism. The bacterial cells were found to be inactivated and decomposed primarily due to attack by photogenerated H2O2. Extracellular reactive oxygen species (ROSs), such as H2O2, may penetrate into the bacterial cell and cause dramatically elevated intracellular ROS levels, which would overwhelm the antioxidative capacity of bacterial protective enzymes such as superoxide dismutase and catalase. The activities of these two enzymes were found to decrease due to ROS attack during PEC inactivation. Bacterial cell wall damage was then observed, including loss of cell membrane integrity and increased permeability, followed by the decomposition of the cell envelope (demonstrated by scanning electron microscope images). One of the bacterial building blocks, protein, was found to be oxidatively damaged by the ROS attack as well. Leakage of cytoplasm and biomolecules (bacterial building blocks such as proteins and nucleic acids) was evident during the prolonged PEC inactivation process. The leaked cytoplasmic substances and cell debris could be further degraded and, ultimately, mineralized with prolonged PEC treatment.
Sinus floor elevation with a crestal approach using a press-fit bone block: a case series.
Isidori, M; Genty, C; David-Tchouda, S; Fortin, T
2015-09-01
This prospective study aimed to provide detailed clinical information on a sinus augmentation procedure, i.e., transcrestal sinus floor elevation with a bone block using the press-fit technique. A bone block is harvested with a trephine burr to obtain a cylinder. This block is inserted into the antrum via a crestal approach after creation of a circular crestal window. Thirty-three patients were treated with a fixed prosthesis supported by implants placed on 70 cylindrical bone blocks. The mean bone augmentation was 6.08±2.87 mm, ranging from 0 to 12.7 mm. Only one graft failed before implant placement. During surgery and the subsequent observation period, no complications were recorded, one implant was lost, and no infection or inflammation was observed. This proof-of-concept study suggests that the use of a bone block inserted into the sinus cavity via a crestal approach can be an alternative to the sinus lift procedure with the creation of a lateral window. It reduces the duration of surgery, cost of treatment, and overall discomfort. Copyright © 2015. Published by Elsevier Ltd.
Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.
Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K
2014-11-26
The ability to accurately develop subject-specific, input-causation models for blood glucose concentration (BGC) over large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, providing an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting the critical requirements of effective, long-term FFC.
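As a generic illustration of the block-oriented structure (not the paper's subject-specific model; the first-order dynamics, gains and disturbance signal below are invented), a Wiener model passes each input through a linear dynamic block and then a static output nonlinearity:

```python
import numpy as np

def wiener_response(u, a, b, nonlin):
    """Minimal Wiener block-oriented model: linear dynamics -> static nonlinearity.

    x[k] = a*x[k-1] + b*u[k]   (first-order linear dynamic block)
    y[k] = nonlin(x[k])        (static output nonlinearity)
    Illustrative only; a subject-specific model would fit a, b and the
    nonlinearity per patient and per input channel.
    """
    x, y = 0.0, []
    for uk in u:
        x = a * x + b * uk
        y.append(nonlin(x))
    return np.array(y)

meal_input = np.r_[np.zeros(5), np.ones(20), np.zeros(25)]   # hypothetical carb signal
bgc = 100 + 60 * wiener_response(meal_input, a=0.9, b=0.1, nonlin=np.tanh)
```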
On the role of horizontal displacements in the exhumation of high pressure metamorphic rocks
NASA Astrophysics Data System (ADS)
Brun, J.-P.; Tirel, C.; Philippon, M.; Burov, E.; Faccenna, C.; Gueydan, F.; Lebedev, S.
2012-04-01
High pressure metamorphic rocks exposed in the core of many mountain belts correspond to various types of upper crustal materials that have been buried to mantle depths and, soon after, brought back to the surface at mean displacement rates of up to a few cm/yr, comparable to those of plate boundaries. The vertical component of the HP rock exhumation velocity is commonly well constrained by pressure estimates from petrology and by geochronological data, whereas the horizontal component generally remains difficult or impossible to estimate. Consequently, most available models, if not all, attempt to simulate exhumation with a minimal horizontal component of displacement. Such models require that the viscosity of HP rocks is low and/or the erosion rate is large, i.e. at least equal to the rate of exhumation. However, in some regions like the Aegean, where the exhumation of blueschists and eclogites is driven by slab rollback, it can be shown that the horizontal component of exhumation-related displacement, obtained from map-view restoration, is 5 to 7 times larger than the vertical one deduced from metamorphic pressure estimates. Using finite element models performed with FLAMAR, we show that such a situation simply results from the subduction of small continental blocks (< 500 km) that stimulate subduction rollback. The continental block is dragged downward and sheared off the downgoing mantle slab by buoyancy forces. Exhumation of the crustal block occurs through a one-step caterpillar-type walk, with the block's tail slipping along a basal décollement, approaching the head and making a large buckle, which then unrolls at the surface as soon as the entire block is delaminated. Finally, the crustal block emplaces at the surface in the space created by trench retreat. This process of exhumation requires neither rheological weakening of HP rocks nor high rates of erosion.
Evaluating small-body landing hazards due to blocks
NASA Astrophysics Data System (ADS)
Ernst, C.; Rodgers, D.; Barnouin, O.; Murchie, S.; Chabot, N.
2014-07-01
Introduction: Landed missions represent a vital stage of spacecraft exploration of planetary bodies. Landed science allows for a wide variety of measurements essential to unraveling the origin and evolution of a body that are not possible remotely, including but not limited to compositional measurements, microscopic grain characterization, and the physical properties of the regolith. To date, two spacecraft have performed soft landings on the surface of a small body. In 2001, the Near Earth Asteroid Rendezvous (NEAR) mission performed a controlled descent and landing on (433) Eros following the completion of its mission [1]; in 2005, the Hayabusa spacecraft performed two touch-and-go maneuvers at (25143) Itokawa [2]. Both landings were preceded by rendezvous spacecraft reconnaissance, which enabled selection of a safe landing site. Three current missions have plans to land on small bodies (Rosetta, Hayabusa 2, and OSIRIS-REx); several other mission concepts also include small-body landings. Small-body landers need to land at sites having slopes and block abundances within spacecraft design limits. Due to the small scale of the potential hazards, it can be difficult or impossible to fully characterize a landing surface before the arrival of the spacecraft at the body. Although a rendezvous mission phase can provide global reconnaissance from which a landing site can be chosen, reasonable a priori assurance that a safe landing site exists is needed to validate the design approach for the spacecraft. Method: Many robotic spacecraft have landed safely on the Moon and Mars. Images of these landing sites, as well as more recent, extremely high-resolution orbital datasets, have enabled the comparison of orbital block observations to the smaller blocks that pose hazards to landers. Analyses of the Surveyor [3], Viking 1 and 2, Mars Pathfinder, Phoenix, Spirit, Opportunity, and Curiosity landing sites [4-8] have indicated that for a reasonable difference in size (a factor of several to ten), the size-frequency distribution of blocks can be modeled, allowing extrapolation from large block distributions to estimate small block densities. From that estimate, the probability of a lander encountering hazardous blocks can be calculated for a given lander design. Such calculations are used routinely to vet candidate sites for Mars landers [5-8]. Application to Small Bodies: To determine whether a similar approach will work for small bodies, we must determine if the large and small block populations can be linked. To do so, we analyze the comprehensive block datasets for the intermediate-sized Eros [9,10] and the small Itokawa [11,12]. Global and local block size-frequency distributions for Eros and Itokawa have power-law slopes on the order of -3 and match reasonably well between larger block sizes (from lower-resolution images) and smaller block sizes (from higher-resolution images). Although absolute block densities differ regionally on each asteroid, the slopes match reasonably well between Itokawa and Eros, with the geologic implications of this result discussed in [10]. For Eros and Itokawa, the approach of extending the size-frequency distribution from large, tens-of-meter-sized blocks down to small, tens-of-centimeter-sized blocks using a power-law fit to the large population yields reasonable estimates of small block populations.
It is important to note that geologic context matters for the absolute block density: if the global counts include multiple geologic settings, they will not directly extend to local areas containing only one setting [10]. A small number of high-resolution images of Phobos are sufficient for measuring blocks. These images are concentrated in the area outside of Stickney crater, which is thought to be the source of most of the observed blocks [13]. Block counts by Thomas et al. [13] suggest a power-law slope similar to those of Eros [9] and Itokawa global counts, with the absolute density of blocks similar to that of global Eros. Because blocks tend to be more numerous proximal to large, young craters (e.g., Stickney on Phobos, Shoemaker on Eros), the block density across most of Phobos is likely to be lower than that observed in the available high-resolution images. We suggest that a power-law extrapolation of Eros or Phobos large-block distributions provides upper limits for assessing the block landing hazards faced by a Phobos lander.
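A back-of-the-envelope sketch of the extrapolation argument, with invented counts and a slope of about -3 as reported for Eros and Itokawa: normalize a cumulative power law on the observed large blocks, extrapolate to the hazardous size, and treat encounters within the lander footprint as Poisson:

```python
import numpy as np

def hazard_probability(n_large, d_large, d_hazard, area_counted, lander_area,
                       slope=-3.0):
    """Extrapolate a cumulative block size-frequency power law
    N(>d) = k * d**slope from counted large blocks down to the lander-hazardous
    size, then estimate the chance that the lander footprint contains at least
    one hazardous block (Poisson). All inputs here are hypothetical.
    """
    k = (n_large / area_counted) / d_large ** slope   # areal density normalization
    density_hazard = k * d_hazard ** slope            # blocks per m^2 above d_hazard
    expected = density_hazard * lander_area           # expected blocks in footprint
    return 1.0 - np.exp(-expected)

# e.g. 200 blocks larger than 5 m counted over 10 km^2, with 0.3 m blocks
# endangering a 4 m^2 footprint -> encounter probability of roughly 0.3:
print(hazard_probability(200, 5.0, 0.3, 1e7, 4.0))
```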
Simulation requirements for the Large Deployable Reflector (LDR)
NASA Technical Reports Server (NTRS)
Soosaar, K.
1984-01-01
Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often transfer-function-type equations. However, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths characterized by multi-input, multi-output features. Frequency domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would require a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.
Variation block-based genomics method for crop plants.
Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon
2014-06-15
In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks that arose through crossing and selection. Accordingly, recombination block-based genomics analysis can be an effective approach for screening target loci for agricultural traits. We propose the variation block method, a three-step process for recombination block detection and comparison, sketched below. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
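A toy sketch of the block-splitting step ('R'/'V' is a hypothetical per-locus encoding of reference-matching versus variant states; real data would come from short-read variant calls as described above):

```python
def variation_blocks(cultivars):
    """Sketch of the variation block idea: split the genome wherever any
    cultivar's variation pattern (vs. the reference) switches state.

    `cultivars` maps cultivar name -> string of per-locus variation states
    ('R' = matches reference, 'V' = variant). Every position where any
    cultivar changes state is treated as an assumed recombination site;
    the union of all sites splits the genome into variation blocks.
    """
    length = len(next(iter(cultivars.values())))
    cuts = {0, length}
    for pattern in cultivars.values():
        cuts.update(i for i in range(1, length) if pattern[i] != pattern[i - 1])
    bounds = sorted(cuts)
    return list(zip(bounds[:-1], bounds[1:]))

blocks = variation_blocks({"cv1": "RRRRVVVV", "cv2": "RRVVVVRR"})
print(blocks)   # [(0, 2), (2, 4), (4, 6), (6, 8)]
```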
Object-oriented integrated approach for the design of scalable ECG systems.
Boskovic, Dusanka; Besic, Ingmar; Avdagic, Zikrija
2009-01-01
The paper presents the implementation of an Object-Oriented (OO) integrated approach to the design of scalable Electro-Cardio-Graph (ECG) systems. The purpose of this methodology is to preserve real-world structure and relations, with the aim of minimizing information loss during the modeling process, especially for Real-Time (RT) systems. We report on a case study of a design that integrates OO and RT methods with the Unified Modeling Language (UML) standard notation. OO methods identify objects in the real-world domain and use them as fundamental building blocks for the software system. The experience gained from the strongly defined semantics of the object model is discussed and related problems are analyzed.
CONCRETE BLOCKS' ADVERSE EFFECTS ON INDOOR AIR AND RECOMMENDED SOLUTIONS
Air infiltration through highly permeable concrete blocks can allow entry of various serious indoor air pollutants. An easy approach to avoiding these pollutants is to select a less air-permeable concrete block. Tests show that air permeability of concrete blocks can vary by a fa...
Wu, Chi; Xie, Zuowei; Zhang, Guangzhao; Zi, Guofu; Tu, Yingfeng; Yang, Yali; Cai, Ping; Nie, Ting
2002-12-07
A combination of polymer physics and synthetic chemistry has enabled us to develop self-assembly assisted polymerization (SAAP), leading to the preparation of long multi-block copolymers with an ordered chain sequence and controllable block lengths.
An outline of graphical Markov models in dentistry.
Helfenstein, U; Steiner, M; Menghini, G
1999-12-01
In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.
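One building block of fitting such a graph, in the Gaussian case, is testing whether two variables remain associated once the variables from earlier blocks are conditioned on; a minimal sketch with simulated dental-style variables (names and effect sizes invented):

```python
import numpy as np
from scipy import stats

def partial_corr_test(x, y, Z):
    """Test whether x and y are conditionally associated given covariates Z,
    via correlation of regression residuals; a building block for deciding
    which edges to draw in a conditional independence graph (Gaussian case).
    """
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)   # (partial correlation, p-value)

rng = np.random.default_rng(1)
age = rng.normal(size=500)                        # block 2 variable
calculus = 0.8 * age + rng.normal(size=500)       # block 3, driven by age
caries = 0.7 * calculus + rng.normal(size=500)    # block 4, driven by calculus
# age -> caries runs only through calculus, so the partial correlation is weak:
print(partial_corr_test(age, caries, calculus[:, None]))
```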
Evaluating atmospheric blocking in the global climate model EC-Earth
NASA Astrophysics Data System (ADS)
Hartung, Kerstin; Hense, Andreas; Kjellström, Erik
2013-04-01
Atmospheric blocking is a phenomenon of the midlatitude troposphere which plays an important role in climate variability. A correct representation of blocking in climate models is therefore necessary, especially for evaluating the results of climate projections. In my master's thesis a validation of blocking in the coupled climate model EC-Earth is performed. Blocking events are detected based on the Tibaldi-Molteni index (a minimal version is sketched below). At first, a comparison with the reanalysis dataset ERA-Interim is conducted. The blocking frequency as a function of longitude shows a small general underestimation of blocking in the model, a well-known problem. Scaife et al. (2011) proposed the correction of the model bias as a way to solve this problem. However, applying the correction to the higher-resolution EC-Earth model does not yield any improvement. Composite maps show a link between blocking events and surface variables; one example is the formation of a positive surface temperature anomaly north and a negative anomaly south of the blocking anticyclone. In winter the surface temperature in EC-Earth is reproduced quite well, but in summer a cold bias over the inner-European ocean is present. Using generalized linear models (GLMs) I want to study the connection between regional blocking and global atmospheric variables further. GLMs have the advantage of being applicable to non-Gaussian variables; therefore the blocking index at each longitude, which is Bernoulli distributed, can be analysed statistically with GLMs. I applied a logistic regression between the blocking index and the geopotential height at 500 hPa to study the teleconnection of blocking events at midlatitudes with the global geopotential height. GLMs also offer the possibility of quantifying the connections shown in composite maps. The implementation of the logistic regression can even be expanded to a search for trends in blocking frequency, for example in the scenario simulations.
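A minimal version of the detection step, assuming a regular latitude grid and omitting the Δ-offsets of the reference latitudes that the full Tibaldi-Molteni index scans over:

```python
import numpy as np

def tibaldi_molteni_blocked(z500, lats, lat_n=80.0, lat_0=60.0, lat_s=40.0):
    """One-time-step Tibaldi-Molteni style blocking detection, a sketch.

    z500 : array (n_lat, n_lon) of 500 hPa geopotential height [m].
    lats : latitudes corresponding to the rows of z500.
    A longitude is flagged as blocked when the southern height gradient
    reverses (GHGS > 0) and the northern gradient is strongly negative
    (GHGN < -10 m per degree latitude).
    """
    zn = z500[np.argmin(np.abs(lats - lat_n))]
    z0 = z500[np.argmin(np.abs(lats - lat_0))]
    zs = z500[np.argmin(np.abs(lats - lat_s))]
    ghgs = (z0 - zs) / (lat_0 - lat_s)
    ghgn = (zn - z0) / (lat_n - lat_0)
    return (ghgs > 0.0) & (ghgn < -10.0)   # boolean per longitude
```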
Deformation Styles Along the Southern Alaska Margin Constrained by GPS
NASA Astrophysics Data System (ADS)
Elliott, J.; Freymueller, J. T.; Larsen, C. F.
2009-12-01
The present-day deformation observed in southcentral and southeast Alaska and the adjacent region of Canada is controlled by two main factors: ~50 mm/yr of relative motion between the Pacific plate and North America, and the Yakutat block's collision with and accretion to southern Alaska. Over 45 mm/yr of NW-SE directed convergence from the collision is currently accommodated within the St. Elias orogen. The Fairweather, St. Elias, and Chugach ranges show the spectacular consequences of the relative tectonic motions, but the details of the plate interactions have not been well understood. Here we present GPS data from a network of over 170 campaign sites across the region. We use the data to constrain block models and forward models that characterize the nature and extent of the tectonic deformation along the Pacific-Yakutat-North America boundary. Tectonics in southeast Alaska can be described by block motion, with the Pacific plate bounding the region to the west. The fastest block motions occur along the coastal regions. The Yakutat block has a velocity of 51 ± 2.7 mm/yr towards N22 ± 2.5 deg W relative to North America. This velocity has a magnitude almost identical to that of the Pacific plate, but the azimuth is more westerly. The northeastern edge of the Yakutat block is deforming, represented in our model by two small blocks outboard of the Fairweather fault. East of that fault, the Fairweather block rotates clockwise relative to North America, resulting in transpression along the Duke River and Eastern Denali faults. There is a clear transfer of strain from the coastal region hundreds of kilometers eastward into the Northern Cordillera block, confirming earlier suggestions that the effects of the Yakutat collision are far-reaching along its eastern margin. In contrast, deformation along the leading edge of the Yakutat collision is relatively narrowly focused within the southern half of the St. Elias orogen. The current deformation front of the Yakutat block with southern Alaska is in the vicinity of Icy Bay, where strain rates approach -1 microstrain/yr. The Malaspina thrust likely forms the northern boundary of the Yakutat block. Between Icy Bay and the Mt. St. Elias area, the tectonics cannot easily be described by block motion. The GPS data require the relative convergence to be partitioned onto multiple N-NW dipping thrust faults, resulting in a 50-70 km wide zone of deformation. This zone continues around the western side of Icy Bay into the Yakataga fold and thrust belt. North of the Mt. St. Elias area and the Bagley ice valley, roughly 100 km from the deformation front, GPS velocities are consistent with predictions of the motion of the southern Alaska block.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiemann, Reinhard; Demory, Marie-Estelle; Shaffrey, Len C.
2016-12-19
The aim of this study is to investigate if the representation of Northern Hemisphere blocking is sensitive to resolution in current-generation atmospheric global circulation models (AGCMs). An evaluation is thus conducted of how well atmospheric blocking is represented in four AGCMs whose horizontal resolution is increased from a grid spacing of more than 100 km to about 25 km. It is shown that Euro-Atlantic blocking is simulated overall more credibly at higher resolution (i.e., in better agreement with a 50-yr reference blocking climatology created from the reanalyses ERA-40 and ERA-Interim). The improvement seen with resolution depends on the season and to some extent on the model considered. Euro-Atlantic blocking is simulated more realistically at higher resolution in winter, spring, and autumn, and robustly so across the model ensemble. The improvement in spring is larger than that in winter and autumn. Summer blocking is found to be better simulated at higher resolution by one model only, with little change seen in the other three models. The representation of Pacific blocking is not found to systematically depend on resolution. Despite the improvements seen with resolution, the 25-km models still exhibit large biases in Euro-Atlantic blocking. For example, three of the four 25-km models underestimate winter northern European blocking frequency by about one-third. The resolution sensitivity and biases in the simulated blocking are shown to be in part associated with the mean-state biases in the models' midlatitude circulation.
SCA with rotation to distinguish common and distinctive information in linked data.
Schouteden, Martijn; Van Deun, Katrijn; Pattyn, Sven; Van Mechelen, Iven
2013-09-01
Often data are collected that consist of different blocks that all contain information about the same entities (e.g., items, persons, or situations). In order to unveil both information that is common to all data blocks and information that is distinctive for one or a few of them, an integrated analysis of the whole of all data blocks may be most useful. Interesting classes of methods for such an approach are simultaneous-component and multigroup factor analysis methods. These methods yield dimensions underlying the data at hand. Unfortunately, however, in the results from such analyses, common and distinctive types of information are mixed up. This article proposes a novel method to disentangle the two kinds of information, by making use of the rotational freedom of component and factor models. We illustrate this method with data from a cross-cultural study of emotions.
Influence of Chirality in Ordered Block Copolymer Phases
NASA Astrophysics Data System (ADS)
Prasad, Ishan; Grason, Gregory
2015-03-01
Block copolymers are known to assemble into a rich spectrum of ordered phases, with many complex phases driven by asymmetry in the copolymer architecture. Despite decades of study, the influence of intrinsic chirality on the equilibrium mesophase assembly of block copolymers is not well understood and largely unexplored. Self-consistent field theory has played a major role in the prediction of physical properties of polymeric systems. Only recently has a polar orientational self-consistent field (oSCF) approach been adopted to model chiral BCPs having a thermodynamic preference for cholesteric ordering in chiral segments. We implement oSCF theory for chiral nematic copolymers, where segment orientations are characterized by quadrupolar chiral interactions, and focus our study on the thermodynamic stability of bi-continuous network morphologies and the transfer of molecular chirality to the mesoscale chirality of the networks. The unique photonic properties observed in butterfly wings have been attributed to the presence of chiral single-gyroid networks, which has made them an attractive target for chiral metamaterial design.
Acoustic buffeting by infrasound in a low vibration facility.
MacLeod, B P; Hoffman, J E; Burke, S A; Bonn, D A
2016-09-01
Measurement instruments and fabrication tools with spatial resolution on the atomic scale require facilities that mitigate the impact of vibration sources in the environment. One approach to protection from vibration in a building's foundation is to place the instrument on a massive inertia block supported on pneumatic isolators. This raises the questions of whether a massive floating block is susceptible to acoustic forces, and how to mitigate the effects of any such acoustic buffeting. Here this is investigated with quantitative measurements of vibrations and sound pressure, together with finite element modeling. It is shown that a particular concern, even in a facility with multiple acoustic enclosures, is the excitation of the lowest fundamental acoustic modes of the room by infrasound in the low tens of Hz range, and the efficient coupling of the fundamental room modes to a large inertia block centered in the room.
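The rigid-wall room-mode estimate behind that statement is easy to reproduce; for a room a few metres on a side the lowest modes indeed land in the low tens of Hz (the dimensions below are hypothetical):

```python
from itertools import product

C_SOUND = 343.0   # speed of sound in air, m/s, near room temperature

def room_modes(lx, ly, lz, n_max=2):
    """Rigid-wall rectangular room mode frequencies (Rayleigh formula),
    f = (c/2) * sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2).
    Shows why a several-metre room has fundamentals in the low tens
    of Hz, right where infrasound can drive them.
    """
    modes = []
    for nx, ny, nz in product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue   # skip the trivial (0,0,0) "mode"
        f = 0.5 * C_SOUND * ((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2) ** 0.5
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

for f, mode in room_modes(8.0, 6.0, 3.5)[:3]:
    print(f"{f:5.1f} Hz  {mode}")   # lowest axial modes, roughly 21 and 29 Hz
```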
Initiation and blocking of the action potential in an axon in weak ultrasonic or microwave fields
NASA Astrophysics Data System (ADS)
Shneider, M. N.; Pekker, M.
2014-05-01
In this paper, we analyze the effect of the redistribution of transmembrane ion channels in an axon caused by longitudinal acoustic vibrations of the membrane. These oscillations can be excited by an external source of ultrasound or by weak microwave radiation interacting with the charges sitting on the surface of the lipid membrane. It is shown, using the Hodgkin-Huxley model of the axon, that the density redistribution of transmembrane sodium channels may reduce the threshold of the action potential, up to the point of its spontaneous initiation. When the redistribution of sodium channels in the membrane is significant, rarefaction zones of the transmembrane channel density form, blocking the propagation of the action potential. Blocking the propagation of the action potential along the axon is shown to cause anesthesia, using the squid axon as an example. Various approaches to experimental observation of the effects considered in this paper are discussed.
Dynamic texture recognition using local binary patterns with an application to facial expressions.
Zhao, Guoying; Pietikäinen, Matti
2007-06-01
Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
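For readers new to the operator, a minimal single-plane LBP code map in NumPy; LBP-TOP applies the same idea on the XY, XT and YT planes of the video volume and concatenates the three histograms:

```python
import numpy as np

def lbp_plane(img):
    """Basic 8-neighbour local binary pattern on one plane, a sketch.

    Each pixel is encoded by thresholding its eight neighbours against
    the centre value and packing the results into an 8-bit code; the
    histogram of codes is the texture descriptor.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

img = np.random.default_rng(0).integers(0, 256, (64, 64))
hist = np.bincount(lbp_plane(img).ravel(), minlength=256)  # texture descriptor
```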
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates against the census populations of those areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to that for sub-census-block areas of the same size relative to the associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors: for example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
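A stripped-down sketch of volume-proportional allocation (the published model also folds in the three housing statistics, which are omitted here; names and numbers are invented):

```python
def estimate_population(block_pop, building_volumes, target_ids):
    """Volume-proportional sub-block population estimate, simplified.

    Allocates a census block's population to sub-block areas purely in
    proportion to their residential building volume, which conveys the
    basic idea of the deterministic model described above.
    """
    total = sum(building_volumes.values())
    return {i: block_pop * building_volumes[i] / total for i in target_ids}

volumes = {"parcel_a": 12000.0, "parcel_b": 4000.0, "parcel_c": 8000.0}  # m^3
print(estimate_population(240, volumes, ["parcel_a", "parcel_b"]))
# {'parcel_a': 120.0, 'parcel_b': 40.0}
```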
Boonsiriseth, K; Sirintawat, N; Arunakul, K; Wongsirichat, N
2013-07-01
This study aimed to evaluate the efficacy of anesthesia obtained with a novel injection approach for inferior alveolar nerve block compared with the conventional injection approach. 40 patients in good health randomly received each of the two injection approaches of local anesthetic, one on each side of the mandible, at two separate appointments. A sharp probe and an electric pulp tester were used to test anesthesia before injection, after injection when the patients' sensation changed, and 5 min after injection. With the conventional inferior alveolar nerve block, positive aspiration and intravascular injection occurred in 5% of cases and neurovascular bundle injection in 7.5%, whereas neither occurred with the novel injection approach. A visual analog scale (VAS) pain assessment was used during injection and surgery. The significance level used in the statistical analysis was p<0.05. Comparing the novel injection approach with the conventional one, no significant difference was found in subjective onset, objective onset, operation time, duration of anesthesia or VAS pain score during the operation, but the VAS pain scores during injection differed significantly. The inferior alveolar nerve block by the novel injection approach provided adequate anesthesia and caused less pain and greater safety during injection. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Naming Block Structures: A Multimodal Approach
ERIC Educational Resources Information Center
Cohen, Lynn; Uhry, Joanna
2011-01-01
This study describes symbolic representation in block play in a culturally diverse suburban preschool classroom. Block play is "multimodal" and can allow children to experiment with materials to represent the world in many forms of literacy. Combined qualitative and quantitative data from seventy-seven block structures were collected and analyzed.…
Atmospheric flow over two-dimensional bluff surface obstructions
NASA Technical Reports Server (NTRS)
Bitte, J.; Frost, W.
1976-01-01
The phenomenon of atmospheric flow over a two-dimensional surface obstruction, such as a building (modeled as a rectangular block, a fence or a forward-facing step), is analyzed by three methods: (1) an inviscid free-streamline approach, (2) a turbulent boundary layer approach using an eddy viscosity turbulence model and a horizontal pressure gradient determined by the inviscid model, and (3) an approach using the full Navier-Stokes equations with three turbulence models, i.e., an eddy viscosity model, a turbulence kinetic-energy model and a two-equation model with an additional transport equation for the turbulence length scale. A comparison of the performance of the different turbulence models is given, indicating that only the two-equation model adequately accounts for the convective character of turbulence. Turbulent flow property predictions obtained from the turbulence kinetic-energy model with a prescribed length scale are only marginally better than those obtained from the eddy viscosity model. A parametric study includes the effects of varying the characteristic parameters of the assumed logarithmic approach velocity profile. For the case of the forward-facing step, it is shown that in the downstream flow region an increase of the surface roughness gives rise to higher turbulence levels in the shear layer originating from the step corner.
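The logarithmic approach profile referred to above, with friction velocity and roughness length as its characteristic parameters, is a one-liner (the parameter values below are illustrative):

```python
import numpy as np

KAPPA = 0.41   # von Karman constant

def log_profile(z, u_star, z0):
    """Logarithmic approach velocity profile u(z) = (u*/kappa) * ln(z/z0);
    friction velocity u* and roughness length z0 are the characteristic
    parameters varied in the study's parametric analysis."""
    return (u_star / KAPPA) * np.log(z / z0)

z = np.array([1.0, 2.0, 5.0, 10.0, 20.0])         # heights in metres
print(log_profile(z, u_star=0.4, z0=0.05))        # wind speed in m/s for z0 = 5 cm
```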
Main-chain supramolecular block copolymers.
Yang, Si Kyung; Ambade, Ashootosh V; Weck, Marcus
2011-01-01
Block copolymers are key building blocks for a variety of applications ranging from electronic devices to drug delivery. The material properties of block copolymers can be tuned and potentially improved by introducing noncovalent interactions in place of covalent linkages between polymeric blocks resulting in the formation of supramolecular block copolymers. Such materials combine the microphase separation behavior inherent to block copolymers with the responsiveness of supramolecular materials thereby affording dynamic and reversible materials. This tutorial review covers recent advances in main-chain supramolecular block copolymers and describes the design principles, synthetic approaches, advantages, and potential applications.
Integrating Identity Management With Federated Healthcare Data Models
NASA Astrophysics Data System (ADS)
Hu, Jun; Peyton, Liam
In order to manage performance and provide integrated services, health care data needs to be linked and aggregated across data sources from different organizations. The Internet and secure B2B networks offer the possibility of providing near real-time integration. However, there are three major stumbling blocks. One is to standardize and agree upon a common data model across organizations. The second is to match identities between different locations in order to link and aggregate records. The third is to protect identity and ensure compliance with privacy laws. In this paper, we analyze three main approaches to the problem and use a healthcare scenario to illustrate how each one addresses different aspects of the problem while failing to address others. We then present a systematic framework in which the different approaches can be flexibly combined for a more comprehensive approach to integrate identity management with federated healthcare data models.
A novel approach for fire recognition using hybrid features and manifold learning-based classifier
NASA Astrophysics Data System (ADS)
Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng
2018-03-01
Although image/video based fire recognition has received growing attention, an efficient and robust fire detection strategy is rarely explored. In this paper, we propose a novel approach to automatically identify the flame or smoke regions in an image. It is composed of three stages: (1) block processing is applied to divide an image into several nonoverlapping image blocks, and these image blocks are identified as suspicious fire regions or not by using two color models and a color histogram-based similarity matching method in the HSV color space, (2) considering that flame and smoke regions have distinctive visual characteristics, two kinds of image features are extracted for fire recognition: local features obtained with the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features extracted with the Gray Level Co-occurrence Matrices (GLCM) and Wavelet-based Analysis (WA) methods, and (3) a manifold learning-based classifier is constructed from two image manifolds, designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, and the extracted hybrid features are used as input feature vectors to train the classifier, which decides whether an image is a fire image or not. Experiments and comparative analyses with four approaches are conducted on the collected image sets. The results show that the proposed approach is superior to the other ones, detecting fire with a high recognition accuracy and a low error rate.
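The GLCM texture-feature stage is straightforward to reproduce with standard tooling. The sketch below computes direction-averaged GLCM properties for one grayscale image block using scikit-image's graycomatrix/graycoprops (spelled greycomatrix in older releases); the block size and property set are assumptions, not the paper's exact configuration.

```python
# Sketch of the GLCM texture-feature stage, using scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_block):
    """Texture features for one image block (uint8 grayscale array)."""
    glcm = graycomatrix(gray_block,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the four directions.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

block = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in block
print(glcm_features(block))
```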
Development of an ultrasound-guided technique for pudendal nerve block in cat cadavers.
Adami, Chiara; Angeli, Giovanni; Haenssgen, Kati; Stoffel, Michael H; Spadavecchia, Claudia
2013-10-01
The objective of this prospective experimental cadaveric study was to develop an ultrasound-guided technique to perform an anaesthetic pudendal nerve block in male cats. Fifteen fresh cadavers were used for this trial. A detailed anatomical dissection was performed on one cat in order to scrutinise the pudendal nerve and its ramifications. In a second step, the cadavers of six cats were used to test three different ultrasonographic approaches to the pudendal nerve: the deep dorso-lateral, the superficial dorso-lateral and the median transperineal. Although none of the approaches allowed direct ultrasonographic identification of the pudendal nerve branches, the deep dorso-lateral was found to be the most advantageous in terms of practicability and ability to identify useful and reliable landmarks. Based on these findings, the deep dorso-lateral approach was selected as the technique of choice for tracer injections (0.1 ml 1% methylene blue injected bilaterally) in six cat cadavers distinct from those used for the ultrasonographic study. Anatomical dissection revealed a homogeneous spread of the tracer around the pudendal nerve sensory branches in all six cadavers. Finally, computed tomography was performed in two additional cadavers after injection of 0.3 ml/kg (0.15 ml/kg per injection site, left and right) contrast medium through the deep dorso-lateral approach in order to obtain a model of volume distribution applicable to local anaesthetics. Our findings in cat cadavers indicate that ultrasound-guided pudendal nerve block is feasible and could be proposed to provide peri-operative analgesia in clinical patients undergoing perineal urethrostomy.
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that makes model uncertainty a multiplicative factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
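To make the idea concrete, the toy sketch below samples a single variance factor c, shared by all data streams, jointly with a model parameter using a Metropolis random walk; the two-stream toy model, step sizes, and flat priors are illustrative assumptions rather than the authors' DALEC setup.

```python
# Toy sketch: one variance factor c inflates every stream's observation
# variance and is sampled jointly with the model parameter theta.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": y = theta * x, observed in two streams of very different size.
x1, x2 = np.linspace(0, 1, 200), np.linspace(0, 1, 5)
y1 = 2.0 * x1 + rng.normal(0, 0.1, x1.size)
y2 = 2.0 * x2 + rng.normal(0, 0.4, x2.size)
streams = [(x1, y1, 0.1), (x2, y2, 0.4)]  # (inputs, obs, obs. std)

def log_post(theta, log_c):
    c = np.exp(log_c)                 # variance factor shared by all streams
    lp = 0.0
    for x, y, sd in streams:
        var = c * sd ** 2
        resid = y - theta * x
        lp += -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))
    return lp

theta, log_c = 1.0, 0.0
lp = log_post(theta, log_c)
samples = []
for _ in range(5000):                 # Metropolis random walk
    prop = theta + rng.normal(0, 0.02), log_c + rng.normal(0, 0.1)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        (theta, log_c), lp = prop, lp_prop
    samples.append((theta, np.exp(log_c)))
print(np.mean(samples[1000:], axis=0))  # posterior means of (theta, c)
```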
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges, and highlighting the advantages which the CCA approach offers for collaborative development.
NASA Technical Reports Server (NTRS)
Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.
1973-01-01
The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.
Anders, Royce; Riès, Stéphanie; Van Maanen, Leendert; Alario, F-Xavier
Patients with lesions in the left prefrontal cortex (PFC) have been shown to be impaired in lexical selection, especially when interference between semantically related alternatives is increased. To investigate more deeply which computational mechanisms may be impaired following left PFC damage due to stroke, a psychometric modelling approach is employed in which we assess the patients' cognitive parameters through an evidence accumulation (sequential information sampling) model of their response data. We also compare the results to healthy speakers. Analysis of the cognitive parameters indicates an impairment of the PFC patients in appropriately adjusting their decision threshold to handle the increased item difficulty introduced by semantic interference. The modelling also contributes to other topics in psycholinguistic theory: specific effects on the cognitive parameters are observed with item familiarization, and the opposing effects of priming (lower threshold) and semantic interference (lower drift rate) are found to depend on repetition. These results are developed for the blocked-cyclic picture naming paradigm, in which pictures are presented within semantically homogeneous (HOM) or heterogeneous (HET) blocks and are repeated several times per block. Overall, the results are in agreement with a role of the left PFC in adjusting the decision threshold for lexical selection in language production.
Geometric Modelling of Tree Roots with Different Levels of Detail
NASA Astrophysics Data System (ADS)
Guerrero Iñiguez, J. I.
2017-09-01
This paper presents a geometric approach for modelling tree roots with different Levels of Detail, suitable for analysis of tree anchoring, potentially occupied underground space, interaction with urban elements, and damage produced and sustained in the built environment. Three types of tree roots are considered to cover several species: tap root, heart shaped root and lateral roots. Shrubs and smaller plants are not considered; however, a similar approach can be applied if the information is available for individual species. The geometrical approach considers the difficulties of modelling the actual roots, which are dynamic and almost opaque to direct observation, proposing generalized versions. For each type of root, different geometric models are considered to capture the overall shape of the root, a simplified block model, and a planar or surface projected version. Lower detail versions are considered as compatibility versions for 2D systems, while higher detail models are suitable for 3D analysis and visualization. The proposed levels of detail are matched with CityGML Levels of Detail, enabling both analysis and aesthetic views for urban modelling.
Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse- and fine-grain parallelism; the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single-system-image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.
NASA Astrophysics Data System (ADS)
Kasparek, Christian; Rörich, Irina; Blom, Paul W. M.; Wetzelaer, Gert-Jan A. H.
2018-01-01
By blending semiconducting polymers with the cross-linkable matrix ethoxylated-(4)-bisphenol-a-dimethacrylate (SR540), an insoluble layer is obtained after UV illumination. Following this approach, a trilayer polymer light-emitting diode (PLED) consisting of a blend of poly[N,N'-bis(4-butylphenyl)-N,N'-bis(phenyl)-benzidine] (poly-TPD) and SR540 as an electron-blocking layer, Super Yellow-Poly(p-phenylene vinylene) (SY-PPV) blended with SR540 as an emissive layer, and poly(9,9-di-n-octylfluorenyl-2,7-diyl) as a hole-blocking layer is fabricated from solution. The trilayer PLED shows a 23% increase in efficiency at low voltage compared to a single layer SY-PPV PLED. However, at higher voltage, the advantage in current efficiency gradually decreases. A combined experimental and modelling study shows that the increased efficiency is due not only to the elimination of exciton quenching at the electrodes but also to suppressed nonradiative trap-assisted recombination resulting from carrier confinement. At high voltages, holes can overcome the hole-blocking barrier, which explains the efficiency roll-off.
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
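The final step described above reduces to a quadratic form in the model covariance matrix. The sketch below shows that step on a toy dense matrix; the real SALSA3D covariance is far too large to hold in core, which is what motivates the blocked OOC machinery. Variable names and sizes are illustrative.

```python
# Sketch: travel-time covariance between two ray paths from the model
# covariance matrix. g1 and g2 hold each path's sensitivity (path length
# per model node); setting g2 = g1 and taking the square root gives the
# prediction uncertainty of a single path.
import numpy as np

def travel_time_cov(C, g1, g2):
    """Sum the model covariance along both ray paths: g1^T C g2."""
    return g1 @ C @ g2

n = 6                                         # toy model with 6 nodes
A = np.random.default_rng(1).normal(size=(n, n))
C = A @ A.T / n                               # stand-in model covariance (SPD)
g = np.array([0.0, 2.5, 1.0, 0.0, 3.0, 0.5])  # path lengths per node

sigma_tt = np.sqrt(travel_time_cov(C, g, g))
print(f"travel-time prediction uncertainty: {sigma_tt:.3f} s")
```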
Study of mathematical modeling of communication systems transponders and receivers
NASA Technical Reports Server (NTRS)
Walsh, J. R.
1972-01-01
The modeling of communication receivers is described at both the circuit detail level and at the block level. The largest effort was devoted to developing new models at the block modeling level. The available effort did not permit full development of all of the block modeling concepts envisioned, but idealized blocks were developed for signal sources, a variety of filters, limiters, amplifiers, mixers, and demodulators. These blocks were organized into an operational computer simulation of communications receiver circuits identified as the frequency and time circuit analysis technique (FATCAT). The simulation operates in both the time and frequency domains, and permits output plots or listings of either frequency spectra or time waveforms from any model block. Transfer between domains is handled with a fast Fourier transform algorithm.
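A block-level receiver model of this kind can be sketched as a chain of functions mapping a time waveform to a time waveform, with an FFT used where a block is naturally defined in the frequency domain. The blocks and parameters below are illustrative stand-ins, not FATCAT's actual model library.

```python
# Minimal sketch of the block-modeling idea: each block maps a time
# waveform to a time waveform; an FFT applies a filter block in the
# frequency domain, mirroring the time/frequency domain transfer.
import numpy as np

fs, n = 1.0e6, 4096
t = np.arange(n) / fs

def source(f0):                       # signal-source block
    return np.cos(2 * np.pi * f0 * t)

def lowpass(x, fc):                   # idealized filter block, via FFT
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    X[freqs > fc] = 0.0               # brick-wall transfer function
    return np.fft.irfft(X, n)

def limiter(x, level):                # memoryless limiter block
    return np.clip(x, -level, level)

y = limiter(lowpass(source(50e3) + 0.3 * source(400e3), fc=100e3), 0.8)
print(y[:4])  # time waveform at the chain output; spectrum via np.fft.rfft(y)
```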
In silico assessment of drug safety in human heart applied to late sodium current blockers
Trenor, Beatriz; Gomis-Tena, Julio; Cardona, Karen; Romero, Lucia; Rajamani, Sridharan; Belardinelli, Luiz; Giles, Wayne R; Saiz, Javier
2013-01-01
Drug-induced action potential (AP) prolongation leading to Torsade de Pointes is a major concern for the development of anti-arrhythmic drugs. Nevertheless the development of improved anti-arrhythmic agents, some of which may block different channels, remains an important opportunity. Partial block of the late sodium current (INaL) has emerged as a novel anti-arrhythmic mechanism. It can be effective in the settings of free radical challenge or hypoxia. In addition, this approach can attenuate pro-arrhythmic effects of blocking the rapid delayed rectifying K+ current (IKr). The main goal of our computational work was to develop an in-silico tool for preclinical anti-arrhythmic drug safety assessment, by illustrating the impact of IKr/INaL ratio of steady-state block of drug candidates on “torsadogenic” biomarkers. The O’Hara et al. AP model for human ventricular myocytes was used. Biomarkers for arrhythmic risk, i.e., AP duration, triangulation, reverse rate-dependence, transmural dispersion of repolarization and electrocardiogram QT intervals, were calculated using single myocyte and one-dimensional strand simulations. Predetermined amounts of block of INaL and IKr were evaluated. “Safety plots” were developed to illustrate the value of the specific biomarker for selected combinations of IC50s for IKr and INaL of potential drugs. The reference biomarkers at baseline changed depending on the “drug” specificity for these two ion channel targets. Ranolazine and GS967 (a novel potent inhibitor of INaL) yielded a biomarker data set that is considered safe by standard regulatory criteria. This novel in-silico approach is useful for evaluating pro-arrhythmic potential of drugs and drug candidates in the human ventricle. PMID:23696033
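The conductance-scaling step that underlies such safety plots follows from the steady-state block relation: at drug concentration D, the blocked fraction of a channel with half-maximal inhibitory concentration IC50 is b = 1/(1 + (IC50/D)^h). The sketch below evaluates that relation over a grid of IC50 pairs; the concentration, IC50 values, and Hill coefficient of 1 are assumptions, and the downstream AP-model biomarkers are not computed here.

```python
# Sketch of the conductance-scaling step for an IKr/INaL safety grid.
def fractional_block(conc, ic50, hill=1.0):
    """Steady-state fraction of channels blocked (Hill relation)."""
    return 1.0 / (1.0 + (ic50 / conc) ** hill)

conc = 1.0  # assumed therapeutic concentration, in IC50 units
for ic50_kr in (0.3, 1.0, 10.0):
    for ic50_nal in (0.3, 1.0, 10.0):
        b_kr = fractional_block(conc, ic50_kr)
        b_nal = fractional_block(conc, ic50_nal)
        # Conductances entering the AP model would be scaled by (1 - b);
        # each grid cell would then be scored by a "torsadogenic" biomarker.
        print(f"IC50(IKr)={ic50_kr:5.1f} IC50(INaL)={ic50_nal:5.1f} "
              f"-> block IKr={b_kr:.2f}, INaL={b_nal:.2f}")
```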
Deformation pattern during normal faulting: A sequential limit analysis
NASA Astrophysics Data System (ADS)
Yuan, X. P.; Maillot, B.; Leroy, Y. M.
2017-02-01
We model in 2-D the formation and development of half-graben faults above a low-angle normal detachment fault. The model, based on a "sequential limit analysis" accounting for mechanical equilibrium and energy dissipation, simulates the incremental deformation of a frictional, cohesive, and fluid-saturated rock wedge above the detachment. Two modes of deformation, gravitational collapse and tectonic collapse, are revealed which compare well with the results of the critical Coulomb wedge theory. We additionally show that the fault and the axial surface of the half-graben rotate as topographic subsidence increases. This progressive rotation shears some of the footwall material into the hanging wall, creating a specific region called the foot-to-hanging wall (FHW). The model allows introducing additional effects, such as weakening of the faults once they have slipped and sedimentation in their hanging wall. These processes are shown to control the size of the FHW region and the number of fault-bounded blocks it eventually contains. Fault weakening tends to make fault rotation more discontinuous, and this results in the FHW zone containing multiple blocks of intact material separated by faults. By compensating the topographic subsidence of the half-graben, sedimentation tends to slow the fault rotation, and this results in the reduction of the size of the FHW zone and of its number of fault-bounded blocks. We apply the new approach to reproduce the faults observed along a seismic line in the Southern Jeanne d'Arc Basin, Grand Banks, offshore Newfoundland. There, a single block exists in the hanging wall of the principal fault. The model explains this situation well, provided that a slow sedimentation rate is assumed in the Lower Jurassic, followed by an increasing rate over time as the main detachment fault grew.
Dynamic system simulation of small satellite projects
NASA Astrophysics Data System (ADS)
Raif, Matthias; Walter, Ulrich; Bouwmeester, Jasper
2010-11-01
A prerequisite to accomplish a system simulation is to have a system model holding all necessary project information in a centralized repository that can be accessed and edited by all parties involved. At the Institute of Astronautics of the Technische Universitaet Muenchen, a modular approach for modeling and dynamic simulation of satellite systems has been developed, called dynamic system simulation (DySyS). DySyS is based on the platform independent description language SysML to model a small satellite project with respect to system composition and dynamic behavior. A library of specific building blocks and possible relations between these blocks has been developed. From this library a system model of the satellite of interest can be created. A mapping into a C++ simulation allows the creation of an executable system model with which simulations are performed to observe the dynamic behavior of the satellite. In this paper DySyS is used to model and simulate the dynamic behavior of small satellites, because small satellite projects can act as a precursor to demonstrate the feasibility of a system model, since they are less complex than large-scale satellite projects.
Fabrication routes for one-dimensional nanostructures via block copolymers
NASA Astrophysics Data System (ADS)
Tharmavaram, Maithri; Rawtani, Deepak; Pandey, Gaurav
2017-05-01
Nanotechnology is the field which deals with the fabrication of materials with dimensions in the nanometer range by manipulating atoms and molecules. Various synthesis routes exist for one, two and three dimensional nanostructures. Recent advancements in nanotechnology have enabled the usage of block copolymers for the synthesis of such nanostructures. Block copolymers are versatile polymers with unique properties and come in many types and shapes. Their properties are highly dependent on the blocks of the copolymers, thus allowing easy tunability of their properties. This review briefly focuses on the use of block copolymers for synthesizing one-dimensional nanostructures, especially nanowires, nanorods, nanoribbons and nanofibers. Template-based, lithographic, and solution-based approaches are common in the synthesis of nanowires, nanorods, nanoribbons, and nanofibers. The synthesis of metal, metal oxide, metal oxalate, polymer, and graphene one-dimensional nanostructures using block copolymers is discussed as well.
Reversible conduction block in peripheral nerve using electrical waveforms.
Bhadra, Niloy; Vrabec, Tina L; Bhadra, Narendra; Kilgore, Kevin L
2018-01-01
Electrical nerve block uses electrical waveforms to block action potential propagation. Two key features distinguish electrical nerve block from other, nonelectrical means of nerve block: block occurs instantly, typically within 1 s; and block is fully and rapidly reversible (within seconds). Approaches for achieving electrical nerve block are reviewed, including kilohertz frequency alternating current and charge-balanced polarizing current. We conclude with a discussion of the future directions of electrical nerve block. Electrical nerve block is an emerging technique that has many significant advantages over other methods of nerve block. This field is still in its infancy, but a significant expansion in the clinical application of this technique is expected in the coming years.
Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes
NASA Astrophysics Data System (ADS)
Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico
2017-12-01
Chemical compositional simulation of enhanced oil recovery and surfactant enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of the newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows applying a sequential solution approach to solve the governing equations separately and implicitly. Through its application to a numerical experiment using a wettability alteration model, and comparisons with an existing chemical compositional model's numerical results, the new model has proven to be practical, reliable and stable.
Modeling of DNA-Mediated Self-Assembly from Anisotropic Nanoparticles: A Molecular Dynamics Study
NASA Astrophysics Data System (ADS)
Millan, Jaime; Girard, Martin; Brodin, Jeffrey; O'Brien, Matt; Mirkin, Chad; Olvera de La Cruz, Monica
The programmable selectivity of DNA recognition constitutes an elegant scheme to self-assemble a rich variety of superlattices from versatile nanoscale building blocks, where the natural interactions between building blocks are replaced by complementary DNA hybridization interactions. Recently, we introduced and validated a scale-accurate coarse-grained model for a molecular dynamics approach that captures the dynamic nature of DNA hybridization events and reproduces the experimentally observed crystallization behavior of various mixtures of spherical DNA-modified nanoparticles. Here, we have extended this model to robustly reproduce the assembly of nanoparticles with the anisotropic shapes observed experimentally. In particular, we are interested in two different particle types: (i) regular shapes, namely the cubic and octahedral polyhedra commonly observed in gold nanoparticles, and (ii) irregular shapes akin to those exhibited by enzymes. Anisotropy in shape can provide an analog to the atomic orbitals exhibited by conventional atomic crystals. We present results for the assembly of enzymes or anisotropic nanoparticles and the co-assembly of enzymes and nanoparticles.
Identifying Blocks Formed by Curved Fractures Using Exact Arithmetic
NASA Astrophysics Data System (ADS)
Zheng, Y.; Xia, L.; Yu, Q.; Zhang, X.
2015-12-01
Identifying blocks formed by fractures is important in rock engineering. Most studies assume the fractures to be perfectly planar, whereas curved fractures are rarely considered. However, large fractures observed in the field are often curved. This paper presents a new method for identifying rock blocks formed by both curved and planar fractures based on the element-block-assembling approach. The curved and planar fractures are represented as triangle meshes and planar discs, respectively. At the beginning of the identification method, the intersection segments between different triangle meshes are calculated and the intersected triangles are re-meshed to construct a piecewise linear complex (PLC). Then, the modeling domain is divided into tetrahedral subdomains under the constraint of the PLC and these subdomains are further decomposed into element blocks by extended planar fractures. Finally, the element blocks are combined and the subdomains are assembled to form complex blocks. The combination of two subdomains is skipped if and only if the common facet lies on a curved fracture. In this study, exact arithmetic is used to handle the computational errors, which may threaten the robustness of the block identification program when degenerate cases are encountered. Specifically, a real number is represented as the ratio between two integers, and basic arithmetic such as addition, subtraction, multiplication and division between real numbers can be performed exactly if an arbitrary precision integer package is used. In this way, the exact construction of blocks can be achieved without introducing computational errors. Several analytical examples are given in this paper and the results show the effectiveness of this method in handling arbitrarily shaped blocks. Moreover, there is no limitation on the number of blocks in a block system. The results also suggest that degenerate cases can be handled without affecting the robustness of the identification program.
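The exact-arithmetic idea is easy to demonstrate: once coordinates are ratios of integers, geometric predicates reduce to exact integer arithmetic and degenerate configurations are decided without rounding error. The sketch below uses Python's fractions.Fraction and a 2-D orientation predicate as a stand-in for the 3-D predicates an identification code would use.

```python
# Sketch of exact geometric predicates over rational coordinates.
from fractions import Fraction

def orient2d(a, b, c):
    """Sign of the area of triangle abc: +1 left turn, -1 right, 0 collinear."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (det > 0) - (det < 0)

# The point c lies exactly on the line through a and b; with exact
# rational coordinates the collinearity is decided without any tolerance.
a = (Fraction(0), Fraction(0))
b = (Fraction(1), Fraction(1))
c = (Fraction(1, 3), Fraction(1, 3))
print(orient2d(a, b, c))  # 0: degenerate case resolved exactly
```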
2010-03-01
This report shows that cracking in both 7050-series aluminium alloys and mill-annealed Ti-6Al-4V conforms to the Generalised Frost-Dugdale model. Surviving table-of-contents fragments indicate sections on applying the equivalent block variant of the model to represent crack growth in 7050-series aluminium alloys and on determining the constants in the generalised model.
NASA Astrophysics Data System (ADS)
Li, D.
2017-12-01
Fingerprints of anthropogenic climate change can be most readily detected in the high latitudes of Northern Hemisphere, where temperature has been rising faster than the rest of the globe and sea ice cover has shrunk dramatically over recent decades. Reducing the meridional temperature gradient, this amplified warming over the high latitudes influences weather in the middle latitudes by modulating the jet stream, storms, and atmospheric blocking activities. Whether observational records have revealed significant changes in mid-latitude storms and blocking activities, however, has remained a subject of much debate. Buried deep in strong year-to-year variations, the long-term dynamic responses of the atmosphere are more difficult to identify, compared with its thermodynamic responses. Variabilities of decadal and longer timescales further obscure any trends diagnosed from satellite observations, which are often shorter than 40 years. Here, new metrics reflecting storm and blocking activities are developed using surface air temperature and pressure records, and their variations and long-term trends are examined. This approach gives an inkling of the changes in storm and blocking activities since the Industrial Revolution in regions with abundant long-term observational records, e.g. Europe and North America. The relationship between Atlantic Multi-decadal Oscillation and variations in storm and blocking activities across the Atlantic is also scrutinized. The connection between observed centennial trends and anthropogenic forcings is investigated using a hierarchy of numerical tools, from highly idealized to fully coupled atmosphere-ocean models. Pre-industrial control simulations and a set of large ensemble simulations forced by increased CO2 are analyzed to evaluate the range of natural variabilities, which paves the way to singling out significant anthropogenic changes from observational records, as well as predicting future changes in mid-latitude storm and blocking activities in the case of continued anthropogenic CO2 forcing.
A system architecture for a planetary rover
NASA Technical Reports Server (NTRS)
Smith, D. B.; Matijevic, J. R.
1989-01-01
Each planetary mission requires a complex space vehicle which integrates several functions to accomplish the mission and science objectives. A Mars rover is one of these vehicles, and extends the normal spacecraft functionality with two additional functions: surface mobility and sample acquisition. All functions are assembled into a hierarchical, structured format to make the complexities of interactions between functions at different mission times understandable. The format can graphically show data flow between functions and, most importantly, the control flow necessary to avoid ambiguous results. Diagrams are presented organizing the functions into a structured block format where each block represents a major function at the system level. As such, there are six blocks representing telecomm, power, thermal, science, mobility and sampling under a supervisory block called Data Management/Executive. Each block is a simple collection of state machines arranged into a hierarchical order very close to the NASREM model for telerobotics. Each layer within a block represents a level of control for a set of state machines that perform the three primary interface functions: command, telemetry, and fault protection. The latter function is expanded to include automatic reactions to the environment as well as to internal faults. Lastly, diagrams are presented that trace the system operations involved in moving from site to site after site selection. The diagrams clearly illustrate both the data and control flows. They also illustrate inter-block data transfers and a hierarchical approach to fault protection. This system architecture can be used to determine functional requirements and interface specifications, and as a mechanism for grouping subsystems (i.e., collecting groups of machines, or blocks, consistent with good and testable implementations).
Calcott, Rebecca D.; Berkman, Elliot T.
2014-01-01
In the present studies, we aimed to understand how approach and avoidance states affect attentional flexibility by examining attentional shifts on a trial-by-trial basis. We also examined how a novel construct in this area, task context, might interact with motivation to influence attentional flexibility. Participants completed a modified composite letter task in which the ratio of global to local targets was varied by block, making different levels of attentional focus beneficial to performance on different blocks. Study 1 demonstrated that, in the absence of a motivation manipulation, switch costs were lowest on blocks with an even ratio of global and local trials and were higher on blocks with an uneven ratio. Other participants completed the task while viewing pictures (Studies 2 and 3) and assuming arm positions (Studies 2 and 4) to induce approach, avoidance, and neutral motivational states. Avoidance motivation reduced switch costs in evenly proportioned contexts, whereas approach motivation reduced switch costs in mostly global contexts. Additionally, approach motivation imparted a similar switch cost magnitude across different contexts, whereas avoidance and neutral states led to variable switch costs depending on the context. Subsequent analyses revealed that these effects were driven largely by faster switching to local targets on mostly global blocks in the approach condition. These findings suggest that avoidance facilitates attentional shifts when switches are frequent, whereas approach facilitates responding to rare or unexpected local stimuli. The main implication of these results is that motivation has different effects on attentional shifts depending on the context. PMID:24294866
Analgesic effect of bilateral subcostal TAP block after laparoscopic cholecystectomy.
Khan, Karima Karam; Khan, Robyna Irshad
2018-01-01
Pain after laparoscopic cholecystectomy is mild to moderate in intensity. Several modalities are employed for achieving safe and effective postoperative analgesia, the benefits of which add to the early recovery of patients. As part of multimodal analgesia, various approaches to the transversus abdominis plane (TAP) block have been used for management of the parietal and incisional components of pain after laparoscopic cholecystectomy. This study was designed to compare the analgesic efficacy of two different approaches of ultrasound-guided TAP block, i.e., the subcostal TAP block technique versus the posterior TAP block, for postoperative pain management in patients undergoing laparoscopic cholecystectomy under general anaesthesia. In this double-blinded randomized controlled study, consecutive non-probability sampling was done and a total of 126 patients admitted for elective laparoscopic cholecystectomy fulfilling the inclusion criteria were selected. After induction of general anaesthesia, patients were randomized through the draw method and received either ultrasound-guided posterior TAP block with 0.375% bupivacaine (20 ml volume) on each side of the abdomen or bilateral subcostal TAP block with the same. Up to 24 hours postoperatively, static and dynamic numeric rating pain scores were assessed. We found a statistically significant difference in mean static pain scores over 24 hours postoperatively in the subcostal TAP group, suggesting improved analgesia. However, mean dynamic postoperative pain scores were comparable between the two groups, and patients in both groups were satisfied with pain management. Ultrasound-guided subcostal TAP block provides better postoperative analgesia compared with the posterior TAP block in laparoscopic cholecystectomy. Both approaches improve patient outcomes toward early recovery and discharge from hospital.
Novel mechanism of antibodies to hepatitis B virus in blocking viral particle release from cells.
Neumann, Avidan U; Phillips, Sandra; Levine, Idit; Ijaz, Samreen; Dahari, Harel; Eren, Rachel; Dagan, Shlomo; Naoumov, Nikolai V
2010-09-01
Antibodies are thought to exert antiviral activities by blocking viral entry into cells and/or accelerating viral clearance from circulation. In particular, antibodies to hepatitis B virus (HBV) surface antigen (HBsAg) confer protection, by binding circulating virus. Here, we used mathematical modeling to gain information about viral dynamics during and after single or multiple infusions of a combination of two human monoclonal anti-HBs (HepeX-B) antibodies in patients with chronic hepatitis B. The antibody HBV-17 recognizes a conformational epitope, whereas antibody HBV-19 recognizes a linear epitope on the HBsAg. The kinetic profiles of the decline of serum HBV DNA and HBsAg revealed partial blocking of virion release from infected cells as a new antiviral mechanism, in addition to acceleration of HBV clearance from the circulation. We then replicated this approach in vitro, using cells secreting HBsAg, and compared the prediction of the mathematical modeling obtained from the in vivo kinetics. In vitro, HepeX-B treatment of HBsAg-producing cells showed cellular uptake of antibodies, resulting in intracellular accumulation of viral particles. Blocking of HBsAg secretion also continued after HepeX-B was removed from the cell culture supernatants. These results identify a novel antiviral mechanism of antibodies to HBsAg (anti-HBs) involving prolonged blocking of the HBV and HBsAg subviral particles release from infected cells. This may have implications in designing new therapies for patients with chronic HBV infection and may also be relevant in other viral infections.
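The two inferred mechanisms map naturally onto a standard viral-dynamics equation: blocking of virion release scales the production term, while accelerated clearance adds to the loss term. The sketch below integrates that single ODE over a short infusion window with infected cells held constant; all parameter values are illustrative, not the fitted ones from the study.

```python
# Minimal sketch of the two antiviral mechanisms: partial blocking of
# virion release (efficacy eps reduces production) and accelerated
# clearance of circulating virus (extra rate k_ab).
import numpy as np
from scipy.integrate import solve_ivp

p_I = 1.0e9     # virion production by infected cells (virions/day), assumed
c = 0.7         # baseline clearance rate (1/day), assumed
eps = 0.6       # fraction of virion release blocked by antibody, assumed
k_ab = 2.0      # antibody-mediated extra clearance (1/day), assumed

def dV(t, V):
    # Infected-cell level held constant over the short infusion window.
    return (1.0 - eps) * p_I - (c + k_ab) * V

V0 = p_I / c    # pretreatment steady state
sol = solve_ivp(dV, (0.0, 7.0), [V0], dense_output=True)
t = np.linspace(0, 7, 8)
print(np.log10(sol.sol(t)[0]))  # serum virus decline (log10 virions)
```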
Portfolio Acquisition - How the DoD Can Leverage the Commercial Product Line Model
2015-04-30
canceled (Harrison, 2011). A major contributing factor common to these failures is that the programs tried to do too much at once: they used a big-bang ...requirements in a single, big-bang approach. MDAPs take 10 to 15 years from Milestone A to initial operational capability, with many of the largest...2013). The block upgrade model for the B-52, F-15, and F-16 proved successful over decades, yet with its big-bang structure the F-35 program is
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculation are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements as well as the stamp management of the sparse matrix entries are handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking the bordered block diagonal (BBD) matrix structure as a departure point. This block-parallel approach may give a considerable profit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
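The block-parallel step can be illustrated on a small bordered block diagonal (BBD) system: each diagonal block is solved independently (the parallelizable part), and a small Schur-complement system then couples the blocks through the border variables. The dense toy blocks below stand in for the sparse circuit matrices, and the serial loops mark where concurrent work would go.

```python
# Sketch of a Schur-complement solve on a bordered block diagonal system.
import numpy as np

rng = np.random.default_rng(2)
k, m = 4, 2                       # block size, border size
A = [rng.normal(size=(k, k)) + 5 * np.eye(k) for _ in range(3)]  # diag blocks
B = [rng.normal(size=(k, m)) for _ in range(3)]                  # right border
C = [rng.normal(size=(m, k)) for _ in range(3)]                  # lower border
D = rng.normal(size=(m, m)) + 5 * np.eye(m)                      # corner
b = [rng.normal(size=k) for _ in range(3)]
d = rng.normal(size=m)

# Independent per-block solves (these would run concurrently).
Ainv_B = [np.linalg.solve(Ai, Bi) for Ai, Bi in zip(A, B)]
Ainv_b = [np.linalg.solve(Ai, bi) for Ai, bi in zip(A, b)]

# The Schur complement couples the blocks through the border variables.
S = D - sum(Ci @ AiB for Ci, AiB in zip(C, Ainv_B))
rhs = d - sum(Ci @ Aib for Ci, Aib in zip(C, Ainv_b))
x_border = np.linalg.solve(S, rhs)
x_blocks = [Aib - AiB @ x_border for Aib, AiB in zip(Ainv_b, Ainv_B)]
print(x_border, x_blocks[0])
```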
Artificial Immune System Approaches for Aerospace Applications
NASA Technical Reports Server (NTRS)
KrishnaKumar, Kalmanje; Koga, Dennis (Technical Monitor)
2002-01-01
Artificial Immune Systems (AIS) combine a priori knowledge with the adapting capabilities of the biological immune system to provide a powerful alternative to currently available techniques for pattern recognition, modeling, design, and control. Immunology is the science of built-in defense mechanisms that are present in all living beings to protect against external attacks. A biological immune system can be thought of as a robust, adaptive system that is capable of dealing with an enormous variety of disturbances and uncertainties. Biological immune systems use a finite number of discrete "building blocks" to achieve this adaptiveness. These building blocks can be thought of as pieces of a puzzle which must be put together in a specific way to neutralize, remove, or destroy each unique disturbance the system encounters. In this paper, we outline AIS models that are immediately applicable to aerospace problems and identify application areas that need further investigation.
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, for which the following approaches are adopted: a lookup table is established for the Gaussian probability matrix to avoid repetitive probability calculations on all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3D to 1D with a simpler data type to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method to decide the threshold value automatically. Our algorithm has been tested on image segmentation of flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
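The lookup-table acceleration is easy to sketch: for 8-bit images, each Gaussian component's probability only needs to be evaluated at 256 intensity levels, so a small table replaces per-pixel exponentials and segmentation becomes pure array indexing. The component parameters below are illustrative, and the blocking-detection step is omitted.

```python
# Sketch of lookup-table GMM segmentation for 8-bit grayscale images.
import numpy as np

means = np.array([60.0, 170.0])      # two components (e.g. background/flame)
stds = np.array([15.0, 25.0])
weights = np.array([0.7, 0.3])

levels = np.arange(256, dtype=np.float64)
# table[k, v] = weighted Gaussian pdf of component k at intensity v
table = (weights[:, None] / (stds[:, None] * np.sqrt(2 * np.pi))
         * np.exp(-0.5 * ((levels - means[:, None]) / stds[:, None]) ** 2))
labels_lut = table.argmax(axis=0)    # 1D table: intensity -> component label

img = np.random.default_rng(3).integers(0, 256, (480, 640), dtype=np.uint8)
segmented = labels_lut[img]          # whole-image segmentation by lookup
print(segmented.mean())
```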
Baechler, Simon; Morelato, Marie; Ribaux, Olivier; Beavis, Alison; Tahtouh, Mark; Kirkbride, K Paul; Esseiva, Pierre; Margot, Pierre; Roux, Claude
2015-05-01
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data in an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and clearly lie within the expertise of forensic scientists. However, this is only part of the problem. Forensic intelligence includes other blocks with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea
2015-01-01
This paper presents the modeling and control of an aerostructure developed with lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes a discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.
Integrated Resilient Aircraft Control Project Full Scale Flight Validation
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2009-01-01
Objective: Provide validation of adaptive control law concepts through full-scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate the adaptive algorithm on: a) Stability metrics. b) Model-following metrics. Full-scale flight testing provides an ability to validate different adaptive flight control approaches, and adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and establish adaptive control as a viable design solution for increased aircraft resilience.
Interventional Management for Pelvic Pain.
Nagpal, Ameet S; Moody, Erika L
2017-08-01
Interventional procedures can be applied for diagnostic evaluation and treatment of the patient with pelvic pain, often once more conservative measures have failed to provide relief. This article reviews interventional management strategies for pelvic pain. We review superior and inferior hypogastric plexus blocks, ganglion impar blocks, transversus abdominis plane blocks, ilioinguinal, iliohypogastric and genitofemoral blocks, pudendal nerve blocks, and selective nerve root blocks. Additionally, we discuss trigger point injections, sacroiliac joint injections, and neuromodulation approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
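A greedy, non-convex caricature of the decomposition helps fix the structure: at each scale, the current residual is partitioned into blocks and each block receives a rank-1 approximation from its leading singular pair. The paper instead solves a convex program with per-scale penalties; the sketch below only illustrates the sum-of-block-wise-low-rank form.

```python
# Illustrative greedy version of a multi-scale low rank decomposition:
# X is approximated by a sum of block-wise low rank components with
# increasing scales of block sizes.
import numpy as np

def blockwise_rank1(M, bs):
    """Rank-1 approximation of each bs-by-bs block of M."""
    out = np.zeros_like(M)
    for i in range(0, M.shape[0], bs):
        for j in range(0, M.shape[1], bs):
            blk = M[i:i + bs, j:j + bs]
            U, s, Vt = np.linalg.svd(blk, full_matrices=False)
            out[i:i + bs, j:j + bs] = s[0] * np.outer(U[:, 0], Vt[0])
    return out

X = np.random.default_rng(4).normal(size=(64, 64))
residual, components = X.copy(), []
for bs in (4, 16, 64):                # increasing scales of block sizes
    comp = blockwise_rank1(residual, bs)
    components.append(comp)
    residual -= comp
print([float(np.linalg.norm(c)) for c in components], np.linalg.norm(residual))
```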
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated using arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving-average noises as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Automate Your Physical Plant Using the Building Block Approach.
ERIC Educational Resources Information Center
Michaelson, Matt
1998-01-01
Illustrates how Mount Saint Vincent University (Halifax), by upgrading the control and monitoring of one building or section of the school at a time, could produce savings in energy and operating costs and improve the environment. Explains a gradual, "building block" approach to facility automation that provides flexibility without a…
DOT National Transportation Integrated Search
2015-04-01
This research examined the safety and operational effects of roadway lane width on mid-block segments between : signalized intersections as well as on signalized intersection approaches in the urban environments of Lincoln and Omaha, : Nebraska. In t...
Self-aligned block technology: a step toward further scaling
NASA Astrophysics Data System (ADS)
Lazzarino, Frédéric; Mohanty, Nihar; Feurprier, Yannick; Huli, Lior; Luong, Vinh; Demand, Marc; Decoster, Stefan; Vega Gonzalez, Victor; Ryckaert, Julien; Kim, Ryan Ryoung Han; Mallik, Arindam; Leray, Philippe; Wilson, Chris; Boemmels, Jürgen; Kumar, Kaushik; Nafus, Kathleen; deVilliers, Anton; Smith, Jeffrey; Fonseca, Carlos; Bannister, Julie; Scheer, Steven; Tokei, Zsolt; Piumi, Daniele; Barla, Kathy
2017-04-01
In this work, we present and compare two integration approaches to enable self-alignment of the block suitable for the 5- nm technology node. The first approach is exploring the insertion of a spin-on metal-based material to memorize the first block and act as an etch stop layer in the overall integration. The second approach is evaluating the self-aligned block technology employing widely used organic materials and well-known processes. The concept and the motivation are discussed considering the effects on design and mask count as well as the impact on process complexity and EPE budget. We show the integration schemes and discuss the requirements to enable self-alignment. We present the details of materials and processes selection to allow optimal selective etches and we demonstrate the proof of concept using a 16- nm half-pitch BEOL vehicle. Finally, a study on technology insertion and cost estimation is presented.
Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation
NASA Astrophysics Data System (ADS)
Yi, Ce
With the ability of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted some attention for computational applications in both the nuclear engineering and medical physics fields. Among various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used methods. The SN method is the traditional approach to solve the LBE for its stability and efficiency. While the MOC has some advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods could suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combing the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low scattering media, the hybrid approach uses a block-oriented characteristics solver in low scattering regions, and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with the ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the S N solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared in both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested in a number of benchmark problems in both nuclear engineering and medical physics fields. Among them are the Kobayashi benchmark problems and a computational tomography (CT) device model. We also developed an extra sweep procedure with the fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied in a single photon emission computed tomography (SPECT) phantom model to simulate the SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. 
The limitations on the hybrid algorithm are also studied.
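A minimal Python sketch of the block-oriented solver dispatch described above. The two-block setup, the toy per-block balance update, and all names are illustrative assumptions, not the TITAN implementation; the point is only how a per-block solver choice can live inside a shared source-iteration loop.

import numpy as np

# One coarse block per entry: its assigned solver and cross sections.
# sigma_t: total, sigma_s: scattering, q: fixed external source.
blocks = [
    {"solver": "MOC", "sigma_t": 1.0, "sigma_s": 0.05, "q": 1.0},  # low-scattering region
    {"solver": "SN",  "sigma_t": 1.0, "sigma_s": 0.80, "q": 1.0},  # scattering region
]

def moc_block_solve(phi, b):
    # stand-in for a characteristics sweep: one transport-balance update
    return (b["q"] + b["sigma_s"] * phi) / b["sigma_t"]

def sn_block_solve(phi, b):
    # stand-in for a discrete ordinates sweep: same toy balance here
    return (b["q"] + b["sigma_s"] * phi) / b["sigma_t"]

dispatch = {"MOC": moc_block_solve, "SN": sn_block_solve}

phi = np.zeros(len(blocks))              # block-averaged scalar flux
for it in range(200):                    # source (scattering) iteration
    new = np.array([dispatch[b["solver"]](phi[i], b) for i, b in enumerate(blocks)])
    if np.max(np.abs(new - phi)) < 1e-10:
        break
    phi = new

print(phi)  # converges to q / (sigma_t - sigma_s) in each block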
Modeling of wave processes in blocky media with porous and fluid-saturated interlayers
NASA Astrophysics Data System (ADS)
Sadovskii, Vladimir M.; Sadovskaya, Oxana V.; Lukyanov, Alexander A.
2017-09-01
The wave processes in blocky media are analyzed by applying different mathematical models, wherein elastic blocks interact with each other via pliant interlayers with complex mechanical properties. Four versions of the constitutive equations are considered. In the first version, the elastic interaction between blocks is simulated within the framework of linear elasticity theory, and a model of elastic-plastic interlayers is constructed to take into account the appearance of irreversible deformation of the interlayers at short time intervals. In the second version, the effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. In the third version, a model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. In the fourth version, a model of a fluid-saturated material with open pores is examined based on Biot's equations. The collapse of pores is modeled by a generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth is the so-called rigid contact, which is used to describe the behavior of materials with different resistance to tension and compression. The model is shown to be thermodynamically consistent, meaning that the energy balance equation is fulfilled for the entire blocky structure, where the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. For the numerical implementation of the interlayer models, Ivanov's dissipationless finite difference method was used. The method of splitting with respect to spatial variables, in combination with Godunov's discontinuity-decay scheme, was applied in the blocks. As a result, robust and stable computational algorithms were built and tested. Using MPI technology, parallel software was designed for modeling wave processes in a 2D setting. Numerical results are presented and discussed, and future studies are outlined.
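As a concrete illustration of the second constitutive version, the sketch below integrates a Poynting-Thomson (standard linear solid) element, a spring in parallel with a Maxwell branch, under a step shear strain. All parameter values are arbitrary assumptions for illustration, not calibrated interlayer properties.

import numpy as np

E1, E2, eta = 1.0, 2.0, 5.0      # parallel spring, Maxwell spring, viscosity (arbitrary)
dt, T = 1e-3, 20.0
eps = 1.0                        # step strain applied at t = 0 and held

t = np.arange(0.0, T, dt)
sigma_m = E2 * eps               # Maxwell branch loads instantly, then relaxes
stress = []
for _ in t:
    stress.append(E1 * eps + sigma_m)
    # d(sigma_m)/dt = E2 * d(eps)/dt - (E2/eta) * sigma_m ; eps is constant here
    sigma_m += dt * (-(E2 / eta) * sigma_m)

# stress relaxes from (E1 + E2)*eps toward E1*eps with time constant eta/E2
print(stress[0], stress[-1])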
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
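A minimal numpy sketch of the sum-of-outer-products block coordinate descent idea: each pass updates one sparse coefficient vector by hard thresholding and then the corresponding unit-norm atom in closed form. Dimensions, the threshold level, and the initialization are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, N, J = 16, 200, 8                  # signal size, training signals, atoms (arbitrary)
Y = rng.standard_normal((n, N))       # training data
D = rng.standard_normal((n, J)); D /= np.linalg.norm(D, axis=0)
C = np.zeros((N, J))                  # sparse coefficients
lam = 0.5                             # hard-threshold (sparsity) level

for sweep in range(10):
    for j in range(J):
        # residual excluding atom j's rank-one contribution
        R = Y - D @ C.T + np.outer(D[:, j], C[:, j])
        # coefficient update: hard thresholding of correlations
        c = R.T @ D[:, j]
        c[np.abs(c) < lam] = 0.0
        C[:, j] = c
        # atom update: closed form, kept at unit norm
        d = R @ C[:, j]
        if np.linalg.norm(d) > 0:
            D[:, j] = d / np.linalg.norm(d)

print(np.linalg.norm(Y - D @ C.T))    # approximation error decreases over sweeps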
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, to negate any possible profit. Although optimization techniques have produced many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline was used to develop a model that predicts airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, together with historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average actual block times while minimizing the variation. Predictions of block times for the third-quarter months of July and August of 2011 were calculated using this new model. The corresponding actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major U.S. domestic airline.
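A toy illustration of combining a regression base with a historic block-time distribution: fit a least-squares mean, then pad it with a residual quantile to set a service level. The features, the linear form, and the 0.65 quantile are hypothetical, since the paper does not publish its exact specification.

import numpy as np

rng = np.random.default_rng(1)
# hypothetical history for one city pair: hour of day, day of week -> block time (min)
hours = rng.integers(6, 22, 500)
dows = rng.integers(0, 7, 500)
actual = 120 + 1.5 * (hours > 16) * (hours - 16) + 3 * (dows >= 5) + rng.gamma(2.0, 4.0, 500)

# deterministic component: least-squares regression on simple features
X = np.column_stack([np.ones(500), hours, dows])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)

# probabilistic component: pad the regression mean using residual quantiles
resid = actual - X @ beta
pad = np.quantile(resid, 0.65)        # service level; 0.65 is an arbitrary choice

def predict_block_time(hour, dow):
    return beta @ np.array([1.0, hour, dow]) + pad

print(predict_block_time(17, 4))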
Combined KHFAC + DC nerve block without onset or reduced nerve conductivity after block
NASA Astrophysics Data System (ADS)
Franke, Manfred; Vrabec, Tina; Wainright, Jesse; Bhadra, Niloy; Bhadra, Narendra; Kilgore, Kevin
2014-10-01
Objective. Kilohertz frequency alternating current (KHFAC) waveforms have been shown to provide peripheral nerve conductivity block in many acute and chronic animal models. KHFAC nerve block could be used to address multiple disorders caused by neural over-activity, including blocking pain and spasticity. However, one drawback of KHFAC block is a transient activation of nerve fibers during the initiation of the nerve block, called the onset response. The objective of this study is to evaluate the feasibility of using charge-balanced direct current (CBDC) waveforms to temporarily block motor nerve conductivity distal to the KHFAC electrodes to mitigate the block onset response. Approach. A total of eight animals were used in this study. A first set of four animals was used to assess the feasibility and reproducibility of a combined KHFAC + CBDC block. A subsequent randomized study, conducted on a second set of four animals, compared the onset response resulting from KHFAC alone and combined KHFAC + CBDC waveforms. To quantify the onset, peak forces and the force-time integral were measured during KHFAC block initiation. Nerve conductivity was monitored throughout the study by comparing muscle twitch forces evoked by supra-maximal stimulation proximal and distal to the block electrodes. Each animal of the randomized study received at least 300 s (range: 318-1563 s) of cumulative DC to investigate the impact of combined KHFAC + CBDC on nerve viability. Main results. The peak onset force was reduced significantly from 20.73 N (range: 18.6-26.5 N) with KHFAC alone to 0.45 N (range: 0.2-0.7 N) with the combined CBDC and KHFAC block waveform (p < 0.001). The area under the force curve was reduced from 6.8 Ns (range: 3.5-21.9 Ns) to 0.54 Ns (range: 0.18-0.86 Ns) (p < 0.01). No change in nerve conductivity was observed after application of the combined KHFAC + CBDC block relative to KHFAC waveforms alone. Significance. The distal application of CBDC can significantly reduce or even completely prevent the KHFAC onset response without a change in nerve conductivity.
Nagata, Jun; Watanabe, Jun; Sawatsubashi, Yusuke; Akiyama, Masaki; Arase, Koichi; Minagawa, Noritaka; Torigoe, Takayuki; Hamada, Kotaro; Nakayama, Yoshifumi; Hirata, Keiji
2017-08-27
A 62-year-old man who had acute rectal obstruction due to a large rectal cancer is presented. He underwent emergency laparoscopic colostomy. We used a laparoscopic puncture needle to inject analgesic via a novel transperitoneal approach. In this procedure, both ultrasound and laparoscopic images assisted with the accurate injection of analgesic into the correct layer. The combination of laparoscopic visualization and ultrasound imaging ensured infiltration of analgesic into the correct layer without causing damage to the bowel. Twenty-four hours postoperatively, the patient's pain intensity as assessed by the numeric rating scale was 0-1 during coughing, and a continuous intravenous analgesic was not needed. Colostomy is often necessary in colon obstruction. Epidural anesthesia for postoperative pain cannot be used in patients with a coagulation disorder. We report the use of a novel laparoscopic rectus sheath block for colostomy. No previous literature has described a nerve block performed via a transperitoneal approach. The laparoscopic rectus sheath block was performed safely and provided sufficient analgesia for postoperative pain. This technique could be considered as an optional anesthetic regimen in acute situations.
NASA Astrophysics Data System (ADS)
Athanasiadis, Panos; Gualdi, Silvio; Scaife, Adam A.; Bellucci, Alessio; Hermanson, Leon; MacLachlan, Craig; Arribas, Alberto; Materia, Stefano; Borelli, Andrea
2014-05-01
Low-frequency variability is a fundamental component of the atmospheric circulation. Extratropical teleconnections, the occurrence of blocking and the slow modulation of the jet streams and storm tracks are all different aspects of low-frequency variability. Part of the latter is attributed to the chaotic nature of the atmosphere and is inherently unpredictable. On the other hand, primarily as a response to boundary forcings, tropospheric low-frequency variability includes components that are potentially predictable. Seasonal forecasting faces the difficult task of predicting these components. Particularly in the extratropics, the current generation of seasonal forecasting systems seems to be approaching this target by realistically initializing most components of the climate system, using higher resolution and utilizing large ensemble sizes. Two seasonal prediction systems (Met-Office GloSea and CMCC-SPS-v1.5) are analyzed in terms of their representation of different aspects of extratropical low-frequency variability. The current operational Met-Office system achieves unprecedented high scores in predicting the winter-mean phase of the North Atlantic Oscillation (NAO, corr. 0.74 at 500 hPa) and the Pacific-N. American pattern (PNA, corr. 0.82). The CMCC system, considering its small ensemble size and coarse resolution, also achieves good scores (0.42 for NAO, 0.51 for PNA). Despite these positive features, both models suffer from biases in low-frequency variance, particularly in the N. Atlantic. Consequently, it is found that their intrinsic variability patterns (sectoral EOFs) differ significantly from the observed ones, and the known teleconnections are underrepresented. Regarding the representation of N. hemisphere blocking, after bias correction both systems exhibit a realistic climatology of blocking frequency. In this assessment, instantaneous blocking and large-scale persistent blocking events are identified using daily geopotential height fields at 500 hPa. Given the documented strong relationship between high-latitude N. Atlantic blocking and the NAO, one would expect a predictive skill for the seasonal frequency of blocking comparable to that of the NAO. However, this remains elusive. Future efforts should be directed at reducing model biases not only in the mean but also in variability (band-passed variances).
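For readers unfamiliar with the detection step, the sketch below computes a Tibaldi-Molteni-style instantaneous blocking test from a daily 500 hPa geopotential height field. The reference latitudes and thresholds are the classic ones and the input array is synthetic; this is a generic index, not necessarily the exact criteria used in the study.

import numpy as np

def instantaneous_blocking(z500, lats, lons, dphi=0.0):
    # Flag longitudes where the meridional Z500 gradient reverses.
    # z500: 2-D array (lat, lon) of daily-mean geopotential height [m].
    def at(lat):  # nearest-latitude row
        return z500[np.abs(lats - lat).argmin(), :]
    z_n, z_0, z_s = at(80 + dphi), at(60 + dphi), at(40 + dphi)
    ghgs = (z_0 - z_s) / 20.0          # southern gradient [m/deg]
    ghgn = (z_n - z_0) / 20.0          # northern gradient [m/deg]
    return (ghgs > 0) & (ghgn < -10)   # classic reversal criteria

lats = np.arange(20, 90, 2.5)
lons = np.arange(0, 360, 2.5)
z500 = 5500 + 50 * np.random.default_rng(2).standard_normal((lats.size, lons.size))
print(instantaneous_blocking(z500, lats, lons).sum(), "blocked longitudes")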
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
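A small numpy sketch of the block-modeling idea: given a role assignment, aggregate the adjacency matrix by role pairs and score how well block densities match a candidate image graph. The plain match-count scoring here is a simplification of the paper's first-principles fit measure.

import numpy as np

def image_fit(A, roles, image):
    # A: n x n adjacency (0/1); roles: length-n labels 0..q-1;
    # image: q x q 0/1 matrix of allowed inter-role connections.
    q = image.shape[0]
    score = 0.0
    for r in range(q):
        for s in range(q):
            links = A[np.ix_(roles == r, roles == s)]
            if links.size == 0:
                continue
            density = links.mean()
            # reward dense blocks where the image graph has an edge,
            # and sparse blocks where it has none
            score += density if image[r, s] else (1.0 - density)
    return score / q**2

rng = np.random.default_rng(3)
roles = np.repeat([0, 1], 10)
image = np.array([[0, 1], [1, 0]])          # bipartite-like role structure
A = (rng.random((20, 20)) < np.where(image[roles][:, roles] == 1, 0.6, 0.05)).astype(int)
print(image_fit(A, roles, image))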
[Anesthetic infiltration of the spermatic cord in surgery for voluminous hydrocele].
Reale, C; Corinti, R; Galullo, B; Borgonuovo, P
1998-06-01
The use of a new technique of spermatic cord block in the surgical treatment of large hydroceles is reported. Identification of the cord in these cases is often difficult due to the presence of the hydrocele. The reported technique consists of percutaneous drainage of the hydrocele prior to the block, in order to allow easier identification of the cord. The block is then performed by the usual method. 108 patients with large hydroceles (above 250 ml) underwent surgical repair employing this approach. In only one case was the cord not identified even after drainage, due to the effects of a previous hernioplasty. In the remaining 107 patients the cord was easily identified and blocked. The excellent results obtained with this approach show that cord block is possible in almost all patients, even when a large hydrocele is present.
Ando, David; Gopinathan, Ajay
2017-01-01
Nucleocytoplasmic transport is highly selective, efficient, and is regulated by a poorly understood mechanism involving hundreds of disordered FG nucleoporin proteins (FG nups) lining the inside wall of the nuclear pore complex (NPC). Previous research has concluded that FG nups in Baker’s yeast (S. cerevisiae) are present in a bimodal distribution, with the “Forest Model” classifying FG nups as either di-block polymer like “trees” or single-block polymer like “shrubs”. Using a combination of coarse-grained modeling and polymer brush modeling, the function of the di-block FG nups has previously been hypothesized in the Di-block Copolymer Brush Gate (DCBG) model to form a higher-order polymer brush architecture which can open and close to regulate transport across the NPC. In this manuscript we work to extend the original DCBG model by first performing coarse grained simulations of the single-block FG nups which confirm that they have a single block polymer structure rather than the di-block structure of tree nups. Our molecular simulations also demonstrate that these single-block FG nups are likely cohesive, compact, collapsed coil polymers, implying that these FG nups are generally localized to their grafting location within the NPC. We find that adding a layer of single-block FG nups to the DCBG model increases the range of cargo sizes which are able to translocate the pore through a cooperative effect involving single-block and di-block FG nups. This effect can explain the puzzling connection between single-block FG nup deletion mutants in S. cerevisiae and the resulting failure of certain large cargo transport through the NPC. Facilitation of large cargo transport via single-block and di-block FG nup cooperativity in the nuclear pore could provide a model mechanism for designing future biomimetic pores of greater applicability. PMID:28068389
A classification tree based modeling approach for segment related crashes on multilane highways.
Pande, Anurag; Abdel-Aty, Mohamed; Das, Abhishek
2010-10-01
This study presents a classification tree based alternative to crash frequency analysis for analyzing crashes on mid-block segments of multilane arterials. The traditional approach of modeling counts of crashes that occur over a period of time works well for intersection crashes, where each intersection itself provides a well-defined unit over which to aggregate the crash data. In the case of mid-block segments, however, the crash frequency based approach requires segmentation of the arterial corridor into segments of arbitrary lengths. In this study we used random samples of time, day of week, and location (i.e., milepost) combinations and compared them with the sample of crashes from the same arterial corridor. For crash and non-crash cases, geometric design/roadside and traffic characteristics were derived based on their milepost locations. The variables used in the analysis are non-event specific and therefore more relevant for roadway safety feature improvement programs. The first classification tree model compares all crashes with the non-crash data; then four groups of crashes (rear-end, lane-change related, pedestrian, and single-vehicle/off-road crashes) are separately compared to the non-crash cases. The classification tree models provide a list of significant variables as well as a measure to distinguish crash from non-crash cases. ADT along with time of day/day of week is significantly related to all crash types, with different groups of crashes being more likely to occur at different times. From the classification performance of the different models it was apparent that using non-event-specific information may not be suitable for single-vehicle/off-road crashes. The study provides the safety analysis community an additional tool to assess safety without having to aggregate the corridor crash data over arbitrary segment lengths. Copyright © 2010. Published by Elsevier Ltd.
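A brief scikit-learn sketch of the matched case-control setup described above: synthetic crash and non-crash cases with non-event-specific features (ADT, hour, day of week) fed to a classification tree. The feature effects and tree settings are invented for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
adt = rng.uniform(5000, 60000, n)          # average daily traffic
hour = rng.integers(0, 24, n)
dow = rng.integers(0, 7, n)
# invented relationship: crashes more likely at high ADT and peak hours
p = 1 / (1 + np.exp(-(adt / 20000 - 2 + 0.5 * np.isin(hour, [7, 8, 17, 18]))))
y = rng.random(n) < p

X = np.column_stack([adt, hour, dow])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X_tr, y_tr)
print("holdout accuracy:", tree.score(X_te, y_te))
print("feature importances (ADT, hour, dow):", tree.feature_importances_)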
NASA Astrophysics Data System (ADS)
Kudryavtsev, Andrey V.; Laurent, Guillaume J.; Clévy, Cédric; Tamadazte, Brahim; Lutz, Philippe
2015-10-01
Microassembly is an innovative alternative to the microfabrication process of MOEMS, which is quite complex. It usually implies the use of microrobots controlled by an operator. The reliability of this approach has already been confirmed for micro-optical technologies. However, the characterization of assemblies has shown that the operator is the main source of inaccuracies in teleoperated microassembly. Therefore, there is great interest in automating the microassembly process. One of the constraints of automation at the microscale is the lack of high-precision sensors capable of providing full information about the object position. Thus, the use of visual feedback represents a very promising approach allowing the microassembly process to be automated. The purpose of this article is to characterize techniques of object position estimation based on visual data, i.e., visual tracking techniques from the ViSP library. These algorithms enable estimation of the 3-D object pose using a single view of the scene and the CAD model of the object. The performance of three main types of model-based trackers is analyzed and quantified: the edge-based, texture-based and hybrid trackers. The problems of visual tracking at the microscale are discussed. The control of the micromanipulation station used in the framework of our project is performed using a new Simulink block set. Experimental results are shown and demonstrate the possibility of obtaining a repeatability below 1 µm.
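The trackers benchmarked above come from the ViSP library; as a library-agnostic illustration of the underlying task, recovering a 3-D pose from a single view plus a CAD model, the sketch below uses OpenCV's solvePnP on hypothetical model-to-image point correspondences. It is not the ViSP edge/texture/hybrid tracking pipeline itself, and all coordinates and intrinsics are made up.

import numpy as np
import cv2

# hypothetical CAD model points of a micro-part (metres) and their
# detected image projections (pixels) for one camera view
object_pts = np.array([[0, 0, 0], [0.001, 0, 0], [0.001, 0.001, 0],
                       [0, 0.001, 0], [0, 0, 0.0005], [0.001, 0, 0.0005]], dtype=np.float64)
image_pts = np.array([[320, 240], [420, 242], [418, 341],
                      [318, 338], [322, 190], [421, 193]], dtype=np.float64)

K = np.array([[8000.0, 0, 320], [0, 8000.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                                             # assume no distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("pose found:", ok)
print("object position in camera frame (m):", tvec.ravel())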
E1a is an exogenous in vivo tumour suppressor.
Cimas, Francisco J; Callejas-Valera, Juan L; García-Olmo, Dolores C; Hernández-Losa, Javier; Melgar-Rojas, Pedro; Ruiz-Hidalgo, María J; Pascual-Serra, Raquel; Ortega-Muelas, Marta; Roche, Olga; Marcos, Pilar; Garcia-Gil, Elena; Fernandez-Aroca, Diego M; Ramón Y Cajal, Santiago; Gutkind, J Silvio; Sanchez-Prieto, Ricardo
2017-07-28
The E1a gene from adenovirus has become a major tool in cancer research. Since the discovery of E1a, it has been proposed to be an oncogene, becoming a key element in the model of cooperation between oncogenes. However, E1a's in vivo behaviour is consistent with a tumour suppressor gene, due to the block/delay observed in different xenograft models. To clarify this interesting controversy, we have evaluated the effect of the E1a 13s isoform from adenovirus 5 in vivo. Initially, a conventional xenograft approach was performed using previously unreported HCT116 and B16-F10 cells, showing a clear anti-tumour effect regardless of the mouse's immunological background (immunosuppressed/immunocompetent). Next, we engineered a transgenic mouse model in which inducible E1a 13s expression was under the control of cytokeratin 5 to avoid side effects during embryonic development. Our results show that E1a is able to block chemical skin carcinogenesis, showing an anti-tumour effect. The present report demonstrates the in vivo anti-tumour effect of E1a, showing that the in vitro oncogenic role of E1a cannot be extrapolated in vivo, supporting its future use in gene therapy approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
Permatasari, Galuh W; Utomo, Didik H; Widodo
2016-10-01
Designing a peptide agent for inducing type 2 diabetes mellitus (T2DM) in an animal model is challenging. The computational approach provides a sophisticated tool to design a functional peptide that may block insulin receptor activity. A peptide able to inhibit the binding between insulin and the insulin receptor is a candidate agent for inducing T2DM. Therefore, we designed a potential peptide inhibitor of the insulin receptor as an agent to generate a T2DM animal model using a bioinformatics approach. The peptide was developed based on the structure of the insulin receptor binding site of insulin and then modified to obtain the best properties of half-life, hydrophobicity, antigenicity, and stability of binding to the insulin receptor. The results showed that the modified peptide has a 100 h half-life, high affinity (-95.1±20), and high stability (28.17) in complex with the insulin receptor. Moreover, the modified peptide has a molecular weight of 4420.8 g/mol and no antigenic regions. Based on molecular dynamics simulation, the complex of the modified peptide with the insulin receptor is more stable than that with the commercial insulin receptor blocker. This study suggests that the modified peptide has promising performance in blocking insulin receptor activity and could potentially induce type 2 diabetes mellitus in mice. Copyright © 2016 Elsevier Ltd. All rights reserved.
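A small sketch of the kind of in silico property screening described above, using Biopython's ProtParam module on a placeholder sequence (the human insulin A-chain here, not the peptide designed in the study); acceptance thresholds are conventional rules of thumb, not the authors' criteria.

from Bio.SeqUtils.ProtParam import ProteinAnalysis

candidate = "GIVEQCCTSICSLYQLENYCN"   # placeholder sequence
pa = ProteinAnalysis(candidate)

print("molecular weight (g/mol):", round(pa.molecular_weight(), 1))
print("instability index:", round(pa.instability_index(), 2))   # < 40 suggests stability
print("GRAVY (hydrophobicity):", round(pa.gravy(), 3))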
Measuring accident risk exposure for pedestrians in different micro-environments.
Lassarre, Sylvain; Papadimitriou, Eleonora; Yannis, George; Golias, John
2007-11-01
Pedestrians are mainly exposed to the risk of road accidents when crossing a road in urban areas. Traditionally in the road safety field, the risk of accident for pedestrians is estimated as a rate of accident involvement per unit of time spent on the road network. The objective of this research is to develop an approach to accident risk based on the concept of risk exposure used in environmental epidemiology, such as in the case of exposure to pollutants. This type of indicator would be useful for comparing the effects of urban transportation policy scenarios on pedestrian safety. The first step is to create an indicator of pedestrians' exposure, which is based on motorised vehicles' "concentration" by lane and also takes account of traffic speed and the time spent crossing. This is applied to two specific micro-environments: junctions and mid-block locations. A model of pedestrians' crossing behaviour along a trip is then developed, based on a hierarchical choice between junctions and mid-block locations and taking account of origin and destination, traffic characteristics and pedestrian facilities. Finally, a complete framework is produced for modelling pedestrians' exposure in the light of their crossing behaviour. The feasibility of this approach is demonstrated on an artificial network, and a first set of results is obtained from the validation of the models in observational studies.
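One plausible form of the concentration-based exposure indicator sketched above, in Python: exposure accumulates, lane by lane, the vehicle concentration (flow over speed) weighted by the time the pedestrian needs to cross that lane. The functional form and all numbers are illustrative assumptions, not the published indicator.

def crossing_exposure(lanes, walk_speed=1.2, lane_width=3.5):
    # lanes: list of (flow in veh/h, speed in km/h) per lane to be crossed
    exposure = 0.0
    for flow, speed in lanes:
        concentration = flow / speed            # vehicles per km of lane
        t_cross = lane_width / walk_speed       # seconds spent in this lane
        exposure += concentration * t_cross
    return exposure

# mid-block crossing of a busy two-lane road vs a quieter one
print(crossing_exposure([(800, 50), (600, 50)]))
print(crossing_exposure([(300, 30), (300, 30)]))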
Defining landscapes suitable for restoration of grizzly bears (Ursus arctos) in Idaho
Merrill, Troy; Mattson, D.J.; Wright, R.G.; Quigley, Howard B.
1999-01-01
Informed management of large carnivores depends on the timely and useful presentation of relevant information. We describe an approach to evaluating carnivore habitat that uses pre-existing qualitative and quantitative information on humans and carnivores to generate coarse-scale maps of habitat suitability, habitat productivity, potential reserves, and areas of potential conflict. We use information pertinent to the contemplated reintroduction of grizzly bears Ursus arctos horribilis into central Idaho to demonstrate our approach. The approach uses measures of human numbers, their estimated distribution, road and trail access, and abundance and quality of bear foods to create standardized indices that are analogues of death and birth rates, respectively; the first subtracted from the second indicates habitat suitability (HS). We calibrate HS to sightings of grizzly bears in two ecosystems in northern Idaho and develop an empirical model from these same sightings based on piece-wise treatment of the variables contained in HS. Depending on whether the empirical model or HS is used, we estimate that there is 14 800 km2 of suitable habitat in two blocks or 37 100 km2 in one block in central Idaho, respectively. Both approaches show suitable habitat in the current Evaluation Area and in an area of southeastern Idaho centered on the Palisades Reservoir. Areas of highly productive habitat are concentrated in northern and western Idaho and in the Palisades area. Future conflicts between humans and bears are most likely to occur on the western and northern margins of suitable habitat in central Idaho, rather than to the east, where opposition to reintroduction of grizzly bears is currently strongest.
Implications of the earthquake cycle for inferring fault locking on the Cascadia megathrust
Pollitz, Fred; Evans, Eileen
2017-01-01
GPS velocity fields in the Western US have been interpreted with various physical models of the lithosphere-asthenosphere system: (1) time-independent block models; (2) time-dependent viscoelastic-cycle models, where deformation is driven by viscoelastic relaxation of the lower crust and upper mantle from past faulting events; (3) viscoelastic block models, a time-dependent variation of the block model. All three models are generally driven by a combination of loading on locked faults and (aseismic) fault creep. Here we construct viscoelastic block models and viscoelastic-cycle models for the Western US, focusing on the Pacific Northwest and the earthquake cycle on the Cascadia megathrust. In the viscoelastic block model, the western US is divided into blocks selected from an initial set of 137 microplates using the method of Total Variation Regularization, allowing potential trade-offs between faulting and megathrust coupling to be determined algorithmically from GPS observations. Fault geometry, slip rate, and locking rates (i.e. the locking fraction times the long-term slip rate) are estimated simultaneously within the TVR block model. For a range of mantle asthenosphere viscosity (4.4 × 10^18 to 3.6 × 10^20 Pa s) we find that fault locking on the megathrust is concentrated in the uppermost 20 km in depth, and a locking rate contour line of 30 mm yr^-1 extends deepest beneath the Olympic Peninsula, characteristics similar to previous time-independent block model results. These results are corroborated by viscoelastic-cycle modelling. The average locking rate required to fit the GPS velocity field depends on mantle viscosity: the lower the viscosity, the higher the rate. Moreover, for viscosity ≲ 10^20 Pa s, the amount of inferred locking is higher than that obtained using a time-independent block model. This suggests that time-dependent models for a range of admissible viscosity structures could refine our knowledge of the locking distribution and its epistemic uncertainty.
Highly Ordered Block Copolymer Templates for the Generation of Nanostructured Materials
NASA Astrophysics Data System (ADS)
Bhoje Gowd, E.; Nandan, Bhanu; Bigall, Nadja C.; Eychmuller, Alexander; Stamm, Manfred
2009-03-01
Among the many different types of self-assembled materials, block copolymers have attracted immense interest for applications in nanotechnology. Block copolymer thin films can be used as templates for patterning hard inorganic materials such as metal nanoparticles. In the present work, we demonstrate a new approach to fabricating highly ordered arrays of nanoscopic inorganic dots and wires using switchable block copolymer thin films. Various inorganic nanoparticles from a simple aqueous solution were directly deposited on the surface-reconstructed block copolymer templates. The preferential interaction of the nanoparticles with one of the blocks is mainly responsible for the lateral distribution of the nanoparticles, in addition to the capillary forces. Subsequent stabilization by UV-irradiation followed by pyrolysis in air at 450 °C removes the polymer to produce highly ordered metallic nanostructures. This method is highly versatile, as the procedure used here is simple, eco-friendly and provides a facile approach to fabricating a broad range of nanoscaled architectures with tunable lateral spacing.
Kovaleva, Marina; Johnson, Katherine; Steven, John; Barelle, Caroline J; Porter, Andrew
2017-01-01
Induced costimulatory ligand (ICOSL) plays an important role in the activation of T cells through its interaction with the inducible costimulator, ICOS. Suppression of full T cell activation can be achieved by blocking this interaction and has been shown to be an effective means of ameliorating disease in models of autoimmunity and inflammation. In this study, we demonstrated the ability of a novel class of anti-ICOSL antigen-binding single domains derived from sharks (VNARs) to effectively reduce inflammation in a murine model of non-infectious uveitis. In initial selections, specific VNARs that recognized human ICOSL were isolated from an immunized nurse shark phage display library and lead domains were identified following their performance in a series of antigen selectivity and in vitro bioassay screens. High potency in cell-based blocking assays suggested their potential as novel binders suitable for further therapeutic development. To test this hypothesis, surrogate anti-mouse ICOSL VNAR domains were isolated from the same phage display library and the lead VNAR clone selected via screening in binding and ICOS/ICOSL blocking experiments. The VNAR domain with the highest potency in cell-based blocking of ICOS/ICOSL interaction was fused to the Fc portion of human IgG1 and was tested in vivo in a mouse model of interphotoreceptor retinoid-binding protein-induced uveitis. The anti-mICOSL VNAR Fc, injected systemically, resulted in a marked reduction of inflammation in treated mice when compared with untreated control animals. This approach inhibited disease progression to an equivalent extent to that seen for the positive corticosteroid control, cyclosporin A, reducing both clinical and histopathological scores. These results represent the first demonstration of efficacy of a VNAR binding domain in a relevant clinical model of disease and highlight the potential of VNARs for the treatment of auto-inflammatory conditions.
NASA Astrophysics Data System (ADS)
Popa, L.; Popa, V.
2017-08-01
The article focuses on modeling an electro-pneumatically operated industrial robotic arm and on simulating its operation. The graphic language FBD (Function Block Diagram) is used to program the robotic arm on a Zelio Logic controller. The innovative modeling and simulation procedures address specific problems in the development of a new type of technical product in the field of robotics. Thus, new applications of a Programmable Logic Controller (PLC) were identified, as a specialized computer performing control functions at a variety of high levels of complexity.
Model-free inference of direct network interactions from nonlinear collective dynamics.
Casadiego, Jose; Nitzan, Mor; Hallerberg, Sarah; Timme, Marc
2017-12-19
The topology of interactions in network dynamical systems fundamentally underlies their function. Accelerating technological progress creates massively available data about collective nonlinear dynamics in physical, biological, and technological systems. Detecting direct interaction patterns from those dynamics still constitutes a major open problem. In particular, current nonlinear dynamics approaches mostly require a priori knowledge of a model of the (often high-dimensional) system dynamics. Here we develop a model-independent framework for inferring direct interactions solely from recordings of the nonlinear collective dynamics generated. Introducing an explicit dependency matrix in combination with a block-orthogonal regression algorithm, the approach works reliably across many dynamical regimes, including transient dynamics toward steady states, periodic and non-periodic dynamics, and chaos. Together with its capabilities to reveal network (two-point) as well as hypernetwork (e.g., three-point) interactions, this framework may thus open up nonlinear dynamics options for inferring direct interaction patterns across systems where no model is known.
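A condensed numpy sketch of the regression idea: each node's time derivative, measured by finite differences of recorded transients, is regressed on a library of candidate basis functions of all nodes, and the coefficients of the coupling terms recover the interaction matrix. The basis choice and setup are illustrative; the paper's block-orthogonal algorithm is more refined.

import numpy as np

rng = np.random.default_rng(5)
n = 5
W = (rng.random((n, n)) < 0.4).astype(float) * rng.normal(0, 1.0, (n, n))
np.fill_diagonal(W, 0.0)

def rhs(x):
    # ground-truth dynamics: dx_i/dt = -x_i + sum_j W_ij * tanh(x_j)
    return -x + np.tanh(x) @ W.T

# record transient trajectories from many random initial conditions
dt, steps, trials = 0.01, 40, 100
X, DX = [], []
for _ in range(trials):
    x = 3.0 * rng.standard_normal(n)        # start in the nonlinear regime
    for _ in range(steps):
        x_new = x + dt * rhs(x)
        X.append(x)
        DX.append((x_new - x) / dt)         # finite-difference "measurement"
        x = x_new
X, DX = np.array(X), np.array(DX)

# model-free step: regress derivatives on a library of candidate terms
library = np.column_stack([X, np.tanh(X)])
coef, *_ = np.linalg.lstsq(library, DX, rcond=None)
W_est = coef[n:, :].T                       # coefficients of the tanh coupling terms

print("max error in recovered couplings:", np.abs(W_est - W).max())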
Designing perturbative metamaterials from discrete models.
Matlack, Kathryn H; Serra-Garcia, Marc; Palermo, Antonio; Huber, Sebastian D; Daraio, Chiara
2018-04-01
Identifying material geometries that lead to metamaterials with desired functionalities presents a challenge for the field. Discrete, or reduced-order, models provide a concise description of complex phenomena, such as negative refraction, or topological surface states; therefore, the combination of geometric building blocks to replicate discrete models presenting the desired features represents a promising approach. However, there is no reliable way to solve such an inverse problem. Here, we introduce 'perturbative metamaterials', a class of metamaterials consisting of weakly interacting unit cells. The weak interaction allows us to associate each element of the discrete model with individual geometric features of the metamaterial, thereby enabling a systematic design process. We demonstrate our approach by designing two-dimensional elastic metamaterials that realize Veselago lenses, zero-dispersion bands and topological surface phonons. While our selected examples are within the mechanical domain, the same design principle can be applied to acoustic, thermal and photonic metamaterials composed of weakly interacting unit cells.
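As a minimal example of the discrete models that serve as design targets, the sketch below computes the Bloch dispersion of a 1-D diatomic mass-spring chain, the kind of reduced-order description a perturbative metamaterial would be built to replicate. Parameter values are arbitrary.

import numpy as np

m1, m2, k = 1.0, 2.0, 1.0          # masses and spring constant (arbitrary units)
ks = np.linspace(-np.pi, np.pi, 201)

bands = []
for q in ks:
    # mass-weighted Bloch dynamical matrix of the two-site unit cell (Hermitian)
    D = np.array([[2 * k / m1, -k * (1 + np.exp(-1j * q)) / np.sqrt(m1 * m2)],
                  [-k * (1 + np.exp(1j * q)) / np.sqrt(m1 * m2), 2 * k / m2]])
    w2 = np.linalg.eigvalsh(D)      # real, non-negative eigenvalues
    bands.append(np.sqrt(np.abs(w2)))

bands = np.array(bands)             # acoustic and optical branches
print("band gap:", bands[:, 0].max(), "to", bands[:, 1].min())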
Windley, Monique J; Mann, Stefan A; Vandenberg, Jamie I; Hill, Adam P
2016-07-01
Drug block of the voltage-gated potassium channel Kv11.1 (human ether-à-go-go related gene, hERG), encoded by the KCNH2 gene, is associated with reduced repolarization of the cardiac action potential and is the predominant cause of acquired long QT syndrome, which can lead to fatal cardiac arrhythmias. Current safety guidelines require that the potency of Kv11.1 block be assessed in the preclinical phase of drug development. However, not all drugs that block Kv11.1 are proarrhythmic, meaning that screening on the basis of equilibrium measures of block can result in high attrition of potentially low-risk drugs. The basis of the next generation of drug-screening approaches is set to be in silico risk prediction, informed by in vitro mechanistic descriptions of drug binding, including measures of the kinetics of block. A critical issue in this regard is characterizing the temperature dependence of drug binding. Specifically, it is important to address whether kinetics relevant to physiologic temperatures can be inferred or extrapolated from in vitro data gathered at room temperature in high-throughput systems. Here we present the first complete study of the temperature-dependent kinetics of block and unblock of a proarrhythmic drug, cisapride, at Kv11.1. Our data highlight a complexity to binding that manifests at higher temperatures and can be explained by accumulation of an intermediate, non-blocking encounter complex. These results suggest that for cisapride, physiologically relevant kinetic parameters cannot simply be extrapolated from those measured at lower temperatures; rather, data gathered at physiologic temperatures should be used to constrain in silico models that may be used for proarrhythmic risk prediction. Copyright © 2016 by The American Society for Pharmacology and Experimental Therapeutics.
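A schematic kinetic sketch of the encounter-complex interpretation: drug D and channel C form a non-blocking encounter complex E that can proceed to the blocked state B, integrated with SciPy. All rate constants are invented placeholders, not fitted cisapride values.

import numpy as np
from scipy.integrate import solve_ivp

# states: fraction free channel (C), encounter complex (E), blocked (B)
# D + C <-> E <-> B, with drug concentration held constant
D = 1e-7                 # mol/L, placeholder
k_on, k_off = 1e8, 10.0  # C <-> E rates (1/M/s, 1/s), placeholders
k_f, k_b = 5.0, 0.5      # E <-> B rates (1/s), placeholders

def rhs(t, y):
    C, E, B = y
    return [-k_on * D * C + k_off * E,
            k_on * D * C - (k_off + k_f) * E + k_b * B,
            k_f * E - k_b * B]

sol = solve_ivp(rhs, (0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
C, E, B = sol.y[:, -1]
print(f"steady state: free={C:.3f} encounter={E:.3f} blocked={B:.3f}")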
Turabee, Md Hasan; Thambi, Thavasyappan; Duong, Huu Thuy Trang; Jeong, Ji Hoon; Lee, Doo Sung
2018-02-27
Sustained delivery of protein therapeutics is limited owing to the fragile nature of proteins. Despite its great potential, delivery of proteins without any loss of bioactivity remains a challenge in the use of protein therapeutics in the clinic. To surmount this shortcoming, we report a pH- and temperature-responsive in situ-forming injectable hydrogel based on comb-type polypeptide block copolymers for the controlled delivery of proteins. Polypeptide block copolymers, composed of hydrophilic polyethylene glycol (PEG), temperature-responsive poly(γ-benzyl-L-glutamate) (PBLG), and pH-responsive oligo(sulfamethazine) (OSM), exhibit pH- and temperature-induced sol-to-gel transition behavior in aqueous solutions. Polypeptide block copolymers were synthesized by combining N-carboxyanhydride-based ring-opening polymerization and post-functionalization of the chain-end using N-hydroxysuccinimide ester-activated OSM. The physical properties of the polypeptide-based hydrogels were tuned by varying the composition of temperature- and pH-responsive PBLG and OSM in the block copolymers. Polypeptide block copolymers were non-toxic to human embryonic kidney cells at high concentrations (2000 μg mL⁻¹). Subcutaneous administration of polypeptide block copolymer sols formed a viscoelastic gel instantly at the back of Sprague-Dawley (SD) rats. The in vivo gels exhibited sustained degradation and were found to be bioresorbable in 6 weeks without any noticeable inflammation at the injection site. The anionic character of the hydrogels allows efficient loading of a cationic model protein, lysozyme, through electrostatic interaction. Lysozyme-loaded polypeptide block copolymer sols readily formed a viscoelastic gel in vivo and sustained lysozyme release for at least a week. Overall, the results demonstrate an elegant approach to controlling the release of certain charged proteins and open a myriad of therapeutic possibilities in protein therapeutics.
Lund, Ida K.; Rasch, Morten G.; Ingvarsen, Signe; Pass, Jesper; Madsen, Daniel H.; Engelholm, Lars H.; Behrendt, Niels; Høyer-Hansen, Gunilla
2012-01-01
Identification of targets for cancer therapy requires the understanding of the in vivo roles of proteins, which can be derived from studies using gene-targeted mice. An alternative strategy is the administration of inhibitory monoclonal antibodies (mAbs), causing acute disruption of the target protein function(s). This approach has the advantage of being a model for therapeutic targeting. mAbs for use in mouse models can be obtained through immunization of gene-deficient mice with the autologous protein. Such mAbs react with both species-specific epitopes and epitopes conserved between species. mAbs against proteins involved in extracellular proteolysis, including plasminogen activators urokinase plasminogen activator (uPA), tissue-type plasminogen activator (tPA), their inhibitor PAI-1, the uPA receptor (uPAR), two matrix metalloproteinases (MMP9 and MMP14), as well as the collagen internalization receptor uPARAP, have been developed. The inhibitory mAbs against uPA and uPAR block plasminogen activation and thereby hepatic fibrinolysis in vivo. Wound healing, another plasmin-dependent process, is delayed by an inhibitory mAb against uPA in the adult mouse. Thromboembolism can be inhibited by anti-PAI-1 mAbs in vivo. In conclusion, function-blocking mAbs are well-suited for targeted therapy in mouse models of different diseases, including cancer. PMID:22754528
Hadagali, Prasannaah; Peters, James R; Balasubramanian, Sriram
2018-03-01
Personalized Finite Element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral mesh. It is tedious to manually construct multi-blocks for a large number of geometries on an individual basis to develop personalized FE models. Mesh-morphing methods mitigate the aforementioned tedium of meshing each personalized geometry, but lead to element warping and loss of geometrical data. Such issues increase in magnitude when a normative spine FE model is morphed to scoliosis-affected spinal geometry. The only way to bypass the issue of hex-mesh distortion or loss of geometry as a result of morphing is to rely on manually constructing the multi-blocks for the scoliosis-affected spine geometry of each individual, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Constructing the multi-blocks took 3 months for the normative spine and less than a day for the scoliosis spine. The effort required to construct multi-blocks on personalized scoliosis spinal geometries is significantly reduced by morphing existing multi-blocks.
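A compact sketch of the general morphing step that such methods build on: landmark pairs on a normative vertebra and its scoliosis counterpart drive a SciPy radial-basis-function warp applied to the multi-block corner vertices. All coordinates are fabricated for illustration; this is not the authors' semi-automated pipeline.

import numpy as np
from scipy.interpolate import RBFInterpolator

# fabricated landmark pairs: normative vertebra -> scoliosis vertebra (mm)
src = np.array([[0, 0, 0], [40, 0, 0], [40, 30, 0], [0, 30, 0],
                [0, 0, 25], [40, 0, 25], [40, 30, 25], [0, 30, 25.]])
dst = src + np.array([3, -2, 1]) + 0.1 * src[:, [1, 0, 2]]   # made-up deformation

warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# multi-block corner vertices of the normative template (fabricated)
block_corners = np.array([[10, 5, 5], [30, 5, 5], [30, 25, 20], [10., 25, 20]])
morphed = warp(block_corners)
print(morphed)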
Computational Design of Biomimetic Gels With Properties of Human Tissues
2008-12-01
poly(styrene-block-isoprene-block-styrene) copolymer, or SIS, in the I-selective solvent has been chosen as a model triblock copolymer for this study ... [Fig. 2: schematic representation of an A1B3A1 triblock copolymer mapped onto the DPD model] ... making G 2.25 times higher as c changes from 0.16 to 0.33 (densities of the styrene and isoprene blocks are taken to be 1.04 and 0.913 g/cm3)
Procedure for assessing the performance of a rockfall fragmentation model
NASA Astrophysics Data System (ADS)
Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert
2017-04-01
Rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. The detached rock mass often becomes fragmented when it impacts the slope surface. Consideration of the fragmentation of the rockfall mass is critical for the calculation of the blocks' trajectories and impact energies, to further assess their potential to cause damage and to design adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls, based on a lumped-mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass along pre-existing discontinuities just before impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass between a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of the runout of the blocks, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and volume passages past a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the final parameters selected after the calibration process are a compromise that meets all the considered criteria. This methodology has been tested on some recent rockfalls where high fragmentation was observed. The RockGIS tool and the fragmentation laws, using data collected from recent rockfalls, have been developed within the RockRisk project (2014-2016, BIA2013-42582-P). This project was funded by the Spanish Ministerio de Economía y Competitividad.
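A small numpy sketch of the stochastic fragmentation step: when an impacting block exceeds an energy threshold, its volume is split among fragments whose sizes follow a power law and whose departure directions are drawn inside a cone. The exponent, threshold, and cone angle are illustrative choices, not calibrated RockGIS values.

import numpy as np

rng = np.random.default_rng(7)

def fragment(volume, energy, e_threshold=5e3, n_frag=6, alpha=1.8, cone_deg=20.0):
    # Return (fragment volumes, unit direction vectors) after an impact.
    if energy < e_threshold:
        return np.array([volume]), np.array([[0.0, 0.0, 1.0]])
    # power-law sizes, rescaled so fragment volumes sum to the parent volume
    sizes = rng.pareto(alpha, n_frag) + 1.0
    vols = volume * sizes / sizes.sum()
    # departure directions inside a cone around the rebound axis (z here)
    half = np.radians(cone_deg)
    theta = half * np.sqrt(rng.random(n_frag))
    phi = 2 * np.pi * rng.random(n_frag)
    dirs = np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])
    return vols, dirs

vols, dirs = fragment(volume=2.0, energy=1e4)
print(vols.sum(), vols)   # volume is conserved across fragments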
Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets
NASA Astrophysics Data System (ADS)
Morelli, Gian Luca
2016-05-01
The estimation of a representative value for the rock block volume (V_b) is of great interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not exist when more than three sets occur. In these cases, a consistent assessment of V_b can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. Nevertheless, Palmström's empirical relationship, based on the volumetric joint count J_v and on a block shape factor β, is commonly used in practice, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper investigates the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e., J_v and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression was found between the mean block volume (V_b,m) of the tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting the blocks, indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential of predicting the mean block volume of multiple-set jointed rock masses with an accuracy acceptable for common uses in most practical rock engineering applications.
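For reference, Palmström's three-set relationship, and one way to write the four-set estimator based on the geometric mean spacing that the study investigates, can be expressed as below. The cubic scaling and the fitted constant k in the second line are our paraphrase of how such a best-fit would be stated; the paper's actual regression constants are not reproduced here.

V_b = \beta \, J_v^{-3} \qquad \text{(Palmström, three joint sets)}

\bar{S}_g = \Big( \prod_{i=1}^{4} S_i \Big)^{1/4}, \qquad V_{b,m} \approx k \, \bar{S}_g^{\,3} \qquad \text{(four sets, } S_i = \text{mean spacing of set } i\text{)}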
Penocchio, Emanuele; Piccardo, Matteo; Barone, Vincenzo
2015-10-13
The B2PLYP double hybrid functional, coupled with the correlation-consistent triple-ζ cc-pVTZ (VTZ) basis set, has been validated in the framework of the semiexperimental (SE) approach for deriving accurate equilibrium structures of molecules containing up to 15 atoms. A systematic comparison between the new B2PLYP/VTZ results and several equilibrium SE structures previously determined at other levels, in particular B3LYP/SNSD and CCSD(T) with various basis sets, has highlighted the accuracy and the remarkable stability of this model chemistry for both equilibrium structures and vibrational corrections. New SE equilibrium structures for phenylacetylene, pyruvic acid, peroxyformic acid, and the phenyl radical are discussed and compared with literature data. Particular attention has been devoted to the discussion of systems for which a lack of sufficient experimental data prevents a complete SE determination. In order to obtain an accurate equilibrium SE structure in these situations, the so-called templating molecule approach is discussed and generalized with respect to our previous work. Important applications are those involving biological building blocks, like uracil and thiouracil. In addition, for more general situations the linear regression approach has been proposed and validated.
Global Dynamic Exposure and the OpenBuildingMap
NASA Astrophysics Data System (ADS)
Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.
2015-12-01
Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find the balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100'000 per day), and a plethora of information about school, hospital, and other critical facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility classes distribution via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building capture tools for mobile devices for simple and fast building property capturing. The second crowd-sourced approach involves local experts in estimating building vulnerability that will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.
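A minimal example of harvesting OSM building footprints of the kind described, via the public Overpass API. The bounding box (central Potsdam here) is arbitrary, the endpoint is the community server, and the response parsing assumes Overpass's count-output format, so treat this as an assumption-laden sketch rather than project code.

import requests

query = """
[out:json][timeout:25];
way["building"](52.39,13.04,52.41,13.07);   // south,west,north,east
out count;
"""
r = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
r.raise_for_status()
elements = r.json()["elements"]
print("building footprints in bbox:", elements[0]["tags"]["total"])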
Intradomain phase transitions in flexible block copolymers with self-aligning segments.
Burke, Christopher J; Grason, Gregory M
2018-05-07
We study a model of flexible block copolymers (BCPs) in which there is an enthalpic preference for orientational order, or local alignment, among like-block segments. We describe a generalization of the self-consistent field theory of flexible BCPs to include inter-segment orientational interactions via a Landau-de Gennes free energy associated with a polar or nematic order parameter for segments of one component of a diblock copolymer. We study the equilibrium states of this model numerically, using a pseudo-spectral approach to solve for chain conformation statistics in the presence of a self-consistent torque generated by inter-segment alignment forces. Applying this theory to the structure of lamellar domains composed of symmetric diblocks possessing a single block of "self-aligning" polar segments, we show the emergence of spatially complex segment order parameters (segment director fields) within a given lamellar domain. Because BCP phase separation gives rise to spatially inhomogeneous orientation order of segments even in the absence of explicit intra-segment aligning forces, the director fields of BCPs, as well as the thermodynamics of lamellar domain formation, exhibit a highly non-linear dependence on both the inter-block segregation (χN) and the enthalpy of alignment (ε). Specifically, we predict the stability of new phases of lamellar order in which distinct regions of alignment coexist within the single mesodomain and spontaneously break the symmetries of the lamella (or smectic) pattern of composition in the melt via in-plane tilt of the director in the centers of the like-composition domains. We further show that, in analogy to the Freedericksz transition in confined nematics, the elastic costs to reorient segments within the domain, as described by the Frank elasticity of the director, increase the threshold value ε needed to induce this intra-domain phase transition.
Koski, Jason P; Riggleman, Robert A
2017-04-28
Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.
Huston, P.
1998-01-01
PROBLEM BEING ADDRESSED: Writer's block, or a distinctly uncomfortable inability to write, can interfere with professional productivity. OBJECTIVE OF PROGRAM: To identify writer's block and to outline suggestions for its early diagnosis, treatment, and prevention. MAIN COMPONENTS OF PROGRAM: Once the diagnosis has been established, a stepwise approach to care is recommended. Mild blockage can be resolved by evaluating and revising expectations, conducting a task analysis, and giving oneself positive feedback. Moderate blockage can be addressed by creative exercises, such as brainstorming and role-playing. Recalcitrant blockage can be resolved with therapy. Writer's block can be prevented by taking opportunities to write at the beginning of projects, working with a supportive group of people, and cultivating an ongoing interest in writing. CONCLUSIONS: Writer's block is a highly treatable condition. A systematic approach can help to alleviate anxiety, build confidence, and give people the information they need to work productively. PMID:9481467
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
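To make the meshing step concrete, here is a minimal sketch of transfinite interpolation (a Coons patch) for a single quadrilateral block; the four boundary curves are hypothetical placeholders, not geometry from the paper.

    import numpy as np

    # Minimal 2D transfinite interpolation (Coons patch) for one block bounded
    # by four known curves. Boundary shapes are illustrative assumptions.
    n = 21
    U, V = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    Uc, Vc = U[..., None], V[..., None]

    def bottom(t): return np.stack([t, 0.1 * np.sin(np.pi * t)], axis=-1)
    def top(t):    return np.stack([t, np.ones_like(t)], axis=-1)
    def left(t):   return np.stack([np.zeros_like(t), t], axis=-1)
    def right(t):  return np.stack([np.ones_like(t), t], axis=-1)

    # Blend the four boundaries, then subtract the bilinear corner term.
    corners = ((1 - Uc) * (1 - Vc) * bottom(np.zeros(1))
               + Uc * (1 - Vc) * bottom(np.ones(1))
               + (1 - Uc) * Vc * top(np.zeros(1))
               + Uc * Vc * top(np.ones(1)))
    grid = ((1 - Vc) * bottom(U) + Vc * top(U)
            + (1 - Uc) * left(V) + Uc * right(V) - corners)
    print(grid.shape)   # (21, 21, 2): structured mesh coordinates for the block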
Radiometric Block Adjustment and Digital Radiometric Model Generation
NASA Astrophysics Data System (ADS)
Pros, A.; Colomina, I.; Navarro, J. A.; Antequera, R.; Andrinal, P.
2013-05-01
In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In the paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmosphere radiative transfer parameters, and ground reflectances and BRDFs at the radiometric tie points are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated showing no radiometric differences at the seam lines.
Bergman, C M; Kreitman, M
2001-08-01
Comparative genomic approaches to gene and cis-regulatory prediction are based on the principle that differential DNA sequence conservation reflects variation in functional constraint. Using this principle, we analyze noncoding sequence conservation in Drosophila for 40 loci with known or suspected cis-regulatory function encompassing >100 kb of DNA. We estimate the fraction of noncoding DNA conserved in both intergenic and intronic regions and describe the length distribution of ungapped conserved noncoding blocks. On average, 22%-26% of noncoding sequences surveyed are conserved in Drosophila, with median block length approximately 19 bp. We show that point substitution in conserved noncoding blocks exhibits transition bias as well as lineage effects in base composition, and occurs more than an order of magnitude more frequently than insertion/deletion (indel) substitution. Overall, patterns of noncoding DNA structure and evolution differ remarkably little between intergenic and intronic conserved blocks, suggesting that the effects of transcription per se contribute minimally to the constraints operating on these sequences. The results of this study have implications for the development of alignment and prediction algorithms specific to noncoding DNA, as well as for models of cis-regulatory DNA sequence evolution.
NASA Astrophysics Data System (ADS)
Martin, D. F.; Cornford, S. L.; Schwartz, P.; Bhalla, A.; Johansen, H.; Ng, E.
2017-12-01
Correctly representing grounding line and calving-front dynamics is of fundamental importance in modeling marine ice sheets, since the configuration of these interfaces exerts a controlling influence on the dynamics of the ice sheet. Traditional ice sheet models have struggled to correctly represent these regions without very high spatial resolution. We have developed a front-tracking discretization for grounding lines and calving fronts based on the Chombo embedded-boundary cut-cell framework. This promises better representation of these interfaces vs. a traditional stair-step discretization on Cartesian meshes like those currently used in the block-structured AMR BISICLES code. The dynamic adaptivity of the BISICLES model complements the subgrid-scale discretizations of this scheme, producing a robust approach for tracking the evolution of these interfaces. Also, the fundamental discontinuous nature of flow across grounding lines is respected by mathematically treating it as a material phase change. We present examples of this approach to demonstrate its effectiveness.
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information and a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests indicating values differing by orders of magnitude. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
Okada, Jun-Ichi; Washio, Takumi; Nakagawa, Machiko; Watanabe, Masahiro; Kadooka, Yoshimasa; Kariya, Taro; Yamashita, Hiroshi; Yamada, Yoko; Momomura, Shin-Ichi; Nagai, Ryozo; Hisada, Toshiaki; Sugiura, Seiryo
2018-01-01
Background: Cardiac resynchronization therapy is an effective device therapy for heart failure patients with conduction block. However, a problem with this invasive technique is the nearly 30% of non-responders. A number of studies have reported a functional line of block of cardiac excitation propagation in responders. However, this can only be detected using non-contact endocardial mapping. Further, although the line of block is considered a sign of responders to therapy, the mechanism remains unclear. Methods: Herein, we created two patient-specific heart models with conduction block and simulated the propagation of excitation based on a cell model of electrophysiology. In one model with a relatively narrow QRS width (176 ms), we modeled the Purkinje network using a thin endocardial layer with rapid conduction. To reproduce a wider QRS complex (200 ms) in the second model, we eliminated the Purkinje network, and we simulated the endocardial mapping by solving the inverse problem according to the actual mapping system. Results: We successfully observed the line of block using non-contact mapping in the model without the rapid propagation of excitation through the Purkinje network, although the excitation in the wall propagated smoothly. This model of slow conduction also reproduced the characteristic properties of the line of block, including dense isochronal lines and fractionated local electrograms. Further, simulation of ventricular pacing from the lateral wall shifted the location of the line of block. By contrast, in the model with the Purkinje network, propagation of excitation in the endocardial map faithfully followed the actual propagation in the wall, without showing the line of block. Finally, switching the mode of propagation between the two models completely reversed these findings. Conclusions: Our simulation data suggest that the absence of rapid propagation of excitation through the Purkinje network is the major cause of the functional line of block recorded by non-contact endocardial mapping. The line of block can be used to identify responders, as these patients lose rapid propagation through the Purkinje network.
NASA Astrophysics Data System (ADS)
Schobelock, J.; Stamps, D. S.; Pagani, M.; Garcia, J.; Styron, R. H.
2017-12-01
The Caribbean and Central America region (CCAR) undergoes the entire spectrum of earthquake types due to its complex tectonic setting, comprised of transform zones, young oceanic spreading ridges, and subduction zones along its eastern and western boundaries. CCAR is, therefore, an ideal setting in which to study the impacts of long-term tectonic deformation on the distribution of present-day seismic activity. In this work, we develop a continuous tectonic strain rate model based on inter-seismic geodetic data and compare it with known active faults and earthquake focal mechanism data. We first create a 0.25° x 0.25° finite element mesh that is comprised of block geometries defined in previous studies. Second, we isolate and remove transient signals from the latest open-access community velocity solution from UNAVCO, which includes 339 velocities from COCONet and TLALOCNet GNSS data for the Caribbean and Central America, respectively. In a third step, we define zones of deformation and rigidity by creating a buffer around the boundary of each block that varies depending on the size of the block and the expected deformation zone based on locations of GNSS data that are consistent with rigid block motion. We then assign each node within the buffer a 0 for the deforming areas and a plate index outside the buffer for the rigid areas. Finally, we calculate a tectonic strain rate model for CCAR using the Haines and Holt finite element approach to fit bi-cubic Bessel splines to the GNSS/GPS data, assuming block rotation for zones of rigidity. Our model of the CCAR is consistent with compression along subduction zones, extension across the mid-Pacific Rise, and a combination of compression and extension across the North America - Caribbean plate boundary. The majority of CCAR strain rate magnitudes range from -60 to 60 nanostrains/yr. Modeling results are then used to calculate expected faulting behaviors that we compare with mapped geologic faults and seismic activity.
Wu, Sangwook
2009-03-01
We investigate dynamical self-arrest in a diblock copolymer melt using a replica approach within a self-consistent local method based on dynamical mean-field theory (DMFT). The local replica approach effectively predicts (χN)_A for dynamical self-arrest in a block copolymer melt for symmetric and asymmetric cases. We discuss the competition of the cubic and quartic interactions in the Landau free energy for a block copolymer melt in stabilizing a glassy state depending on the chain length. Our local replica theory provides a universal value for the dynamical self-arrest in block copolymer melts with (χN)_A ≈ 10.5 + 64N^(-3/10) for the symmetric case.
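A quick numeric reading of the quoted threshold, purely illustrative: as N grows, (χN)_A decays toward 10.5, close to the familiar mean-field value χN ≈ 10.495 for the symmetric diblock order-disorder transition.

    # Evaluate the quoted self-arrest threshold (chiN)_A ≈ 10.5 + 64 N^(-3/10)
    # for a few chain lengths N (values chosen for illustration only).
    for N in (100, 1000, 10000):
        chiN_A = 10.5 + 64 * N ** (-0.3)
        print(f"N = {N:6d}   (chiN)_A ≈ {chiN_A:.2f}")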
A zone-based approach to identifying urban land uses using nationally-available data
NASA Astrophysics Data System (ADS)
Falcone, James A.
Accurate identification of urban land use is essential for many applications in environmental study, ecological assessment, and urban planning, among other fields. However, because physical surfaces of land cover types are not necessarily related to their use and economic function, differentiating among thematically-detailed urban land uses (single-family residential, multi-family residential, commercial, industrial, etc.) using remotely-sensed imagery is a challenging task, particularly over large areas. Because the process requires an interpretation of tone/color, size, shape, pattern, and neighborhood association elements within a scene, it has traditionally been accomplished via manual interpretation of aerial photography or high-resolution satellite imagery. Although success has been achieved for localized areas using various automated techniques based on high-spatial or high-spectral resolution data, few detailed (Anderson Level II equivalent or greater) urban land use mapping products have successfully been created via automated means for broad (multi-county or larger) areas, and no such product exists today for the United States. In this study I argue that by employing a zone-based approach it is feasible to map thematically-detailed urban land use classes over large areas using appropriate combinations of non-image based predictor data which are nationally and publicly available. The approach presented here uses U.S. Census block groups as the basic unit of geography, and predicts the percent of each of ten land use types---nine of them urban---for each block group based on a number of data sources, to include census data, nationally-available point locations of features from the USGS Geographic Names Information System, historical land cover, and metrics which characterize spatial pattern, context (e.g. distance to city centers or other features), and measures of spatial autocorrelation. The method was demonstrated over a four-county area surrounding the city of Boston. A generalized version of the method (six land use classes) was also developed and cross-validated among additional geographic settings: Atlanta, Los Angeles, and Providence. The results suggest that even with the thematically-detailed ten-class structure, it is feasible to map most urban land uses with reasonable accuracy at the block group scale, and results improve with class aggregation. When classified by predicted majority land use, 79% of block groups correctly matched the actual majority land use with the ten-class models. Six-class models typically performed well for the geographic area they were developed from, however models had mixed performance when transported to other geographic settings. Contextual variables, which characterized a block group's spatial relationship to city centers, transportation routes, and other amenities, were consistently strong predictors of most land uses, a result which corresponds to classic urban land use theory. The method and metrics derived here provide a prototype for mapping urban land uses from readily-available data over broader geographic areas than is generally practiced today using current image-based solutions.
Solar Power Satellite (SPS) solid-state antenna power combiner
NASA Technical Reports Server (NTRS)
1980-01-01
A low loss power-combining microstrip antenna suitable for solid state solar power satellite (SPS) application was developed. A unique approach for performing both the combining and radiating function in a single cavity-type circuit was verified, representing substantial refinements over previous demonstration models in terms of detailed geometry to obtain good matching and adequate bandwidth at the design frequency. The combiner circuit was designed, built, and tested and the overall results support the view that the solid state power-combining antenna approach is a viable candidate for a solid state SPS antenna building block.
CFD Methods and Tools for Multi-Element Airfoil Analysis
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; George, Michael W. (Technical Monitor)
1995-01-01
This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary-layer coupling methods and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined: one uses multi-block patched grids, the other overset chimera grids. Turbulence and transition modeling will be discussed.
Convolutional Dictionary Learning: Acceleration and Convergence
NASA Astrophysics Data System (ADS)
Chun, Il Yong; Fessler, Jeffrey A.
2018-04-01
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
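For readers unfamiliar with the majorized proximal gradient idea underlying BPG-M, here is a toy single-block version applied to a simple sparse coding problem; the scalar majorizer and soft-threshold prox are standard textbook choices assumed for illustration, not the paper's CDL solver.

    import numpy as np

    # Toy majorized proximal gradient for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    # A scalar majorizer M with M*I >= A^T A replaces step-size tuning; this
    # mirrors the majorizer idea only, not the paper's multi-block CDL scheme.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    y = rng.standard_normal(40)
    lam = 0.1
    M = np.linalg.norm(A, 2) ** 2          # spectral-norm-based majorizer

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = np.zeros(100)
    for _ in range(200):
        grad = A.T @ (A @ x - y)
        x = soft(x - grad / M, lam / M)    # prox step under the majorizer metric
    obj = 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.abs(x).sum()
    print(f"nonzeros: {(x != 0).sum()}, objective: {obj:.3f}")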
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
Guide on the Effective Block Approach for the Fatigue Life Assessment of Metallic Structures
2013-01-01
[Abstract garbled in extraction; recoverable fragments follow.] Glossary: NDI, Non-Destructive Inspection; QF, Quantitative Fractography; RAAF, Royal Australian Air Force. LEFM forms the basis of most state-of-the-art crack growth (CG) models. The preferred method for obtaining the crack growth rate (CGR) data is quantitative fractography (QF), a method well suited to small cracks where other measurement…
Ecohydrologic process modeling of mountain block groundwater recharge.
Magruder, Ian A; Woessner, William W; Running, Steve W
2009-01-01
Regional mountain block recharge (MBR) is a key component of alluvial basin aquifer systems typical of the western United States. Yet neither water scientists nor resource managers have a commonly available and reasonably invoked quantitative method to constrain MBR rates. Recent advances in landscape-scale ecohydrologic process modeling offer the possibility that meteorological data and land surface physical and vegetative conditions can be used to generate estimates of MBR. A water balance was generated for a temperate 24,600-ha mountain watershed, elevation 1565 to 3207 m, using the ecosystem process model Biome-BGC (BioGeochemical Cycles) (Running and Hunt 1993). Input data included remotely sensed landscape information and climate data generated with the Mountain Climate Simulator (MT-CLIM) (Running et al. 1987). Estimated mean annual MBR flux into the crystalline bedrock terrain is 99,000 m³/d, or approximately 19% of annual precipitation for the 2003 water year. Controls on MBR predictions include evapotranspiration (radiation limited in wet years and moisture limited in dry years), soil properties, vegetative ecotones (significant at lower elevations), and snowmelt (dominant recharge process). The ecohydrologic model is also used to investigate how climatic and vegetative controls influence recharge dynamics within three elevation zones. The ecohydrologic model proves useful for investigating controls on recharge to mountain blocks as a function of climate and vegetation. Future efforts will need to investigate the uncertainty in the modeled water balance by incorporating an advanced understanding of mountain recharge processes, an ability to simulate those processes at varying scales, and independent approaches to calibrating MBR estimates. Copyright © 2009 The Author(s). Journal compilation © 2009 National Ground Water Association.
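A back-of-the-envelope check of the quoted figures (illustrative arithmetic only, using the 2003 water-year numbers above): converting the daily MBR flux to an annual volume and dividing by the 19% fraction implies a plausible precipitation depth for the watershed.

    # Check: MBR flux of 99,000 m^3/d stated as ~19% of annual precipitation
    # over a 24,600 ha watershed (illustrative arithmetic, not model output).
    flux_m3_per_day = 99_000
    area_m2 = 24_600 * 10_000          # 1 ha = 10,000 m^2
    annual_mbr_m3 = flux_m3_per_day * 365
    precip_m3 = annual_mbr_m3 / 0.19   # precipitation implied by the 19% figure
    print(f"annual MBR: {annual_mbr_m3 / 1e6:.1f} million m^3")
    print(f"implied precipitation depth: {precip_m3 / area_m2 * 1000:.0f} mm/yr")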
Slope-scale dynamic states of rockfalls
NASA Astrophysics Data System (ADS)
Agliardi, F.; Crosta, G. B.
2009-04-01
Rockfalls are common earth surface phenomena characterised by complex dynamics at the slope scale, depending on local block kinematics and slope geometry. We investigated the nature of this slope-scale dynamics by parametric 3D numerical modelling of rockfalls over synthetic slopes with different inclination, roughness and spatial resolution. Simulations were performed with an original code specifically designed for rockfall modelling, incorporating kinematic and hybrid algorithms with different damping functions available to model local energy loss by impact and pure rolling. Modelling results in terms of average velocity profiles suggest that three dynamic regimes (i.e. decelerating, steady-state and accelerating), previously recognized in the literature through laboratory experiments on granular flows, can set up at the slope scale depending on slope average inclination and roughness. Sharp changes in rockfall kinematics, including motion type and lateral dispersion of trajectories, are associated with the transition among different regimes. The associated threshold conditions, portrayed in "phase diagrams" as slope-roughness critical lines, were analysed as a function of block size, impact/rebound angles, velocity and energy, and model spatial resolution. Motion in regime B (i.e. steady state) is governed by a slope-scale "viscous friction", with average velocity linearly related to the sine of slope inclination. This suggests an analogy between rockfall motion in regime B and Newtonian flow, whereas in regime C (i.e. accelerating) an analogy with a dilatant flow was observed. Thus, although the local behaviour of single falling blocks is well described by rigid body dynamics, the slope-scale dynamics of rockfalls seems to statistically approach that of granular media. Possible outcomes of these findings include a discussion of the transition from rockfall to granular flow, the evaluation of the reliability of predictive models, and the implementation of criteria for a preliminary evaluation of hazard assessment and countermeasure planning.
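A small sketch of the regime-B check described above: fitting average block velocity against the sine of slope inclination to test the linear "viscous friction" relation. The data values are synthetic placeholders, not simulation output.

    import numpy as np

    # Regime-B check: is mean rockfall velocity linear in sin(slope inclination)?
    # Velocities below are synthetic placeholders for illustration.
    incl_deg = np.array([30, 35, 40, 45, 50])
    v_mean = np.array([7.9, 9.2, 10.3, 11.4, 12.3])   # hypothetical m/s
    s = np.sin(np.radians(incl_deg))
    slope, intercept = np.polyfit(s, v_mean, 1)        # least-squares line
    r = np.corrcoef(s, v_mean)[0, 1]
    print(f"v ≈ {slope:.1f}*sin(i) + {intercept:.1f}  (r = {r:.3f})")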
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate for evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and copilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. One lesson learned was that this approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and pretest predictions. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, potentially reducing overall development costs.
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
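A toy illustration of the final step described above: given a model covariance matrix C and ray sensitivity vectors, the covariance between two travel-time predictions is a quadratic form in C. All inputs here are synthetic, and unit sensitivities stand in for real path-length kernels.

    import numpy as np

    # Toy travel-time covariance from a model covariance matrix C:
    # with ray sensitivity vectors g1, g2, cov(t1, t2) = g1^T C g2; setting
    # g1 = g2 and taking the square root gives the single-path uncertainty.
    rng = np.random.default_rng(1)
    n = 50                                   # number of model cells (tiny toy model)
    A = rng.standard_normal((n, n))
    C = A @ A.T / n                          # synthetic positive-definite covariance
    g1 = np.zeros(n); g1[5:15] = 1.0         # cells touched by ray 1 (unit weights)
    g2 = np.zeros(n); g2[10:20] = 1.0        # cells touched by ray 2

    cov12 = g1 @ C @ g2                      # covariance between the two predictions
    sigma1 = np.sqrt(g1 @ C @ g1)            # single-path prediction uncertainty
    print(f"cov(t1, t2) = {cov12:.3f}, sigma(t1) = {sigma1:.3f} (arbitrary units)")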
Spatial dynamics of ecosystem service flows: a comprehensive approach to quantifying actual services
Bagstad, Kenneth J.; Johnson, Gary W.; Voigt, Brian; Villa, Ferdinando
2013-01-01
Recent ecosystem services research has highlighted the importance of spatial connectivity between ecosystems and their beneficiaries. Despite this need, a systematic approach to ecosystem service flow quantification has not yet emerged. In this article, we present such an approach, which we formalize as a class of agent-based models termed “Service Path Attribution Networks” (SPANs). These models, developed as part of the Artificial Intelligence for Ecosystem Services (ARIES) project, expand on ecosystem services classification terminology introduced by other authors. Conceptual elements needed to support flow modeling include a service's rivalness, its flow routing type (e.g., through hydrologic or transportation networks, lines of sight, or other approaches), and whether the benefit is supplied by an ecosystem's provision of a beneficial flow to people or by absorption of a detrimental flow before it reaches them. We describe our implementation of the SPAN framework for five ecosystem services and discuss how to generalize the approach to additional services. SPAN model outputs include maps of ecosystem service provision, use, depletion, and flows under theoretical, possible, actual, inaccessible, and blocked conditions. We highlight how these different ecosystem service flow maps could be used to support various types of decision making for conservation and resource management planning.
A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, Hwa-Soo
1991-01-01
A large-grain mapping method is presented for numerically oriented applications on multiprocessor systems. The method is based on the large-grain data flow representation of the input application and assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best matching' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
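To illustrate step (3), here is a minimal priority-based list scheduler for a task DAG on identical processors; the task durations, dependency edges, longest-task-first priorities, and zero communication cost are simplifying assumptions, not the thesis's actual allocation scheme.

    import heapq

    # Minimal priority list scheduling of a task DAG onto 2 processors.
    # Durations, edges, and priorities are illustrative assumptions;
    # interprocessor communication costs are ignored for brevity.
    dur = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
    deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["C"]}
    prio = {t: -dur[t] for t in dur}           # longer tasks first (common heuristic)

    done, finish = set(), {}
    procs = [(0.0, p) for p in range(2)]       # (time processor becomes free, id)
    heapq.heapify(procs)
    while len(done) < len(dur):
        ready = [t for t in dur if t not in done and all(d in done for d in deps[t])]
        task = min(ready, key=lambda t: prio[t])
        free_at, p = heapq.heappop(procs)
        start = max(free_at, max((finish[d] for d in deps[task]), default=0.0))
        finish[task] = start + dur[task]
        heapq.heappush(procs, (finish[task], p))
        done.add(task)
        print(f"{task} on P{p}: start {start}, finish {finish[task]}")
    print("makespan:", max(finish.values()))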
Digital-analog quantum simulation of generalized Dicke models with superconducting circuits
NASA Astrophysics Data System (ADS)
Lamata, Lucas
2017-03-01
We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits.
Knight, Jason S; Luo, Wei; O'Dell, Alexander A; Yalavarthi, Srilakshmi; Zhao, Wenpu; Subramanian, Venkataraman; Guo, Chiao; Grenn, Robert C; Thompson, Paul R; Eitzman, Daniel T; Kaplan, Mariana J
2014-03-14
Neutrophil extracellular trap (NET) formation promotes vascular damage, thrombosis, and activation of interferon-α-producing plasmacytoid dendritic cells in diseased arteries. Peptidylarginine deiminase inhibition is a strategy that can decrease in vivo NET formation. To test whether peptidylarginine deiminase inhibition, a novel approach to targeting arterial disease, can reduce vascular damage and inhibit innate immune responses in murine models of atherosclerosis. Apolipoprotein-E (Apoe)(-/-) mice demonstrated enhanced NET formation, developed autoantibodies to NETs, and expressed high levels of interferon-α in diseased arteries. Apoe(-/-) mice were treated for 11 weeks with daily injections of Cl-amidine, a peptidylarginine deiminase inhibitor. Peptidylarginine deiminase inhibition blocked NET formation, reduced atherosclerotic lesion area, and delayed time to carotid artery thrombosis in a photochemical injury model. Decreases in atherosclerosis burden were accompanied by reduced recruitment of netting neutrophils and macrophages to arteries, as well as by reduced arterial interferon-α expression. Pharmacological interventions that block NET formation can reduce atherosclerosis burden and arterial thrombosis in murine systems. These results support a role for aberrant NET formation in the pathogenesis of atherosclerosis through modulation of innate immune responses.
A novel partitioning method for block-structured adaptive meshes
NASA Astrophysics Data System (ADS)
Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-07-01
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
Nagata, Jun; Watanabe, Jun; Nagata, Masato; Sawatsubashi, Yusuke; Akiyama, Masaki; Tajima, Takehide; Arase, Koichi; Minagawa, Noritaka; Torigoe, Takayuki; Nakayama, Yoshifumi; Horishita, Reiko; Kida, Kentaro; Hamada, Kotaro; Hirata, Keiji
2017-08-01
A laparoscopic approach for inguinal hernia repair is now considered the gold standard. Laparoscopic surgery is associated with a significant reduction in postoperative pain. Epidural analgesia cannot be used in patients with perioperative anticoagulant therapy because of complications such as epidural hematoma. As such, regional anesthetic techniques, such as ultrasound-guided rectus sheath block and transversus abdominis plane block, have become increasingly popular. However, even these anesthetic techniques have potential complications, such as rectus sheath hematoma, if vessels are damaged. We report the use of a transperitoneal laparoscopic approach for rectus sheath block and transversus abdominis plane block as a novel anesthetic procedure. An 81-year-old woman with direct inguinal hernia underwent laparoscopic transabdominal preperitoneal inguinal repair. Epidural anesthesia was not performed because anticoagulant therapy was administered. A Peti-needle™ was delivered through the port, and levobupivacaine was injected through the peritoneum. Surgery was performed successfully, and the anesthetic technique did not affect completion of the operative procedure. The patient was discharged without any complications. This technique was feasible, and the procedure was performed safely. Our novel analgesia technique has potential use as a standard postoperative regimen in various laparoscopic surgeries. Additional prospective studies to compare it with other techniques are required. © 2017 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and John Wiley & Sons Australia, Ltd.
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be planar, allows data from natural networks to be used to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
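A toy version of the first fragmentation scenario described above: at each step a fixed proportion of blocks splits in two at a random point, and the resulting volume distribution can then be inspected. The 1D split rule and the parameters are illustrative assumptions, not OBSIFRAC's polyhedral geometry engine.

    import random

    # Toy 1D fragmentation: each step, a fixed proportion of blocks is split in
    # two at a uniform random point. Proportion and step count are illustrative.
    random.seed(0)
    blocks = [1.0]                       # start from one unit-volume block
    for _ in range(12):                  # fragmentation steps
        n_split = max(1, int(0.5 * len(blocks)))
        random.shuffle(blocks)
        to_split, rest = blocks[:n_split], blocks[n_split:]
        for v in to_split:
            f = random.random()
            rest += [f * v, (1 - f) * v]
        blocks = rest

    blocks.sort()
    print(f"{len(blocks)} blocks, total volume {sum(blocks):.3f}")
    print("smallest / median / largest:",
          f"{blocks[0]:.2e} / {blocks[len(blocks) // 2]:.2e} / {blocks[-1]:.2e}")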
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
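A highly simplified sketch of the pipeline's shape (register frames to a reference, average, then Wiener-deconvolve with an assumed PSF): whole-frame integer shifts stand in for per-block motion estimation, and the Gaussian PSF and noise-to-signal ratio are illustrative assumptions, not the paper's parametric PSF model.

    import numpy as np

    # Simplified turbulence mitigation: (1) register frames by integer shift via
    # FFT cross-correlation, (2) average, (3) Wiener filter with an assumed PSF.
    rng = np.random.default_rng(2)
    ref = rng.random((64, 64))                        # stand-in "scene"
    shifts = [(1, -2), (0, 3), (-1, 1)]
    frames = [np.roll(ref, s, axis=(0, 1)) + 0.05 * rng.standard_normal(ref.shape)
              for s in shifts]

    def register(frame, ref):
        # integer-shift registration via the cross-correlation peak
        xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        return np.roll(frame, (dy, dx), axis=(0, 1))

    avg = np.mean([register(f, ref) for f in frames], axis=0)

    # Wiener deconvolution with an assumed Gaussian PSF and noise-to-signal ratio.
    y, x = np.indices(avg.shape)
    psf = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 1.5 ** 2))
    psf /= psf.sum()
    H = np.fft.fft2(np.fft.ifftshift(psf))
    nsr = 0.01                                        # assumed noise-to-signal ratio
    restored = np.fft.ifft2(np.fft.fft2(avg) * np.conj(H) / (np.abs(H) ** 2 + nsr)).real
    print(restored.shape)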
Challenges associated with nerve conduction block using kilohertz electrical stimulation
NASA Astrophysics Data System (ADS)
Patel, Yogi A.; Butera, Robert J.
2018-06-01
Neuromodulation therapies, which electrically stimulate parts of the nervous system, have traditionally attempted to activate neurons or axons to restore function or alleviate disease symptoms. In stark contrast to this approach is inhibiting neural activity to relieve disease symptoms and/or restore homeostasis. One potential approach is kilohertz electrical stimulation (KES) of peripheral nerves—which enables a rapid, reversible, and localized block of conduction. This review highlights the existing scientific and clinical utility of KES and discusses the technical and physiological challenges that must be addressed for successful translation of KES nerve conduction block therapies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Wang, Cong; Winterfeld, Philip
An efficient modeling approach is described for incorporating arbitrary 3D, discrete fractures, such as hydraulic fractures or faults, into modeling fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently from the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid generated without including the 3D discrete fractures beforehand. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. This embedded fracture modeling approach is very promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured, local refining procedures, and increases computational efficiency by simplifying the Jacobian matrix size and sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.
Blockbuster Ideas: Activities for Breaking Up Block Periods.
ERIC Educational Resources Information Center
Bohince, Judy
1996-01-01
Describes how to approach block scheduling of science classes. Discusses the planning process, specific activities that work well in longer science classes, and techniques for motivating students. (DDR)
Calibration and validation of rockfall models
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.
2013-04-01
Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation, starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy), and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore four calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks; (2) the percentage of trajectories passing through the buffer of the actual rockfall path; (3) the mean distance between the location of arrest of each simulated block and the location of the nearest actual block; (4) the mean distance between the location of detachment of each simulated block and the location of detachment of the actual block located closest to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent the model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems to be optimal for model calibration, especially when using parameter estimation and optimization modelling software for automated calibration.
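A minimal sketch of measure (3) above: the mean distance from each simulated arrest point to the nearest actual block. All coordinates are synthetic placeholders.

    import numpy as np

    # Measure (3): mean distance from each simulated arrest location to the
    # nearest mapped (actual) block. Coordinates are synthetic placeholders.
    rng = np.random.default_rng(3)
    actual = rng.uniform(0, 100, size=(25, 2))      # mapped block positions (m)
    simulated = rng.uniform(0, 100, size=(200, 2))  # simulated arrest points (m)

    # pairwise distances, shape (n_simulated, n_actual)
    d = np.linalg.norm(simulated[:, None, :] - actual[None, :, :], axis=-1)
    score = d.min(axis=1).mean()                    # lower means better agreement
    print(f"mean nearest-actual distance: {score:.1f} m")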
Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin
2016-08-01
In the Torus approach, observations are projected onto nominal orbits with constant radius and inclination, and lumped coefficients provide a linear relationship between observations and spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. The Earth's gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the median degree error is smaller than 10⁻¹⁸, which shows the effectiveness of the Torus approach. EGM2008 is employed as a reference model and the gravity field model is resolved using simulated observations without noise given on GOCE orbits of 61 days. The error from reduction and interpolation can be mitigated by iterations. Due to the polar gap, the precision of low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^(1/2), which is compared to that from the direct method. In conclusion, it is demonstrated that the Torus approach is a valid method for processing the massive amount of GOCE gravity gradients.
A new rapid method for rockfall energies and distances estimation
NASA Astrophysics Data System (ADS)
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed the derivation of empirical laws that relate impact energies and distances at the base to block and slope features. The validation of the proposed approach was conducted by comparing predictions to experimental data collected in the field and gathered from the scientific literature. The method can be used for both natural and constructed slopes and easily extended to more complicated and articulated slope geometries. The study shows its great potential for a quick qualitative hazard assessment, providing an indication of the impact energy and horizontal distance of the first impact at the base of a rock cliff. Nevertheless, its application cannot substitute a more detailed quantitative analysis required for site-specific design of mitigation measures. Acknowledgements The authors gratefully acknowledge the financial support of the Australian Coal Association Research Program (ACARP). References Dorren, L.K.A. (2003) A review of rockfall mechanics and modelling approaches, Progress in Physical Geography 27(1), 69-87. Agliardi, F., Crosta, G.B., Frattini, P. (2009) Integrating rockfall risk assessment and countermeasure design by 3D modelling techniques. Natural Hazards and Earth System Sciences 9(4), 1059-1073. Ferrari, F., Thoeni, K., Giacomini, A., Lambert, C. (2016) A rapid approach to estimate the rockfall energies and distances at the base of rock cliffs. Georisk, DOI: 10.1080/17499518.2016.1139729.
Zhang, Hong; Ren, Lei; Kong, Vic; Giles, William; Zhang, You; Jin, Jian-Yue
2016-01-01
A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and uses an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose due to the grid blocking half of the incident radiation. The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least squares regression of these observations to determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal-to-noise ratio (SNR), which quantifies the difference between the recovered CBCT image and the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench-top CBCT system. In the simulated SMOG experiment, the SNRs increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB, compared with a conventional interpolation (inpainting) method, for a projection and the reconstructed 3D image, respectively, suggesting that the IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors successfully reconstructed a CBCT image using the IPSF-SMOG approach. The detailed geometric features in the Catphan phantom were mostly recovered according to visual evaluation, and scatter-related artifacts, such as cupping artifacts, were almost completely removed. The IPSF-SMOG approach is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
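For a constant-signal model, a weighted least-squares fit of several approximate observations reduces to their weighted mean, which is the essence of the IPSF estimate; a minimal sketch with hypothetical values and weights (not the authors' implementation):

```python
# Minimal sketch of the IPSF idea: each blocked pixel has several
# approximate observations from the two adjacent gantry angles; WLS with
# a constant model reduces to a weighted mean, with weights expressing
# how much each neighbouring observation is trusted.
import numpy as np

def ipsf_estimate(observations, weights):
    """observations, weights: arrays of shape (n_obs, ...) per blocked pixel.
    The WLS fit of a constant signal is the weight-normalised mean."""
    w = np.asarray(weights, dtype=float)
    y = np.asarray(observations, dtype=float)
    return (w * y).sum(axis=0) / w.sum(axis=0)

# Hypothetical pixel with three paired observations from adjacent angles.
y = np.array([104.0, 98.0, 110.0])
w = np.array([0.5, 0.3, 0.2])      # e.g. inverse-variance style weights
print(ipsf_estimate(y, w))         # -> 103.4
```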
Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.
2017-01-01
Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacings. The therapeutic parameters of these blocks, namely the therapeutic ratio (TR) and geometrical sparing factor (GSF), were calculated using two different radiobiological models, the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) was used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in a water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF ~ 0.90). The measured dosimetric characteristics of the optimum Grid blocks, including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio, were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
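The ROTBA is a purely geometric quantity; assuming circular holes of diameter d on a square lattice with center-to-center spacing s (the actual hole arrangement in the study's blocks may differ), it can be sketched as follows:

```python
# Sketch of the ROTBA geometry (square lattice of circular holes assumed):
# the open fraction per unit cell is pi*d^2 / (4*s^2), and ROTBA is the
# open area divided by the blocked area.
import math

def rotba(d_cm, s_cm):
    open_frac = math.pi * d_cm**2 / (4.0 * s_cm**2)
    return open_frac / (1.0 - open_frac)

# Parameters near the reported optimum: d = 1.0-1.25 cm, s = 1.7-1.8 cm.
for d in (1.0, 1.25):
    for s in (1.7, 1.8):
        print(f"d={d} cm, s={s} cm -> ROTBA = {rotba(d, s):.2f}")
```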
A casemix model for estimating the impact of hospital access block on the emergency department.
Stuart, Peter
2004-06-01
To determine the ED activity and costs resulting from access block. A casemix model (AWOOS) was developed to measure activity due to access block. Using data from four hospitals between 1998 and 2002, ED activity was measured using the urgency and disposition group (UDG) casemix model and the AWOOS model to determine the change in ED activity due to access block. Whilst the mean length of stay in the ED (admitted patients) increased by 93% between 1998 and 2002, mean UDG activity increased by 0.63%, compared with a mean increase in AWOOS activity of 24.5%. The 23.9% difference between UDG and AWOOS activity represents the (unmeasured) increase in ED activity and costs for the period 1998-2002 resulting from access block. The UDG system significantly underestimates the activity in EDs experiencing marked access block.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.
In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric–isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.
An Aerodynamic Simulation Process for Iced Lifting Surfaces and Associated Issues
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Vickerman, Mary B.; Hackenberg, Anthony W.; Rigby, David L.
2003-01-01
This paper discusses technologies and software tools that are being implemented in a software toolkit currently under development at NASA Glenn Research Center. Its purpose is to help study the effects of icing on airfoil performance and assist with the aerodynamic simulation process which consists of characterization and modeling of ice geometry, application of block topology and grid generation, and flow simulation. Tools and technologies for each task have been carefully chosen based on their contribution to the overall process. For the geometry characterization and modeling, we have chosen an interactive rather than automatic process in order to handle numerous ice shapes. An Appendix presents features of a software toolkit developed to support the interactive process. Approaches taken for the generation of block topology and grids, and flow simulation, though not yet implemented in the software, are discussed with reasons for why particular methods are chosen. Some of the issues that need to be addressed and discussed by the icing community are also included.
Global image registration using a symmetric block-matching approach
Modat, Marc; Cash, David M.; Daga, Pankaj; Winston, Gavin P.; Duncan, John S.; Ourselin, Sébastien
2014-01-01
Most medical image registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package. PMID:26158035
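Least-trimmed-squares regression can be sketched with simple concentration steps, an illustration of the robust-fitting idea rather than the NiftyReg implementation:

```python
# Minimal least-trimmed-squares sketch: repeated "concentration" steps
# keep the h points with the smallest squared residuals and refit, which
# makes a global transform estimate robust to outlier block matches.
import numpy as np

def lts_fit(X, y, h, n_steps=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=h, replace=False)
    for _ in range(n_steps):
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        resid = (y - X @ beta) ** 2
        new_idx = np.argsort(resid)[:h]       # keep best-fitting h samples
        if np.array_equal(np.sort(new_idx), np.sort(idx)):
            break
        idx = new_idx
    return beta

# Hypothetical 1D affine example with 20% gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:40] += rng.uniform(5, 15, 40)              # corrupted block matches
X = np.column_stack([x, np.ones_like(x)])
print(lts_fit(X, y, h=int(0.6 * len(y))))     # approx. [2.0, 1.0]
```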
Alfred, Vinu Mervick; Srinivasan, Gnanasekaran; Zachariah, Mamie
2018-01-01
The supraclavicular approach is considered to be the easiest and most effective approach to block the brachial plexus for upper limb surgeries. The classical approach using the anatomical landmark technique was associated with higher failure rates and complications. Ultrasonography (USG) guidance and peripheral nerve stimulators (PNS) have improved the success rates and safety margin. The aim of the present study is to compare USG with PNS in supraclavicular brachial plexus block for upper limb surgeries with respect to the onset of motor and sensory blockade, total duration of blockade, procedure time, and complications. Prospective, randomized controlled study. Sixty patients aged above 18 years scheduled for elective upper limb surgery were randomly allocated into two groups. Group A patients received supraclavicular brachial plexus block under ultrasound guidance, and in Group B patients, a PNS was used. In both groups, a local anesthetic mixture consisting of 15 ml of 0.5% bupivacaine and 10 ml of 2% lignocaine with 1:200,000 adrenaline was used. An independent t-test was used to compare means between groups, and a Chi-square test for categorical variables. The procedure time was shorter with USG (11.57 ± 2.75 min) compared with PNS (21.73 ± 4.84 min). The onset time of sensory block (12.83 ± 3.64 min vs. 16 ± 3.57 min) and onset of motor block (23 ± 4.27 min vs. 27 ± 3.85 min) were significantly shorter in Group A compared with Group B (P < 0.05). The duration of sensory block was significantly prolonged in Group A (8.00 ± 0.891 h) compared with Group B (7.25 ± 1.418 h). None of the patients in either group developed any complications. The ultrasound-guided supraclavicular brachial plexus block can be performed more quickly, with a faster onset of sensory and motor block, than the nerve stimulator technique.
Analysis of Fault Spacing in Thrust-Belt Wedges Using Numerical Modeling
NASA Astrophysics Data System (ADS)
Regensburger, P. V.; Ito, G.
2017-12-01
Numerical modeling is invaluable in studying the mechanical processes governing the evolution of geologic features such as thrust-belt wedges. The mechanisms controlling thrust fault spacing in wedges are not well understood. Our numerical model treats the thrust belt as a visco-elastic-plastic continuum and uses a finite-difference, marker-in-cell method to solve for conservation of mass and momentum. From these conservation laws, stress is calculated, and Byerlee's law is used to determine the shear stress required for a fault to form. Each model consists of a layer of crust, initially 3 km thick, carried on top of a basal décollement, which moves at a constant speed towards a rigid backstop. A series of models was run with varied material properties, focusing on the angle of basal friction at the décollement, the angle of friction within the crust, and the cohesion of the crust. We investigate how these properties affect the spacing between thrusts that have the greatest time-integrated history of slip and therefore the greatest effect on the large-scale undulations in surface topography. The surface positions of these faults, which extend through most of the crustal layer, are identifiable as local maxima in the positive curvature of surface topography. Tracking the temporal evolution of faults, we find that thrust blocks are widest when they first form at the front of the wedge and then tend to contract over time as more crustal material is carried to the wedge. Within each model, thrust blocks form with similar initial widths, but individual thrust blocks develop differently and may approach an asymptotic width over time. The median of thrust block widths across the whole wedge tends to decrease with time. Median fault spacing shows a positive correlation with both wedge cohesion and internal friction. In contrast, median fault spacing exhibits a negative correlation with basal friction at small angles (<17°) and a positive correlation at larger angles. From these correlations, we will derive scaling laws that can be used to predict fault spacing in thrust-belt wedges.
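Byerlee's law itself is a simple bilinear friction criterion; a sketch using the classic laboratory coefficients follows (the study varies its own friction and cohesion parameters rather than fixing these values):

```python
# Sketch of a Byerlee-type frictional yield criterion, as used to decide
# when a fault forms. Coefficients below are the classic laboratory
# values (Byerlee, 1978), assumed here for illustration.
def byerlee_shear_strength(sigma_n_mpa):
    """Shear stress (MPa) needed for frictional sliding at normal
    stress sigma_n (MPa)."""
    if sigma_n_mpa < 200.0:
        return 0.85 * sigma_n_mpa
    return 50.0 + 0.6 * sigma_n_mpa

# A model cell fails when the resolved shear stress exceeds this strength.
for sn in (50.0, 150.0, 400.0):
    print(sn, "->", byerlee_shear_strength(sn))
```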
Fuel savings potential of the NASA Advanced Turboprop Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlow, J.B. Jr.; Sievers, G.K.
1984-01-01
The NASA Advanced Turboprop (ATP) Program is directed at developing new technology for highly loaded, multibladed propellers for use at Mach 0.65 to 0.85 and at altitudes compatible with the air transport system requirements. Advanced turboprop engines offer the potential of 15 to 30 percent savings in aircraft block fuel relative to advanced turbofan engines (50 to 60 percent savings over today's turbofan fleet). The concept, propulsive efficiency gains, block fuel savings and other benefits, and the program objectives through a systems approach are described. Current program status and major accomplishments in both single rotation and counter rotation propeller technology are addressed. The overall program from scale model wind tunnel tests to large scale flight tests on testbed aircraft is discussed.
Wheeler, David C; Czarnota, Jenna; Jones, Resa M
2017-01-01
Socioeconomic status (SES) is often considered a risk factor for health outcomes. SES is typically measured using individual variables (educational attainment, income, housing, and employment) or a composite of these variables. Approaches to building the composite variable include using equal weights for each variable or estimating the weights with principal components analysis or factor analysis. However, these methods do not consider the relationship between the outcome and the SES variables when constructing the index. In this project, we used weighted quantile sum (WQS) regression to estimate an area-level SES index and its effect in a model of colonoscopy screening adherence in the Minnesota-Wisconsin Metropolitan Statistical Area. We considered several specifications of the SES index, including different spatial scales (e.g., census block group level, tract level) for the SES variables. We found a significant positive association (odds ratio = 1.17, 95% CI: 1.15-1.19) between the SES index and colonoscopy adherence in the best-fitting model. The model with the best goodness-of-fit included a multi-scale SES index with 10 variables at the block group level and one at the tract level, with home ownership, race, and income among the most important variables. Contrary to previous index construction, our results were not consistent with an assumption of equal importance of variables in the SES index when explaining colonoscopy screening adherence. Our approach is applicable in any study where an SES index is considered as a variable in a regression model and the weights for the SES variables are not known in advance.
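A heavily simplified sketch of the WQS idea follows; real WQS implementations use bootstrap ensembles and train/validation splits, and the data, variable count, and optimizer settings here are assumptions for illustration:

```python
# Simplified WQS sketch: each SES variable is scored into quartiles, the
# index is a simplex-constrained weighted sum of the scores, and the
# weights plus the index effect are fit jointly by maximum likelihood
# for a binary outcome such as screening adherence.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_wqs(Q, y):
    """Q: (n, p) quartile scores (0..3); y: binary outcome (0/1)."""
    n, p = Q.shape

    def nll(theta):
        b0, b1 = theta[:2]
        w = np.abs(theta[2:])
        w = w / w.sum()                      # weights live on the simplex
        pr = np.clip(expit(b0 + b1 * (Q @ w)), 1e-9, 1 - 1e-9)
        return -(y * np.log(pr) + (1 - y) * np.log(1 - pr)).sum()

    x0 = np.concatenate([[0.0, 0.1], np.full(p, 1.0 / p)])
    res = minimize(nll, x0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6})
    w = np.abs(res.x[2:]); w /= w.sum()
    return res.x[1], w                       # index effect and SES weights

rng = np.random.default_rng(2)
Q = rng.integers(0, 4, size=(500, 5)).astype(float)
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y = (rng.random(500) < expit(-1.0 + 0.8 * (Q @ true_w))).astype(float)
beta1, weights = fit_wqs(Q, y)
```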
NASA Astrophysics Data System (ADS)
Moreno Ródenas, Antonio Manuel; Cecinati, Francesca; ten Veldhuis, Marie-Claire; Langeveld, Jeroen; Clemens, Francois
2016-04-01
Maintaining water quality standards in highly urbanised hydrological catchments is a worldwide challenge. Water management authorities struggle to cope with a changing climate and an increase in pollution pressures. Water quality modelling has been used as a decision support tool for investment and regulatory developments. This approach led to the development of integrated catchment models (ICM), which account for the link between the urban/rural hydrology and the in-river pollutant dynamics. In the modelled system, rainfall triggers the drainage systems of urban areas scattered along a river. When flow exceeds the sewer infrastructure capacity, untreated wastewater enters the natural system through combined sewer overflows. This results in a degradation of river water quality, depending on the magnitude of the emission and the river conditions. Thus, being able to represent these dynamics in the modelling process is key to a correct assessment of water quality. In many urbanised hydrological systems the distances between draining sewer infrastructures go beyond the de-correlation length of rainfall processes, especially for convective summer storms. Hence, the spatial and temporal scales of the selected rainfall inputs are expected to affect water quality dynamics. The objective of this work is to evaluate how the use of rainfall data from different sources, with different space-time characteristics, affects modelled output concentrations of dissolved oxygen in a simplified ICM. The study area is located at the Dommel, a relatively small and sensitive river flowing through the city of Eindhoven (The Netherlands). This river stretch receives the discharge of the 750,000 p.e. WWTP of Eindhoven and of over 200 combined sewer overflows scattered along its length. A pseudo-distributed water quality model has been developed in WEST (mikedhi.com); this is a lumped, physically based model that accounts for urban drainage processes, the WWTP and river dynamics for several pollutant typologies. Three rainfall products are tested: 1) block kriging of a single reliable rain gauge, 2) a block kriging product from a network of 13 rain gauges, and 3) universal block kriging with 13 rain gauges and KNMI weather radar estimates as a covariate. Different temporal accumulation levels are compared, ranging from 10 min to 1 h. A geostatistical approach is used to allocate the rainfall prediction to each of the urban hydrological units composing the model. The change in model performance is then assessed by contrasting it with dissolved oxygen monitoring data for a series of events.
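A minimal point-kriging sketch conveys the geostatistical allocation step; the study uses block kriging, fitted variograms, and radar covariates, whereas the exponential covariance and gauge layout below are assumptions:

```python
# Minimal ordinary-kriging sketch (point kriging for clarity): solve the
# kriging system for weights given an assumed exponential covariance of
# the rainfall field, then predict at one urban hydrological unit.
import numpy as np

def exp_cov(h, sill=1.0, range_km=10.0):
    return sill * np.exp(-h / range_km)

def ordinary_krige(xy_obs, z_obs, xy_tgt):
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = exp_cov(d)
    K[n, :n] = K[:n, n] = 1.0                # unbiasedness constraint
    rhs = np.append(exp_cov(np.linalg.norm(xy_obs - xy_tgt, axis=1)), 1.0)
    w = np.linalg.solve(K, rhs)[:n]
    return w @ z_obs

# Hypothetical: 13 gauges around a catchment, predict at one location.
rng = np.random.default_rng(3)
gauges = rng.uniform(0, 30, size=(13, 2))    # km coordinates
rain = rng.gamma(2.0, 1.5, size=13)          # mm per time step
print(ordinary_krige(gauges, rain, np.array([15.0, 15.0])))
```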
Vollert, Jan; Magerl, Walter; Baron, Ralf; Binder, Andreas; Enax-Krumova, Elena K; Geisslinger, Gerd; Gierthmühlen, Janne; Henrich, Florian; Hüllemann, Philipp; Klein, Thomas; Lötsch, Jörn; Maier, Christoph; Oertel, Bruno; Schuh-Hofer, Sigrid; Tölle, Thomas R; Treede, Rolf-Detlef
2018-06-01
As an indirect approach to relate previously identified sensory phenotypes of patients suffering from peripheral neuropathic pain to underlying mechanisms, we used a published sorting algorithm to estimate the prevalence of denervation, peripheral and central sensitization in 657 healthy subjects undergoing experimental models of nerve block (NB) (compression block and topical lidocaine), primary hyperalgesia (PH) (sunburn and topical capsaicin), or secondary hyperalgesia (intradermal capsaicin and electrical high-frequency stimulation), and in 902 patients suffering from neuropathic pain. Some of the data have been previously published. Randomized split-half analysis verified a good concordance with a priori mechanistic sensory profile assignment in the training (79%, Cohen κ = 0.54, n = 265) and the test set (81%, Cohen κ = 0.56, n = 279). Nerve blocks were characterized by pronounced thermal and mechanical sensory loss, but also mild pinprick hyperalgesia and paradoxical heat sensations. Primary hyperalgesia was characterized by pronounced gain for heat, pressure and pinprick pain, and mild thermal sensory loss. Secondary hyperalgesia was characterized by pronounced pinprick hyperalgesia and mild thermal sensory loss. Topical lidocaine plus topical capsaicin induced a combined phenotype of NB plus PH. Topical menthol was the only model with significant cold hyperalgesia. Sorting of the 902 patients into these mechanistic phenotypes led to a similar distribution as the original heuristic clustering (65% identity, Cohen κ = 0.44), but the denervation phenotype was more frequent than in heuristic clustering. These data suggest that sorting according to human surrogate models may be useful for mechanism-based stratification of neuropathic pain patients for future clinical trials, as encouraged by the European Medicines Agency.
Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering
NASA Astrophysics Data System (ADS)
Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki
2016-11-01
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from the information of both the network structure and the vertex attribute data. Our approach is based on a Bayesian formulation that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
The jABC Approach to Rigorous Collaborative Development of SCM Applications
NASA Astrophysics Data System (ADS)
Hörmann, Martina; Margaria, Tiziana; Mender, Thomas; Nagel, Ralf; Steffen, Bernhard; Trinh, Hong
Our approach to the model-driven collaborative design of IKEA's P3 Delivery Management Process uses the jABC [9] for model driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building-blocks into (flow-) graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out in the operative practice.
NASA Technical Reports Server (NTRS)
Davidson, Paul; Pineda, Evan J.; Heinrich, Christian; Waas, Anthony M.
2013-01-01
The open hole tensile and compressive strengths are important design parameters in qualifying fiber reinforced laminates for a wide variety of structural applications in the aerospace industry. In this paper, we present a unified model that can be used for predicting both these strengths (tensile and compressive) using the same set of coupon level, material property data. As a prelude to the unified computational model that follows, simplified approaches, referred to as "zeroth order", "first order", etc. with increasing levels of fidelity are first presented. The results and methods presented are practical and validated against experimental data. They serve as an introductory step in establishing a virtual building block, bottom-up approach to designing future airframe structures with composite materials. The results are useful for aerospace design engineers, particularly those that deal with airframe design.
Lunardi, Andrea; Ala, Ugo; Epping, Mirjam T.; Salmena, Leonardo; Clohessy, John G.; Webster, Kaitlyn A.; Wang, Guocan; Mazzucchelli, Roberta; Bianconi, Maristella; Stack, Edward C.; Lis, Rosina; Patnaik, Akash; Cantley, Lewis C.; Bubley, Glenn; Cordon-Cardo, Carlos; Gerald, William L.; Montironi, Rodolfo; Signoretti, Sabina; Loda, Massimo; Nardella, Caterina; Pandolfi, Pier Paolo
2013-01-01
Here we report an integrated analysis that leverages data from treatment of genetic mouse models of prostate cancer along with clinical data from patients to elucidate new mechanisms of castration resistance. We show that castration counteracts tumor progression in a Pten-loss driven mouse model of prostate cancer through the induction of apoptosis and proliferation block. Conversely, this response is bypassed upon deletion of either Trp53 or Lrf together with Pten, leading to the development of castration resistant prostate cancer (CRPC). Mechanistically, the integrated acquisition of data from mouse models and patients identifies the expression patterns of XAF1-XIAP/SRD5A1 as a predictive and actionable signature for CRPC. Importantly, we show that combined inhibition of XIAP, SRD5A1, and AR pathways overcomes castration resistance. Thus, our co-clinical approach facilitates stratification of patients and the development of tailored and innovative therapeutic treatments. PMID:23727860
Wacker, Soren; Noskov, Sergei Yu
2018-05-01
Drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose to or protect from drug-induced TdP. One unexpected outcome of the compulsory hERG screening implemented in the USA and the European Union is the accumulation of large datasets of IC50 values for various molecules entering the market. These abundant data now allow the construction of predictive machine-learning (ML) models. Novel ML algorithms and techniques promise accuracy in determining IC50 values of hERG blockade that is comparable to or surpasses that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we have developed a computational platform integrating various workflows for quantitative structure-activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms, we computed IC50 values for a large dataset of molecules and compared them to automated patch clamp measurements of hERG blocking and non-blocking drugs, an industry gold standard in studies of cardiotoxicity. The optimal protocol, with high sensitivity and predictive power, is based on the novel eXtreme gradient boosting (XGBoost) algorithm. The ML platform with XGBoost displays excellent performance, with a coefficient of determination of up to R^2 ~ 0.8 for pIC50 values in evaluation datasets, surpassing other metrics and approaches available in the literature. Ultimately, the ML-based platform developed in our work is a scalable framework with automation potential to interact with other developing technologies in the cardiotoxicity field, including high-throughput electrophysiology measurements delivering large datasets of profiled drugs, and rapid synthesis and drug development via progress in synthetic biology.
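A sketch of this kind of pipeline, with placeholder descriptors standing in for the ChEMBL-derived features (the synthetic data and model settings are illustrative, not the authors' tuned protocol):

```python
# Sketch of gradient-boosted regression of pIC50 values for hERG block
# with XGBoost; features are random placeholders for molecular
# descriptors, and the target is synthetic.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 128))                      # placeholder descriptors
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 2000)   # synthetic pIC50

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05,
                     subsample=0.8)
model.fit(X_tr, y_tr)
print("R^2 on held-out set:", r2_score(y_te, model.predict(X_te)))
```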
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first-P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths; setting the paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
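The final step has a compact discrete form: if s_a and s_b are sensitivity vectors holding the path lengths through each model node and C is the model covariance matrix, the travel-time covariance is the bilinear form s_a^T C s_b. A toy-sized sketch (the real matrix is out of core; sizes and values below are placeholders):

```python
# Sketch of travel-time covariance from a model covariance matrix:
# cov(t_a, t_b) = s_a^T C s_b; setting the two paths equal gives the
# travel-time variance for a single ray path.
import numpy as np

def travel_time_cov(s_a, s_b, C):
    return s_a @ C @ s_b

rng = np.random.default_rng(5)
n = 1000                                   # toy model size (real: ~5e5 nodes)
A = rng.normal(size=(n, n)) / n
C = A @ A.T + 1e-3 * np.eye(n)             # synthetic SPD covariance
s = np.zeros(n)                            # ray touches only a few nodes
s[rng.choice(n, 40, replace=False)] = rng.uniform(1, 5, 40)
print("travel-time variance:", travel_time_cov(s, s, C))
```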
Effects of strychnine on the sodium conductance of the frog node of Ranvier
1977-01-01
Strychnine blocks sodium conductance in the frog node of Ranvier. This block was studied by reducing and slowing sodium inactivation with scorpion venom. The block is voltage and time dependent. The more positive the axoplasm the greater the block and the faster the approach to equilibrium. Some evidence is presented suggesting that only open channels can be blocked. The block is reduced by raising external sodium or lithium but not impermeant cations. A quaternary derivative of strychnine was synthesized and found to have the same action only when applied intracellularly. We conclude that strychnine blocks sodium channels by a mechanism analogous to that by which it blocks potassium channels. The potassium channel block had previously been found to be identical to that by tetraethylammonium ion derivatives. In addition, strychnine resembles procaine and its derivatives in both its structure and the mechanism of sodium channel block. PMID:302321
Ficklin, Travis; Lund, Robin; Schipper, Megan
2014-01-01
The purpose of this study was to compare traditional and swing blocking techniques on center of mass (COM) projectile motion and effective blocking area in nine healthy Division I female volleyball players. Two high-definition (1080 p) video cameras (60 Hz) were used to collect two-dimensional variables from two separate views. One was placed perpendicular to the plane of the net and the other was directed along the top of the net, and were used to estimate COM locations and blocking area in a plane parallel to the net and hand penetration through the plane of the net respectively. Video of both the traditional and swing techniques were digitized and kinematic variables were calculated. Paired samples t-tests indicated that the swing technique resulted in greater (p < 0.05) vertical and horizontal takeoff velocities (vy and vx), jump height (H), duration of the block (tBLOCK), blocking coverage during the block (C) as well as hand penetration above and through the net’s plane (YPEN, ZPEN). The traditional technique had significantly greater approach time (tAPP). The results of this study suggest that the swing technique results in both greater jump height and effective blocking area. However, the shorter tAPP that occurs with swing is associated with longer times in the air during the block which may reduce the ability of the athlete to make adjustments to attacks designed to misdirect the defense. Key Points Swing blocking technique has greater jump height, effective blocking area, hand penetration, horizontal and vertical takeoff velocity, and has a shorter time of approach. Despite these advantages, there may be more potential for mistiming blocks and having erratic deflections of the ball after contact when using the swing technique. Coaches should take more than simple jump height and hand penetration into account when deciding which technique to employ. PMID:24570609
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
Catalyst for Expanding Human Spaceflight
NASA Technical Reports Server (NTRS)
Lueders, Kathryn L.
2014-01-01
History supplies us with many models of how and how not to commercialize an industry. This presentation draws parallels between industries with government roots, like the railroad, air transport, communications, and the internet, and NASA's Commercial Crew Program. In these examples, government served as a catalyst for what became a booming industry. The building block approach the Commercial Crew Program is taking is very simple: establish a need, lay the groundwork, and enable industry and a legal framework.
From nonfinite to finite 1D arrays of origami tiles.
Wu, Tsai Chin; Rahman, Masudur; Norton, Michael L
2014-06-17
CONSPECTUS: DNA based nanotechnology provides a basis for high-resolution fabrication of objects almost without physical size limitations. However, the pathway to large-scale production of large objects is currently unclear. Operationally, one method forward is to use high information content, large building blocks, which can be generated with high yield and reproducibility. Although flat DNA origami naturally invites comparison to pixels in zero, one, and two dimensions and voxels in three dimensions and has provided an excellent mechanism for generating blocks of significant size and complexity and a multitude of shapes, the field is young enough that a single "brick" has not become the standard platform used by the majority of researchers in the field. In this Account, we highlight factors we considered that led to our adoption of a cross-shaped, non-space-filling origami species, designed by Dr. Liu of the Seeman laboratory, as the building block ideal for use in the fabrication of finite one-dimensional arrays. Three approaches that can be employed for uniquely coding origami-origami linkages are presented. Such coding not only provides the energetics for tethering the species but also uniquely designates the relative orientation of the origami building blocks. The strength of the coding approach implemented in our laboratory is demonstrated using examples of oligomers ranging from finite multimers composed of four, six, and eight origami structures to semi-infinite polymers (100mers). Two approaches to finite array design and the series of assembly steps that each requires are discussed. The process of AFM observation for array characterization is presented as a critical case study. For these soft species, the array images do not simply present the solution phase geometry projected onto a two-dimensional surface. There are additional perturbations associated with fluidic forces associated with sample preparation. At this time, reconstruction of the "true" or average solution structures for blocks is more readily achieved using computer models than using direct imaging methods. The development of scalable 1D-origami arrays composed of uniquely addressable components is a logical, if not necessary, step in the evolution of higher order fully addressable structures. Our research into the fabrication of arrays has led us to generate a listing of several important areas of future endeavor. Of high importance is the re-enforcement of the mechanical properties of the building blocks and the organization of multiple arrays on a surface of technological importance. While addressing this short list of barriers to progress will prove challenging, coherent development along each of these lines of inquiry will accelerate the appearance of commercial scale molecular manufacturing.
Spatial distribution of block falls using volumetric GIS-decision-tree models
NASA Astrophysics Data System (ADS)
Abdallah, C.
2010-10-01
Block falls are a significant aspect of surficial instability, contributing to land and socio-economic losses through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault lines, karst type, soil type, distance to drainage lines, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. Block falls were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model, based on all considered parameters, explained 86% of the variability in field block fall measurements. Once pruned, it explained 50% of the variability in block fall volumes using just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block fall maps with classes ranging from nil (no block falls) to more than 4000 m³. The two maps match fairly well, with a coincidence value of 45%; either can be used to prioritize specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple and may also be applied to other areas (the choice of the un-pruned or pruned model depends on the availability of terrain parameters in a given area).
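A sketch of the un-pruned versus pruned tree comparison, using scikit-learn's cost-complexity pruning as a stand-in for the original software (the terrain encodings, synthetic volumes, and pruning strength are placeholders):

```python
# Sketch of un-pruned vs pruned regression trees for block fall volumes.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
n = 800
# Placeholder encodings of the 11 terrain parameters (categoricals as codes).
X = rng.normal(size=(n, 11))
y = np.exp(2.0 + X[:, 0] + 0.5 * X[:, 5] + rng.normal(0, 0.3, n))  # m^3

full_tree = DecisionTreeRegressor(random_state=0).fit(X, y)
# Cost-complexity pruning trades variance explained for a simpler tree
# that, as in the paper, ends up relying on a handful of parameters.
pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=10.0).fit(X, y)
print(full_tree.get_n_leaves(), "->", pruned.get_n_leaves())
```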
Kukreti, B M; Kumar, Pramod; Sharma, G K
2015-10-01
Exploratory drilling was undertaken in the Lostoin block, West Khasi Hills district of Meghalaya, based on the geological extension of the major uranium deposit in the basin. Gamma-ray logging of the drilled boreholes shows considerable subsurface mineralization in the block. However, environmental and exploration-related challenges (climatic and logistic constraints, limited core drilling, poor core recovery, etc.) severely restricted the study of uranium exploration index parameters for the block with a high degree of confidence. The present study examines these challenges and develops an integrated approach using representative sampling of reconnoitory boreholes in the block. Experimental findings confirm that the radioelements (K, Ra and Th) in the Lostoin block's uranium-hosting environment are geochemically coherent, as in the known block of the Mahadek basin, and uranium enrichment is confirmed by the low U-to-Th correlation index (0.268) of the hosting environment. A mineralized zone investigation in the block shows parent uranium (that is, the actual parent uranium concentration at a location, rather than a secondary concentration such as that of the daughter elements, which produce the signal in a total gamma-ray measurement) favoring uranium mineralization. The confidence parameters generated in the present study have implications for the assessment of the inferred category of uranium ore in the block and for setting up a road map for the systematic exploration of the large uranium potential occurring over extended areas in the basin amid the prevailing environmental and exploratory impediments.
Design of peptide mimetics to block pro-inflammatory functions of HA fragments.
Hauser-Kawaguchi, Alexandra; Luyt, Leonard G; Turley, Eva
2018-01-31
Hyaluronan is a simple extracellular matrix polysaccharide that actively regulates inflammation in tissue repair and disease processes. The native HA polymer, which is large (>500 kDa), contributes to the maintenance of homeostasis. In remodeling and diseased tissues, polymer size is strikingly polydisperse, ranging from <10 kDa to >500 kDa. In a diseased or stressed tissue context, both smaller HA fragments and high molecular weight HA polymers can acquire pro-inflammatory functions, which result in the activation of multiple receptors, triggering pro-inflammatory signaling to diverse stimuli. Peptide mimics that bind and scavenge HA fragments have been developed, which show efficacy in animal models of inflammation. These studies indicate both that HA fragments are key to driving inflammation and that scavenging these is a viable therapeutic approach to blunting inflammation in disease processes. This mini-review summarizes the peptide-based methods that have been reported to date for blocking HA signaling events as an anti-inflammatory therapeutic approach.
NASA Technical Reports Server (NTRS)
Flores, J.; Gundy, K.
1986-01-01
A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (or can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray-XMP processor. Simulations are also conducted for a different geometrical wing called WING C. All cases show good agreement with experimental data.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.
Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to magnetic resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on its distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components, and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step with gradient updates enforcing consistency with the acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared with k-t SPARSE-SENSE, block matching with spatial Fourier filtering, and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to the three reference methods. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging.
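The core transform, project, and back-map step can be sketched with scikit-learn's KernelPCA; this illustrates the idea only, not the authors' iterative reconstruction with k-space consistency (block sizes, kernel settings, and noise level are assumptions):

```python
# Conceptual sketch: a stack of similar image blocks is mapped with
# kernel PCA, projected onto its leading components, and mapped back,
# which suppresses incoherent artifacts within the block array.
import numpy as np
from sklearn.decomposition import KernelPCA

def denoise_block_array(blocks, n_components=4, gamma=0.1):
    """blocks: (n_blocks, block_pixels) array of matched image blocks."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True, alpha=1e-3)
    z = kpca.fit_transform(blocks)          # nonlinear transform domain
    return kpca.inverse_transform(z)        # back-mapping to image domain

rng = np.random.default_rng(7)
clean = np.outer(np.ones(64), rng.normal(size=25))   # 64 similar 5x5 blocks
noisy = clean + 0.3 * rng.normal(size=clean.shape)   # aliasing-like noise
print(np.linalg.norm(denoise_block_array(noisy) - clean) <
      np.linalg.norm(noisy - clean))
```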
Localized Optogenetic Targeting of Rotors in Atrial Cardiomyocyte Monolayers.
Feola, Iolanda; Volkers, Linda; Majumder, Rupamanjari; Teplenin, Alexander; Schalij, Martin J; Panfilov, Alexander V; de Vries, Antoine A F; Pijnappels, Daniël A
2017-11-01
Recently, a new ablation strategy for atrial fibrillation has emerged, which involves the identification of rotors (i.e., local drivers) followed by localized targeting of their core region by ablation. However, this concept has been subject to debate because the mode of arrhythmia termination remains poorly understood, as dedicated models and research tools are lacking. We took a unique optogenetic approach to induce and locally target a rotor in atrial monolayers. Neonatal rat atrial cardiomyocyte monolayers expressing a depolarizing light-gated ion channel (Ca2+-translocating channelrhodopsin) were subjected to patterned illumination to induce single, stable, and centralized rotors by optical S1-S2 cross-field stimulation. Next, the core region of these rotors was specifically and precisely targeted by light to induce local conduction blocks of circular or linear shape. Conduction blocks crossing the core region, but not reaching any unexcitable boundary, did not lead to termination. Instead, electric waves started to propagate along the circumference of the block, thereby maintaining reentrant activity, although at lower frequency. If, however, core-spanning lines of block reached at least one unexcitable boundary, reentrant activity was consistently terminated by wave collision. Lines of block away from the core region resulted merely in rotor destabilization (i.e., drifting). Localized optogenetic targeting of rotors in atrial monolayers could thus lead to both stabilization and destabilization of reentrant activity. For termination, however, a line of block is required that reaches from the core region to at least one unexcitable boundary. These findings may improve our understanding of the mechanisms involved in rotor-guided ablation.
Thermoreversible networks for moldable photo-responsive elastomers (Presentation Recording)
NASA Astrophysics Data System (ADS)
Kornfield, Julia A.; Kurji, Zuleikha
2015-10-01
Soft solids that retain the responsive optical anisotropy of liquid crystals (LC) can be used as mechano-optical, electro-optical and electro-mechanical elements. We use self-assembly of block copolymers to create reversible LC gels and elastomers that flow at elevated temperatures and physically cross-link upon cooling. In the melt, they can be spun, coated or molded. Segregation of the end blocks forms uniform and uniformly spaced crosslinks. Matched sets of block copolymers are synthesized from a single "prepolymer." Specifically, we begin with polymers having polystyrene (PS) end blocks and a poly(1,2-butadiene) midblock. The pendant vinyl groups along the backbone of the midblock are used to graft mesogens, converting it to a side-group LC polymer (SGLCP). In the present case, cyanobiphenyl groups are used as the non-photoresponsive mesogens and azobenzene groups as the photoresponsive mesogens. Here we show that matched pairs of block copolymers, with and without photoresponsive mesogens, provide model systems in which the optical density can be adjusted while holding other properties fixed (cross-link density, modulus, birefringence, isotropic-nematic transition temperature). For example, a triblock in which the SGLCP block has 95% cyanobiphenyl and 5% azo side groups is miscible with one having 100% cyanobiphenyl side groups. Simply blending the two gives a series of LC elastomers that range from 0 to 5% azo while having all other physical properties matched. Results will be presented that show the outcomes of this approach to systematic and largely independent control of optical density and photo-mechanical sensitivity.
Control structural interaction testbed: A model for multiple flexible body verification
NASA Technical Reports Server (NTRS)
Chory, M. A.; Cohen, A. L.; Manning, R. A.; Narigon, M. L.; Spector, V. A.
1993-01-01
Conventional end-to-end ground tests for verification of control system performance become increasingly complicated with the development of large, multiple flexible body spacecraft structures. The expense of accurately reproducing the on-orbit dynamic environment and the attendant difficulties in reducing and accounting for ground test effects limits the value of these tests. TRW has developed a building block approach whereby a combination of analysis, simulation, and test has replaced end-to-end performance verification by ground test. Tests are performed at the component, subsystem, and system level on engineering testbeds. These tests are aimed at authenticating models to be used in end-to-end performance verification simulations: component and subassembly engineering tests and analyses establish models and critical parameters, unit level engineering and acceptance tests refine models, and subsystem level tests confirm the models' overall behavior. The Precision Control of Agile Spacecraft (PCAS) project has developed a control structural interaction testbed with a multibody flexible structure to investigate new methods of precision control. This testbed is a model for TRW's approach to verifying control system performance. This approach has several advantages: (1) no allocation for test measurement errors is required, increasing flight hardware design allocations; (2) the approach permits greater latitude in investigating off-nominal conditions and parametric sensitivities; and (3) the simulation approach is cost effective, because the investment is in understanding the root behavior of the flight hardware and not in the ground test equipment and environment.
A top-down approach in control engineering third-level teaching: The case of hydrogen-generation
NASA Astrophysics Data System (ADS)
Setiawan, Eko; Habibi, M. Afnan; Fall, Cheikh; Hodaka, Ichijo
2017-09-01
This paper presents a top-down approach to third-level control engineering teaching. The paper frames control engineering around a practical implementation problem in order to motivate students. The proposed strategy focuses on a single control engineering technique so as to guide students correctly. The proposed teaching steps are: 1) defining the problem; 2) listing the acquired knowledge or required skills; 3) selecting one control engineering technique; 4) arranging the order of teaching: problem introduction, implementation of the control engineering technique, explanation of the system block diagram, model derivation, and controller design; and 5) enriching knowledge with other control techniques. The presented approach highlights hardware implementation and the use of software simulation as a self-learning tool for students.
Rhee, Minsoung; Burns, Mark A
2008-08-01
An assembly approach for microdevice construction using prefabricated microfluidic components is presented. Although microfluidic systems are convenient platforms for biological assays, their use in the life sciences is still limited mainly due to the high-level fabrication expertise required for construction. This approach involves prefabrication of individual microfluidic assembly blocks (MABs) in PDMS that can be readily assembled to form microfluidic systems. Non-expert users can assemble the blocks on glass slides to build their devices in minutes without any fabrication steps. In this paper, we describe the construction and assembly of the devices using the MAB methodology, and demonstrate common microfluidic applications including laminar flow development, valve control, and cell culture.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block-diagram-based schemes are proposed in this paper for modeling and simulating fractional-order control systems. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling systems with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed through examples; the computational results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs.
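What such a fractional operator block evaluates can be illustrated with a Grünwald-Letnikov approximation; this is a minimal sketch for zero initial conditions, not the paper's Simulink construction:

```python
# Minimal Grünwald-Letnikov sketch of a fractional-order derivative on a
# uniform time grid (zero initial conditions assumed).
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, dt):
    """Approximate D^alpha x(t) for samples x on a grid with step dt."""
    n = len(x)
    w = (-1.0) ** np.arange(n) * binom(alpha, np.arange(n))  # GL weights
    return np.array([w[:k + 1] @ x[k::-1] for k in range(n)]) / dt**alpha

t = np.linspace(0, 1, 201)
# D^0.5 of f(t) = t is t^0.5 / Gamma(1.5) ~ 1.128 at t = 1.
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
```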
Finite-size analysis of the detectability limit of the stochastic block model
NASA Astrophysics Data System (ADS)
Young, Jean-Gabriel; Desrosiers, Patrick; Hébert-Dufresne, Laurent; Laurence, Edward; Dubé, Louis J.
2017-06-01
It has been shown in recent years that the stochastic block model is sometimes undetectable in the sparse limit, i.e., that no algorithm can identify a partition correlated with the partition used to generate an instance, if the instance is sparse enough and infinitely large. In this contribution, we treat the finite case explicitly, using arguments drawn from information theory and statistics. We give a necessary condition for finite-size detectability in the general SBM. We then distinguish the concept of average detectability from the concept of instance-by-instance detectability and give explicit formulas for both definitions. Using these formulas, we prove that there exist large equivalence classes of parameters, where widely different network ensembles are equally detectable with respect to our definitions of detectability. In an extensive case study, we investigate the finite-size detectability of a simplified variant of the SBM, which encompasses a number of important models as special cases. These models include the symmetric SBM, the planted coloring model, and more exotic SBMs not previously studied. We conclude with three appendices, where we study the interplay of noise and detectability, establish a connection between our information-theoretic approach and random matrix theory, and provide proofs of some of the more technical results.
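For orientation, the classic infinite-size detectability condition that this finite-size analysis refines can be checked in a few lines; this is the Kesten-Stigum threshold for the symmetric SBM with q equal groups, stated under the usual sparse-limit assumptions:

```python
# Sketch of the Kesten-Stigum detectability condition for the symmetric
# SBM with q equal groups: detectable when |c_in - c_out| > q * sqrt(c),
# where c is the average degree.
import math

def ks_detectable(c_in, c_out, q):
    c = (c_in + (q - 1) * c_out) / q        # average degree
    return abs(c_in - c_out) > q * math.sqrt(c)

print(ks_detectable(8.0, 2.0, 2))   # True: strong communities
print(ks_detectable(5.5, 4.5, 2))   # False: below the threshold
```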
An early warning indicator for atmospheric blocking events using transfer operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tantet, Alexis, E-mail: a.j.j.tantet@uu.nl; Burgt, Fiona R. van der; Dijkstra, Henk A.
The existence of persistent midlatitude atmospheric flow regimes with time-scales larger than 5–10 days, and indications of preferred transitions between them, motivates the development of early warning indicators for such regime transitions. In this paper, we use a hemispheric barotropic model together with estimates of transfer operators on a reduced phase space to develop an early warning indicator of the zonal to blocked flow transition in this model. It is shown that the spectrum of the transfer operators can be used to study the slow dynamics of the flow as well as the non-Markovian character of the reduction. The slowest motions are thereby found to have time scales of three to six weeks and to be associated with meta-stable regimes (and their transitions) which can be detected as almost-invariant sets of the transfer operator. From the energy budget of the model, we are able to explain the meta-stability of the regimes and the existence of preferred transition paths. Even though the model is highly simplified, the skill of the early warning indicator is promising, suggesting that the transfer operator approach can be used in parallel to an operational deterministic model for stochastic prediction or to assess forecast uncertainty.
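For readers unfamiliar with the machinery, a transfer operator on a reduced phase space can be estimated from a long trajectory by Ulam's method (box discretisation plus transition counting). The sketch below does this for a one-dimensional toy time series; it is not the paper's barotropic-model setup.

```python
import numpy as np

def ulam_transfer_operator(x, n_boxes, lag=1):
    """Ulam estimate of the transfer operator on a 1-D reduced phase space:
    partition the range of x into boxes and count box-to-box transitions."""
    edges = np.linspace(x.min(), x.max(), n_boxes + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_boxes - 1)
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(idx[:-lag], idx[lag:]):
        P[i, j] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)  # row-stochastic
    return P

# Toy trajectory with slow relaxation: eigenvalues just below 1 flag slow,
# meta-stable dynamics; the matching eigenvectors give almost-invariant sets.
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, x.size):
    x[t] = 0.99 * x[t - 1] + 0.1 * rng.standard_normal()
P = ulam_transfer_operator(x, n_boxes=20)
print(np.sort(np.abs(np.linalg.eigvals(P)))[::-1][:3])
```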
Hidden Markov model approach for identifying the modular framework of the protein backbone.
Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S
1999-12-01
The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that it implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBB code was performed by extracting repeated SBB series from the recoded proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from protein sequences.
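The decoding step of such a model, recovering the most probable SBB series for a protein, is the standard Viterbi algorithm; a generic log-domain sketch (not the authors' code, with made-up example numbers) follows.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most probable hidden-state path (e.g. a series of SBBs) given start
    log-probs log_pi, transition log-probs log_A[i, j], and emission
    log-probs log_B[state, symbol] for a sequence of observed symbols."""
    n_states, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A               # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                     # trace back
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tiny 2-state, 3-symbol example with made-up probabilities.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(log_pi, log_A, log_B, [0, 1, 2, 2]))    # -> [0, 0, 1, 1]
```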
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence, as in standard intensity models. We consider partial likelihood-based inference under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identically distributed data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood, with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
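The block-bootstrap component is straightforward to sketch: resample overlapping blocks of the sequence with replacement so that short-range dependence survives the resampling. A minimal moving-block bootstrap, not the paper's estimator, applied to a toy AR(1) series:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, stat, rng=None):
    """Moving-block bootstrap for a statistic of a stationary sequence:
    resample overlapping blocks of length block_len with replacement and
    re-evaluate the statistic, preserving short-range dependence."""
    rng = rng or np.random.default_rng()
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        reps[b] = stat(sample)
    return reps

# Example: bootstrap standard error of the mean of an AR(1)-like series.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
reps = moving_block_bootstrap(x, block_len=25, n_boot=999, stat=np.mean, rng=rng)
print(reps.std())   # block-bootstrap estimate of the SE of the mean
```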
Design-based modeling of magnetically actuated soft diaphragm materials
NASA Astrophysics Data System (ADS)
Jayaneththi, V. R.; Aw, K. C.; McDaid, A. J.
2018-04-01
Magnetic polymer composites (MPC) have shown promise for emerging biomedical applications such as lab-on-a-chip and implantable drug delivery. These soft material actuators are capable of fast response, large deformation, and wireless actuation. Existing MPC modeling approaches are computationally expensive and unsuitable for rapid design prototyping and real-time control applications. This paper proposes a macro-scale 1-DOF model capable of predicting the force and displacement of an MPC diaphragm actuator. Model validation confirmed that both blocked force and displacement can be accurately predicted under a variety of working conditions, i.e., different magnetic field strengths, static/dynamic fields, and gap distances. The contributions of this work include a comprehensive experimental investigation of a macro-scale diaphragm actuator; the derivation and validation of a new phenomenological model to describe MPC actuation; and insights into the proposed model's design-based functionality, i.e., its scalability and generalizability in terms of magnetic filler concentration and diaphragm diameter. Due to the lumped-element modeling approach, the proposed model can also be adapted to alternative actuator configurations, and thus presents a useful tool for the design, control, and simulation of novel MPC applications.
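To make the lumped-element idea concrete, here is a minimal 1-DOF mass-spring-damper sketch of a diaphragm centre driven by a gap-dependent magnetic force; the force law and all parameter values are hypothetical stand-ins, not the authors' fitted phenomenological model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def diaphragm(t, y, m=1e-4, c=0.05, k=200.0, F0=0.5, gap=5e-3):
    """1-DOF lumped model: displacement x and velocity v of the diaphragm
    centre under a magnetic force that grows as the air gap closes."""
    x, v = y
    F_mag = F0 / (1.0 + ((gap - x) / gap) ** 2)   # hypothetical force law
    return [v, (F_mag - c * v - k * x) / m]

sol = solve_ivp(diaphragm, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
print(sol.y[0, -1])   # settles at the static deflection x = F_mag(x) / k
print(0.5 / 2.0)      # blocked force: F_mag evaluated at x = 0 (N)
```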
An Approach for On-Board Software Building Blocks Cooperation and Interfaces Definition
NASA Astrophysics Data System (ADS)
Pascucci, Dario; Campolo, Giovanni; Candia, Sante; Lisio, Giovanni
2010-08-01
This paper provides an insight into the avionics SW architecture developed by Thales Alenia Space Italy (TAS-I) to structure the OBSW as a set of self-standing and re-usable building blocks. The underlying framework for building-block cooperation is described first; it is based on ECSS-E-70 packet forwarding (for service requests to a building block) and standard parameter exchange for data communication. Subsequently, the high flexibility and scalability of the resulting architecture are discussed, reporting as an example an implementation of the Failure Detection, Isolation and Recovery (FDIR) function which exploits the proposed architecture. The presented approach evolves from the avionics SW architecture developed in the scope of the PRIMA project (Multi-Purpose Italian Re-configurable Platform) and has been adopted for the Sentinel-1 Avionic Software (ASW).
A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4
NASA Technical Reports Server (NTRS)
Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.
1998-01-01
Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.
NASA Technical Reports Server (NTRS)
Kim, Kyu-Myong; Lau, K. M.; Wu, H. T.; Kim, Maeng-Ki; Cho, Chunho
2012-01-01
The Russia heat wave and wildfires of the summer of 2010 were the most extreme weather event in the history of the country. Studies show that the root cause of the 2010 Russia heat wave/wildfires was an atmospheric blocking event which started to develop at the end of June and peaked around late July and early August. Atmospheric blocking in the summer of 2010 was anomalous in terms of its size, duration, and location, which was shifted to the east of the normal position. This and other similar continental-scale severe summertime heat waves and blocking events in recent years have raised the question of whether such events are occurring more frequently and with higher intensity in a warmer climate induced by greenhouse gases. We studied the spatial and temporal distributions of the occurrence and intensity of atmospheric blocking and associated heat waves for northern summer over Eurasia based on CMIP5 model simulations. To examine the global-warming-induced change in atmospheric blocking and heat waves, experiments for a high emissions scenario (RCP8.5) and a medium mitigation scenario (RCP4.5) are compared to the 20th century simulations (historical). Most models simulate the mean distributions of blocking reasonably well, including the major blocking centers over Eurasia, the northern Pacific, and the northern Atlantic. However, the models tend to underestimate the number of blocking events compared to the MERRA and NCEP/DOE reanalyses, especially in western Siberia. Models also reproduced the associated heat waves in terms of the shift in the probability distribution function of near-surface temperature. Seven out of eight models used in this study show that the frequency of atmospheric blocking over Europe will likely decrease in a warmer climate, but slightly increase over western Siberia. This spatial pattern resembles the blocking in the summer of 2010, indicating the possibility of more frequent occurrences of heat waves in western Siberia. In this talk, we will also discuss the potential effect of atmosphere-land feedback, particularly how a wetter spring affects the frequency and intensity of atmospheric blocking and heat waves during summer.
NASA Astrophysics Data System (ADS)
Aubé, M.; Simoneau, A.
2018-05-01
Illumina is one of the most physically detailed artificial night sky brightness models to date. It has been in continuous development since 2005 [1]. In 2016-17, many improvements were made to the Illumina code, including an overhead cloud scheme, an improved blocking scheme for subgrid obstacles (trees and buildings), and, most importantly, a full hyperspectral modeling approach. Code optimization resulted in a significant reduction in execution time, enabling users to run the model on standard personal computers for some applications. After describing the new schemes introduced in the model, we give some examples of applications for a peri-urban and a rural site, both located inside the International Dark Sky Reserve of Mont-Mégantic (QC, Canada).
Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach
NASA Astrophysics Data System (ADS)
Rodrigues, D. B. B.
2015-12-01
Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources, including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of the Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multi-model framework and provided by each model uncertainty estimation approach. The method is general and can be easily extended, forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
Possibilities of rock constitutive modelling and simulations
NASA Astrophysics Data System (ADS)
Baranowski, Paweł; Małachowski, Jerzy
2018-01-01
The paper deals with the problem of finite element modelling and simulation of rock. The authors' main intention was to present the possibilities of different approaches to rock constitutive modelling. For this purpose granite was selected, due to the wide recognition of its mechanical properties and its prevalence in the literature. Two significantly different constitutive material models were implemented to simulate granite fracture in various configurations: the Johnson-Holmquist ceramic model, which is very often used for predicting the behavior of rock and other brittle materials, and a simple linear elastic model with brittle failure, which can be used for simulating glass fracture. Four cases with different loading conditions were chosen to compare the aforementioned constitutive models: a uniaxial compression test, a notched three-point-bending test, a copper ball impacting a block, and a small-scale blasting test.
Nagata, Jun; Watanabe, Jun; Sawatsubashi, Yusuke; Akiyama, Masaki; Arase, Koichi; Minagawa, Noritaka; Torigoe, Takayuki; Hamada, Kotaro; Nakayama, Yoshifumi; Hirata, Keiji
2017-04-04
Although the laparoscopic approach reduces pain associated with abdominal surgery, postoperative pain remains a problem. Ultrasound-guided rectus sheath block and transversus abdominis plane block have become increasingly popular means of providing analgesia for laparoscopic surgery. Ninety patients were enrolled in this study. A laparoscopic puncture needle was inserted via the port, and levobupivacaine was injected into the correct plane through the peritoneum. The patients' postoperative pain intensity was assessed using a numeric rating scale. The effects of laparoscopic nerve block were compared with those of percutaneous anesthesia. This novel form of transperitoneal anesthesia did not jeopardize completion of the operative procedures. The percutaneous approach required more time to perform than the transperitoneal technique. This new analgesia technique could become an optional postoperative treatment regimen for various laparoscopic abdominal surgeries. Our main suggestion is that the transperitoneal approach has the advantage of a higher completion rate; a percutaneous technique is sometimes difficult in patients with severe obesity and/or coagulation disorders. Additional studies are required to evaluate its benefits. Copyright © 2017. Published by Elsevier Taiwan.
Underwood, Harold; Kilheffer, Chellby R.; Francis, Robert A.; Millington, James D. A.; Chadwick, Michael A.
2016-01-01
Expanding ungulate populations are causing concern for wildlife professionals and residents in many urban areas worldwide. Nowhere is the phenomenon more apparent than in the eastern US, where urban white-tailed deer (Odocoileus virginianus) populations are increasing. Most habitat suitability models for deer have been developed in rural areas and across large (>1000 km²) spatial extents. Only recently have we begun to understand the factors that contribute to space use by deer over much smaller spatial extents. In this study, we explore the concepts, terminology, methodology, and state of the science in wildlife abundance modeling as applied to overabundant deer populations across heterogeneous urban landscapes. We used classified, high-resolution digital orthoimagery to extract landscape characteristics in several urban areas of upstate New York. In addition, we assessed deer abundance and distribution in 1-km² blocks across each study area from either aerial surveys or ground-based distance sampling. We recorded the number of detections in each block and used binomial mixture models to explore important relationships between abundance and key landscape features. Finally, we cross-validated statistical models of abundance and compared covariate relationships across study sites. Study areas were characterized along a gradient of urbanization based on the proportions of impervious surfaces and natural vegetation which, based on the best-supported models, also distinguished blocks potentially occupied by deer. Models performed better at identifying occurrence of deer and worse at predicting abundance in cross-validation comparisons. We attribute the poor predictive performance to differences in deer population trajectories over time. The proportion of impervious surfaces often yielded better predictions of abundance and occurrence than did the proportion of natural vegetation, which we attribute to a lack of certain land cover classes during cold and snowy winters. Merits and limitations of our approach to habitat suitability modeling are discussed in detail.
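The binomial mixture (N-mixture) likelihood behind such repeated-count models is compact enough to sketch directly: latent block abundance is Poisson, and each survey detects each animal independently with probability p. A simulated toy fit, without the paper's landscape covariates:

```python
import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import minimize

def nmix_negloglik(params, counts, n_max=200):
    """Negative log-likelihood of a binomial (N-)mixture model: latent block
    abundance N_i ~ Poisson(lam), repeat counts y_it ~ Binomial(N_i, p)."""
    lam = np.exp(params[0])                   # log-link for abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))      # logit-link for detection
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for y in counts:                          # one row of repeat counts per block
        lik = prior * np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
        ll += np.log(lik.sum() + 1e-300)
    return -ll

# Simulated example: 60 blocks, 3 repeat surveys, lambda = 6, p = 0.4.
rng = np.random.default_rng(2)
N_true = rng.poisson(6.0, size=60)
counts = rng.binomial(N_true[:, None], 0.4, size=(60, 3))
fit = minimize(nmix_negloglik, x0=[np.log(5.0), 0.0], args=(counts,))
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))  # lam_hat, p_hat
```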
NASA Astrophysics Data System (ADS)
Schaller, N.; Sillmann, J.; Anstey, J.; Fischer, E. M.; Grams, C. M.; Russo, S.
2018-05-01
Better preparedness for summer heatwaves could mitigate their adverse effects on society. This can potentially be attained through an increased understanding of the relationship between heatwaves and one of their main dynamical drivers, atmospheric blocking. In the 1979–2015 period, we find that there is a significant correlation between summer heatwave magnitudes and the number of days influenced by atmospheric blocking in Northern Europe and Western Russia. Using three large global climate model ensembles, we find similar correlations, indicating that these three models are able to represent the relationship between extreme temperature and atmospheric blocking, despite having biases in their simulation of individual climate variables such as temperature or geopotential height. Our results emphasize the need to use large ensembles of different global climate models as single realizations do not always capture this relationship. The three large ensembles further suggest that the relationship between summer heatwaves and atmospheric blocking will not change in the future. This could be used to statistically model heatwaves with atmospheric blocking as a covariate and aid decision-makers in planning disaster risk reduction and adaptation to climate change.
Gwanpua, Sunny George; Verlinden, Bert E; Hertog, Maarten Latm; Nicolai, Bart M; Geeraerd, Annemie H
2017-08-01
1-Methylcyclopropene (1-MCP) inhibits ripening in climacteric fruit by blocking ethylene receptors, preventing ethylene from binding and eliciting its action. The objective of the current study was to use mathematical models to describe 1-MCP inhibition of apple fruit ripening, and to provide a tool for predicting ethylene production and two important quality indicators of apple fruit, firmness and background colour. A model consisting of coupled differential equations describing 1-MCP inhibition of apple ripening was developed. Data on ethylene production, expression of ethylene receptors, firmness, and background colour during ripening of untreated and 1-MCP treated apples were used to calibrate the model. An overall adjusted R² of 95% was obtained. The impact of time from harvest to treatment and of harvest maturity on 1-MCP efficacy was modelled. Different hypotheses on the partial response of 'Jonagold' apple to 1-MCP treatment were tested using the model. The model was validated using an independent dataset. Low 1-MCP blocking efficacy was shown to be the most likely cause of partial response for delayed 1-MCP treatment and for 1-MCP treatment of late-picked apples. Time from harvest to treatment was a more important factor than maturity for 1-MCP efficacy in 'Jonagold' apples. © 2017 Society of Chemical Industry.
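The general shape of such a model, coupled ODEs in which 1-MCP removes free receptors and ethylene action drives softening, can be sketched in a few lines; this is a schematic stand-in with invented rate constants, not the calibrated 'Jonagold' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ripening(t, y, k_block=0.5, k_rec=0.02, k_eth=0.3, k_soft=0.05, mcp=1.0):
    """y = [free receptor fraction R, ethylene E, firmness F]. 1-MCP blocks
    receptors (k_block * mcp * R); new receptors are synthesised (k_rec);
    ethylene acts only through free receptors. All k_* are hypothetical."""
    R, E, F = y
    signal = E * R                       # ethylene action needs free receptors
    dR = k_rec * (1.0 - R) - k_block * mcp * R
    dE = k_eth * signal * (1.0 - E)      # autocatalytic ethylene production
    dF = -k_soft * signal * F            # softening driven by ethylene action
    return [dR, dE, dF]

y0 = [1.0, 0.05, 100.0]                  # fruit at harvest
t = np.linspace(0.0, 60.0, 241)          # days
untreated = solve_ivp(ripening, (0, 60), y0, t_eval=t,
                      args=(0.5, 0.02, 0.3, 0.05, 0.0))   # mcp = 0
treated = solve_ivp(ripening, (0, 60), y0, t_eval=t)      # mcp = 1 (default)
print(untreated.y[2, -1], treated.y[2, -1])  # treated fruit stays firmer
```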
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Alexander S.; Bryantsev, Vyacheslav S.
2016-06-20
An accurate description of solvation effects for trivalent lanthanide ions is a main stumbling block to the qualitative prediction of selectivity trends along the lanthanide series. In this work, we propose a simple model to describe the differential effect of solvation in the competitive binding of a ligand by lanthanide ions by including weakly coordinated counterions in complexes bearing more than a +1 charge. The success of the approach in quantitatively reproducing selectivities obtained from aqueous-phase complexation studies demonstrates its potential for the design and screening of new ligands for efficient size-based separation.
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in the Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach which supports a modular, or building block, approach to the buildup of space platforms are discussed. A concept has been developed and tested in a three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications for evaluating several algorithms for controlling the shape of Large Space Structures.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimum is taken as the optimal pit computed on the block expected profits, rather than expected grades, obtained from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes the expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than those of the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with the mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
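The core point, that the expected profit over simulations differs from the profit of the expected grade because block profit is a non-linear (ore-or-waste) function of grade, can be demonstrated in a few lines; the price, costs, and grade distribution below are illustrative.

```python
import numpy as np

def block_profit(grade, price=40.0, treat_cost=12.0, mine_cost=2.0, recovery=0.9):
    """Profit of mining one block: treat it as ore when treating beats
    sending the block to waste, otherwise pay the mining cost only."""
    ore_value = recovery * price * grade - treat_cost - mine_cost
    return np.maximum(ore_value, -mine_cost)

rng = np.random.default_rng(3)
sim_grades = rng.lognormal(mean=-1.0, sigma=0.8, size=500)  # conditional sims
stochastic = block_profit(sim_grades).mean()     # expected profit per block
classical = block_profit(sim_grades.mean())      # profit of the expected grade
print(stochastic, classical)   # stochastic >= classical (profit is convex)
```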
NASA Astrophysics Data System (ADS)
Christie, Dane; Register, Richard; Priestley, Rodney
Block copolymers can self-assemble into periodic structures containing a high internal surface area, nanoscale domain periods, and periodically varying composition profiles. Depending on their components, block copolymers may also exhibit variations in their dynamic properties, e.g., the glass transition temperature (Tg), across the domain period. Measuring the variation of Tg across the domain period of block copolymers has remained a significant challenge due to the nanometer length scale of the domain period. Here we use fluorescence spectroscopy and the selective incorporation of a pyrene-containing methacrylate monomer at various positions along the chain to characterize the distribution of glass transition temperatures across the domain period of an amorphous block copolymer. The pyrene-containing monomer location is determined from the monomer segment distribution calculated using self-consistent field theory. Our model system is a lamella-forming diblock copolymer of poly(butyl methacrylate-b-methyl methacrylate). We show that Tg is asymmetrically distributed across the interface; as the interface is approached, larger gradients in Tg exist in the hard PMMA-rich domain than in the soft PBMA-rich domain. By characterizing the Tg of PBMA or PMMA interfacial segments, we show that polymer dynamics at the interface are heterogeneous; there is a 15 K difference in Tg between PBMA interfacial segments and PMMA interfacial segments.
Conjugated block copolymers as model materials to examine charge transfer in donor-acceptor systems
NASA Astrophysics Data System (ADS)
Gomez, Enrique; Aplan, Melissa; Lee, Youngmin
Weak intermolecular interactions and disorder at junctions of different organic materials limit the performance and stability of organic interfaces and hence the applicability of organic semiconductors to electronic devices. The lack of control of interfacial structure has also prevented studies of how driving forces promote charge photogeneration, leading to conflicting hypotheses in the organic photovoltaic literature. Our approach has focused on utilizing block copolymer architectures (where critical interfaces are controlled and stabilized by covalent bonds) to provide the hierarchical structure needed for high-performance organic electronics from self-assembled soft materials. For example, we have demonstrated control of donor-acceptor heterojunctions through microphase-separated conjugated block copolymers to achieve 3% power conversion efficiencies in non-fullerene photovoltaics. Furthermore, incorporating the donor-acceptor interface within the molecular structure facilitates studies of charge transfer processes. Conjugated block copolymers enable studies of the driving force needed for exciton dissociation to charge transfer states, which must be large to maximize charge photogeneration but must be minimized to prevent losses in photovoltage in solar cell devices. Our work has systematically varied the chemical structure, energetics, and dielectric constant to perturb charge transfer. As a consequence, we predict a minimum dielectric constant needed to minimize the driving force and therefore simultaneously maximize photocurrent and photovoltage in organic photovoltaic devices.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method is observed to outperform the alternatives even with fewer samples.
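A stripped-down version of the recovery loop, a Landweber gradient step followed by soft thresholding in a sparsifying transform, is sketched below; it uses a DCT basis and a fixed-fraction threshold as stand-ins for the paper's wavelet bases and Laplacian-mixture-driven thresholding.

```python
import numpy as np
from scipy.fft import dct, idct

def spl_recover(y, A, n_iter=200, frac=0.05):
    """Projected-Landweber-style CS recovery sketch: gradient step on
    ||y - Ax||^2, then soft-threshold the DCT coefficients of x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / largest singular value^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)        # Landweber update
        c = dct(x, norm='ortho')
        tau = frac * np.abs(c).max()            # simple adaptive threshold
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
        x = idct(c, norm='ortho')
    return x

# Recover a DCT-sparse signal from m < n random measurements.
rng = np.random.default_rng(4)
n, m = 256, 96
coeffs = np.zeros(n)
coeffs[[3, 17, 40]] = [1.0, -0.7, 0.5]
x_true = idct(coeffs, norm='ortho')
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = spl_recover(A @ x_true, A)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small error
```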
From global to heavy-light: 5-point conformal blocks
NASA Astrophysics Data System (ADS)
Alkalaev, Konstantin; Belavin, Vladimir
2016-03-01
We consider Virasoro conformal blocks in the large central charge limit. There are different regimes depending on the behavior of the conformal dimensions. The simplest regime reduces to the global sl(2,C) conformal blocks, while the most complicated one is known as the classical conformal blocks. Recently, Fitzpatrick, Kaplan, and Walters showed that the two regimes are related through the intermediate stage of the so-called heavy-light semiclassical limit. We study this idea in the particular case of the 5-point conformal block. To find the 5-point global block we use the projector technique and the Casimir operator approach. Furthermore, we discuss the relation between the global and the heavy-light limits and construct the heavy-light block from the global block. In this way we reproduce our previous results for the 5-point perturbative classical block obtained by means of the monodromy method.
ERIC Educational Resources Information Center
Tidd, Simon T.; Stoelinga, Timothy M.; Bush-Richards, Angela M.; De Sena, Donna L.; Dwyer, Theodore J.
2018-01-01
Double-block instruction has become a popular strategy for supporting struggling mathematics students in algebra I. Despite its widespread adoption, little consistent evidence supports the attributes of a successful double-block design or the effectiveness of this instructional strategy. In this study, the authors examine a pilot implementation of…
Consolidation of Federal Aid Programs for Education: A Case for Block Grant Funding.
ERIC Educational Resources Information Center
Main, Robert G.
The need for a new approach to federal support of education by reducing the number of narrow categorical aid programs is developed through a case study of the 1976 Ford Administration proposal for a consolidated block grant of 24 separate authorities. The merits of block grant funding are examined both in terms of the administration-sponsored bill…
Retrofit Audits and Cost Estimates. A Look at Quality and Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenberg, L.; Shapiro, C.; Fleischer, W.
Retrofit NYC Block by Block is an outreach program targeting owners of one- to four-family homes, the most common building type in New York City, with more than 600,000 structures citywide. Administered by the Pratt Center for Community Development and implemented by four nonprofit, community-based organizations, Block by Block connects residents, businesses, and religious and civic organizations in predominantly low- and moderate-income neighborhoods with one or more of a half-dozen public and private financial incentive programs that facilitate energy-efficiency retrofits. This research project sought to evaluate the approach, effectiveness, and the energy use reductions accomplished by the Retrofit NYC: Block by Block program.
Development of the Functional Flow Block Diagram for the J-2X Rocket Engine System
NASA Technical Reports Server (NTRS)
White, Thomas; Stoller, Sandra L.; Greene, William D.; Christenson, Rick L.; Bowen, Barry C.
2007-01-01
The J-2X program calls for the upgrade of the Apollo-era Rocketdyne J-2 engine to higher power levels, using new materials and manufacturing techniques, and with more restrictive safety and reliability requirements than prior human-rated engines in NASA history. Such requirements demand a comprehensive systems engineering effort to ensure success. Pratt & Whitney Rocketdyne system engineers performed a functional analysis of the engine to establish the functional architecture. J-2X functions were captured in six major operational blocks. Each block was divided into sub-blocks or states. In each sub-block, the functions necessary to perform each state were determined. A functional engine schematic consistent with the fidelity of the system model was defined for this analysis. The blocks, sub-blocks, and functions were sequentially numbered to differentiate the states in which the functions were performed and to indicate the sequence of events. The Engine System was functionally partitioned to provide separate and unique functional operators. Establishing unique functional operators as a work output of the System Architecture process is novel in liquid propulsion engine design. Each functional operator was described such that its unique functionality was identified. The decomposed functions were then allocated to the functional operators, both of which were inputs to the subsystem and component performance specifications. PWR also used a novel approach to identify and map the engine functional requirements to customer-specified functions. The final result was a comprehensive Functional Flow Block Diagram (FFBD) for the J-2X Engine System, decomposed to the component level and mapped to all functional requirements. This FFBD greatly facilitates component specification development, providing a well-defined trade space for functional trades at the subsystem and component level. It also provides a framework for function-based failure modes and effects analysis (FMEA), and a rigorous baseline for the functional architecture.
Quistberg, D. Alex; Howard, Eric J.; Ebel, Beth E.; Moudon, Anne V.; Saelens, Brian E.; Hurvitz, Philip M.; Curtin, James E.; Rivara, Frederick P.
2015-01-01
Walking is a popular form of physical activity associated with clear health benefits. Promoting safe walking for pedestrians requires evaluating the risk of pedestrian-motor vehicle collisions at specific roadway locations in order to identify where road improvements and other interventions may be needed. The objective of this analysis was to estimate the risk of pedestrian collisions at intersections and mid-blocks in Seattle, WA. The study used 2007-2013 pedestrian-motor vehicle collision data from police reports and detailed characteristics of the microenvironment and macroenvironment at intersection and mid-block locations. The primary outcome was the number of pedestrian-motor vehicle collisions over time at each location, modeled as incident rate ratios (IRR) with 95% confidence intervals (95% CI). Multilevel mixed-effects Poisson models accounted for correlation within and between locations and census blocks over time. The analysis accounted for pedestrian and vehicle activity (e.g., residential density and road classification). In the final multivariable model, intersections with 4 segments or with 5 or more segments had higher pedestrian collision rates than mid-blocks. Non-residential roads had significantly higher rates than residential roads, with principal arterials having the highest collision rate. The pedestrian collision rate was higher by 9% per 10 feet of street width. Locations with traffic signals had twice the collision rate of locations without a signal, and those with marked crosswalks also had a higher rate. Locations with a one-way road or with signs encouraging motorists to cede the right-of-way to pedestrians had fewer pedestrian collisions. Collision rates were higher in locations that encourage greater pedestrian activity (more bus use, more fast food restaurants, and higher employment, residential, and population densities). Locations with higher intersection density had a lower rate of collisions, as did those in areas with higher residential property values. The novel spatiotemporal approach used here, which integrates road/crossing characteristics with surrounding neighborhood characteristics, should help city agencies better identify high-risk locations for further study and analysis. Improving roads and making them safer for pedestrians achieves the public health goals of reducing pedestrian collisions and promoting physical activity. PMID:26339944
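The rate-ratio logic is easy to demonstrate with a plain (single-level) Poisson regression on synthetic data, where exponentiated coefficients are IRRs; this omits the paper's multilevel random-effects structure, and the covariate effects below are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
width = rng.normal(40.0, 10.0, n)        # street width in feet (invented)
signal = rng.integers(0, 2, n)           # traffic signal present (invented)
years = np.full(n, 7.0)                  # exposure: years each site observed
y = rng.poisson(np.exp(-3.0 + 0.009 * width + 0.7 * signal) * years)
X = sm.add_constant(np.column_stack([width, signal]))
fit = sm.GLM(y, X, family=sm.families.Poisson(), exposure=years).fit()
irr = np.exp(fit.params)                 # exponentiated coefficients = IRRs
print(irr[1] ** 10)  # IRR per 10 feet of street width (cf. the reported 9%)
print(irr[2])        # IRR for signalised vs. unsignalised locations
```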
SLS Model Based Design: A Navigation Perspective
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin
2018-01-01
The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.
Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals.
Pelot, N A; Behrend, C E; Grill, W M
2017-08-01
There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics' vBloc® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed 're-excitation', arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and a greater change of threshold with distance as compared to the realistic cuff and nerve model. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.
Boomer, Sarah M.; Latham, Kristin L.
2011-01-01
The first course in our year-long introductory series for Biology majors encompasses four learning units: biological molecules and cells, metabolism, genetics, and evolution. Of these, the metabolism unit, which includes respiration and photosynthesis, has shown the lowest student exam scores, least interest, and lowest laboratory ratings. Consequently, we hypothesized that modeling metabolic processes in the laboratory would improve student content learning during this course unit. Specifically, we developed manipulatives-based laboratory exercises that combined paper cutouts, movable blocks, and large diagrams of the cell. In particular, our novel use of connecting LEGO blocks allowed students to move model electrons and phosphates between molecules and within defined spaces of the cell. We assessed student learning using both formal (content indicators and attitude surveys) and informal (the identification of misconceptions or discussions with students) approaches. On the metabolism unit content exam, student performance improved by 46% over pretest scores and by the end of the course, the majority of students rated metabolism as their most-improved (43%) and favorite (33%) subject as compared with other unit topics. The majority of students rated manipulatives-based labs as very helpful, as compared to non-manipulatives-based labs. In this report, we will demonstrate that students made learning gains across all content areas, but most notably in the unit that covered respiration and photosynthesis. PMID:23653756
Non-Destructive Approaches for the Validation of Visually Observed Spatial Patterns of Decay
NASA Astrophysics Data System (ADS)
Johnston, Brian; McKinley, Jennifer; Warke, Patricia; Ruffell, Alastair
2017-04-01
Historical structures are regarded as a built legacy that is passed down through the generations, and as such the conservation and restoration of these buildings is of great importance to governmental, religious, and charitable organisations. As these groups play the role of custodians of this built heritage, they are keen that the approaches employed in studies of stone condition be non-destructive in nature. Determining the sections of facades requiring repair work is often achieved through a visual condition inspection of the stonework by a specialist. However, these reports focus upon identifying blocks requiring restorative action rather than determining the spatial trends that lead to the identification of causes. This fixation on decay occurring at the block scale results in the spatial distribution of weathering present at the larger 'wall' scale appearing to have developed chaotically. Recent work has shown the importance of adopting a geomorphological focus when undertaking visual inspection of the facades of historical buildings to overcome this issue. Once trends have been ascertained, they can be used to bolster remedial strategies that target the sources of decay rather than just undertaking an aesthetic treatment of symptoms. Visual inspection of the study site, Fitzroy Presbyterian Church in Belfast, using the geomorphologically driven approach revealed three features suggestive of decay extending beyond the block scale: firstly, the influence of architectural features on the susceptibility of blocks to decay; secondly, the impact of the fluctuation in groundwater rise over the seasons and the influence of aspect upon this process; and finally, the interconnectivity of blocks, due to deteriorating mortar and poor repointing, providing conduits for the passage of moisture. Once these patterns were identified, it proved necessary to validate the outcome of the visual inspection using other techniques. In this study, three complementary approaches were employed: ground penetrating radar (GPR), probe permeametry, and 3D modelling. Each of these strategies was selected as being both capable of substantiating the suggested causes of the visible decay trends and non-destructive in nature. GPR was employed to detect variations in the wall corresponding to the presence of hollows or moisture within the wall sections. The returns support the conclusion that empty spaces, created through the deterioration of mortar, exist within the wall, allowing the passage of moisture. Using probe permeametry, the surface permeability of the wall was measured, and the results were analysed using kriging. The variograms created for this purpose suggest a significant directional element. 3D models created by scanning the wall sections were used to calculate a measure of roughness for the surfaces of the study area. Because the stonework at the church is hammer-dressed, the effectiveness of determining changing roughness was restricted; however, some variation was identified. Through the combined use of these techniques, the wall-scale trends suggested by the results of the visual inspection were validated. Thus the potential of these techniques, in particular GPR, for supporting future studies of decay is promising.
A study of the tolerance block approach to special stratification. [winter wheat in Kansas
NASA Technical Reports Server (NTRS)
Richardson, W. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Twelve winter wheat LACIE segments in Kansas were used to compare the performance of three clustering methods: (1) BCLUST, which uses a spectral distance function to accumulate clusters; (2) blocks-alone, which divides spectral space into equally populated blocks; and (3) block-seeds, which uses the spectral means of blocks-alone as seeds for accumulating distance-type clusters. Both BCLUST and block-seeds performed equally well and significantly outperformed blocks-alone. Their average variance ratio of about 0.5 showed imperfect separation of wheat from non-wheat. This result points to the need to explore the achievable crop separability in the spectral/temporal domain, and suggests evaluating derived features rather than data channels as a means to achieve purer spectral strata.
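A sketch of the block-seeds idea: split each spectral band at its quantiles (equally populated blocks), average the pixels in each occupied block, and feed the most populated block means as seeds to a distance-based clusterer (k-means here, as a stand-in for the LACIE-era distance clustering).

```python
import numpy as np
from sklearn.cluster import KMeans

def block_seed_cluster(X, bins=3, n_clusters=2):
    """'Block-seeds' sketch: per-band quantile blocks, block means as seeds,
    then a distance-based clustering started from those seeds."""
    q = np.linspace(0.0, 1.0, bins + 1)
    codes = np.stack([np.clip(np.digitize(X[:, d], np.quantile(X[:, d], q)) - 1,
                              0, bins - 1) for d in range(X.shape[1])], axis=1)
    keys, counts = np.unique(codes, axis=0, return_counts=True)
    means = np.array([X[(codes == k).all(axis=1)].mean(axis=0) for k in keys])
    seeds = means[np.argsort(counts)[::-1][:n_clusters]]  # most populated blocks
    return KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(X)

# Synthetic 4-band data with two spectral classes of 200 pixels each.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(4.0, 1.0, (200, 4))])
print(np.bincount(block_seed_cluster(X).labels_))   # two clusters of ~200
```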
Anti-lysophosphatidic acid antibodies improve traumatic brain injury outcomes
2014-01-01
Background: Lysophosphatidic acid (LPA) is a bioactive phospholipid with a potentially causative role in neurotrauma. Blocking LPA signaling with the LPA-directed monoclonal antibody B3/Lpathomab is neuroprotective in the mouse spinal cord following injury. Findings: Here we investigated the use of this agent in the treatment of secondary brain damage consequent to traumatic brain injury (TBI). LPA was elevated in the cerebrospinal fluid (CSF) of patients with TBI compared to controls. LPA levels were also elevated in a mouse controlled cortical impact (CCI) model of TBI, and B3 significantly reduced lesion volume by both histological and MRI assessments. Diminished tissue damage coincided with lower brain IL-6 levels and improvement in functional outcomes. Conclusions: This study presents a novel therapeutic approach for the treatment of TBI: blocking extracellular LPA signaling to minimize secondary brain damage and neurological dysfunction. PMID:24576351
Transient photocurrent in molecular junctions: singlet switching on and triplet blocking.
Petrov, E G; Leonov, V O; Snitsarev, V
2013-05-14
The kinetic approach adapted to describe charge transmission in molecular junctions is used to analyze the photocurrent under conditions of moderate light intensity on the photochromic molecule. In the framework of the HOMO-LUMO model for single-electron molecular states, analytic expressions describing the temporal behavior of the transient and steady-state sequential (hopping) as well as direct (tunnel) current components have been derived. The conditions under which the current components achieve their maximal values are indicated. It is shown that if the rates of charge transmission in the unbiased molecular diode are much lower than the intramolecular singlet-singlet excitation/de-excitation rate, and the threefold-degenerate triplet excited state of the molecule behaves like a trap blocking charge transmission, a large peak-like transient switch-on photocurrent can arise.
NASA Astrophysics Data System (ADS)
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial differential equations using a second-order accurate, fully coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predictions of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present scheme predicts results which are in good agreement with published experimental results and significantly reduces the computational cost of modelling turbulent diffusion flames, both in terms of storage and processing time.
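The PCM closure itself reduces to weighting a tabulated quantity by a beta PDF whose shape parameters are set from the resolved mean and variance of the mixture fraction. A minimal sketch; the bell-shaped 'source term' is invented rather than taken from a GRI-Mech 3.0 table:

```python
import numpy as np
from scipy.stats import beta as beta_dist
from scipy.integrate import trapezoid

def presumed_beta_mean(func, z_mean, z_var, n=2001):
    """Presumed-PDF (PCM) closure sketch: mean of a tabulated quantity
    func(Z) of mixture fraction Z, weighted by a beta PDF whose shape
    parameters follow from the mean and variance of Z (requires
    z_var < z_mean * (1 - z_mean))."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * g, (1.0 - z_mean) * g
    z = np.linspace(0.0, 1.0, n)
    w = beta_dist.pdf(z, a, b)
    return trapezoid(func(z) * w, z) / trapezoid(w, z)

# Example: a bell-shaped "source term" peaking at stoichiometric Z = 0.3.
src = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
print(src(0.3), presumed_beta_mean(src, z_mean=0.3, z_var=0.01))
# Turbulent fluctuations smear the peak: the PCM mean lies well below src(0.3).
```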
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hong; Kong, Vic; Ren, Lei
2016-01-15
Purpose: A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and use an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose due to the grid blocking half of the incident radiation. Methods: The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least square regression of these observations to finally determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal to noise ratio (SNR), which represents the difference between the recovered CBCT image and the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench-top CBCT system. Results: In the simulated SMOG experiment, the SNRs were increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB, compared with a conventional interpolation method (inpainting method), for a projection and the reconstructed 3D image, respectively, suggesting that the IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors successfully reconstructed a CBCT image using the IPSF-SMOG approach. The detailed geometric features in the Catphan phantom were mostly recovered according to visual evaluation. The scatter-related artifacts, such as cupping artifacts, were almost completely removed. Conclusions: The IPSF-SMOG approach is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
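A toy version of the fusion step: treat the unblocked values found at nearby positions in the two adjacent projections as noisy observations of the blocked pixel and combine them by weighted least squares (for a constant model, the WLS solution is the weighted mean). The Gaussian distance weights are an assumption, not the authors' regression weights.

```python
import numpy as np

def ipsf_estimate(neigh_vals, neigh_dists, sigma=1.0):
    """Sketch of interprojection sensor fusion: combine paired observations
    of a blocked pixel by weighted least squares; for a constant model this
    reduces to the weighted mean of the observations."""
    w = np.exp(-0.5 * (np.asarray(neigh_dists) / sigma) ** 2)
    return np.sum(w * np.asarray(neigh_vals)) / np.sum(w)

# Example: four paired observations of one blocked detector pixel.
print(ipsf_estimate([102.0, 98.0, 110.0, 95.0], [0.5, 0.5, 1.5, 2.0]))
```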
Universal RCFT correlators from the holomorphic bootstrap
NASA Astrophysics Data System (ADS)
Mukhi, Sunil; Muralidhara, Girish
2018-02-01
We elaborate and extend the method of Wronskian differential equations for conformal blocks to compute four-point correlation functions on the plane for classes of primary fields in rational (and possibly more general) conformal field theories. This approach leads to universal differential equations for families of CFT's and provides a very simple re-derivation of the BPZ results for the degenerate fields ϕ 1,2 and ϕ 2,1 in the c < 1 minimal models. We apply this technique to compute correlators for the WZW models corresponding to the Deligne-Cvitanović exceptional series of Lie algebras. The application turns out to be subtle in certain cases where there are multiple decoupled primaries. The power of this approach is demonstrated by applying it to compute four-point functions for the Baby Monster CFT, which does not belong to any minimal series.
Anti-arrhythmic strategies for atrial fibrillation
Grandi, Eleonora; Maleckar, Mary M.
2016-01-01
Atrial fibrillation (AF), the most common cardiac arrhythmia, is associated with increased risk of cerebrovascular stroke and with several other pathologies, including heart failure. Current therapies for AF are targeted at reducing the risk of stroke (anticoagulation) and tachycardia-induced cardiomyopathy (rate or rhythm control). Rate control, typically achieved by atrioventricular nodal blocking drugs, is often insufficient to alleviate symptoms. Rhythm control approaches include antiarrhythmic drugs, electrical cardioversion, and ablation strategies. Here, we offer several examples of how computational modeling can provide a quantitative framework for integrating multi-scale data to: (a) gain insight into multi-scale mechanisms of AF; (b) identify and test pharmacological and electrical therapies and interventions; and (c) support clinical decisions. We review how modeling approaches have evolved and contributed to the research pipeline and preclinical development, and discuss future directions and challenges in the field. PMID:27612549
Electronic damping of anharmonic adsorbate vibrations at metallic surfaces
NASA Astrophysics Data System (ADS)
Tremblay, Jean Christophe; Monturet, Serge; Saalfrank, Peter
2010-03-01
The nonadiabatic coupling of an adsorbate close to a metallic surface leads to electronic damping of adsorbate vibrations and line broadening in vibrational spectroscopy. Here, a perturbative treatment of the electronic contribution to the lifetime broadening serves as a building block for a new approach, in which anharmonic vibrational transition rates are calculated from a position-dependent coupling function. Different models for the coupling function will be tested, all related to embedding theory. The first two are models based on a scattering approach with (i) a jellium-type and (ii) a density functional theory based embedding density, respectively. In a third variant a further refined model is used for the embedding density, and a semiempirical approach is taken in which a scaling factor is chosen to match harmonic, single-site, first-principles transition rates, obtained from periodic density functional theory. For the example of hydrogen atoms on (adsorption) and below (subsurface absorption) a Pd(111) surface, lifetimes of and transition rates between vibrational levels are computed. The transition rates emerging from different models serve as input for the selective subsurface absorption of hydrogen in palladium starting from an adsorption site, using sequences of infrared laser pulses in a laser distillation scheme.
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination task and an emotional task, respectively, during the visualization of emotionally valent faces. PMID:25463459
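As a rough illustration of this kind of pipeline, the sketch below estimates a sparse Gaussian graphical model per subject and feeds the resulting connectivity features to an L1-regularized linear SVM. It uses scikit-learn with synthetic data; it is a generic reconstruction of the two-stage idea, not the authors' optimized framework.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical data: time_series[subject] is (n_timepoints, n_regions);
# y holds group labels (0 = control, 1 = patient).
rng = np.random.default_rng(0)
time_series = [rng.normal(size=(120, 10)) for _ in range(20)]
y = np.array([0] * 10 + [1] * 10)

# Step 1: sparse inverse covariance (Gaussian graphical model) per
# subject; off-diagonal precision entries serve as connectivity features.
features = []
for ts in time_series:
    ggm = GraphicalLassoCV().fit(ts)
    iu = np.triu_indices_from(ggm.precision_, k=1)
    features.append(ggm.precision_[iu])
X = np.array(features)

# Step 2: an L1-regularized linear SVM selects a sparse, interpretable
# subset of connections that discriminate the groups.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000)
print(cross_val_score(clf, X, y, cv=5).mean())
```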
Computational modeling for cardiac safety pharmacology analysis: Contribution of fibroblasts.
Gao, Xin; Engel, Tyler; Carlson, Brian E; Wakatsuki, Tetsuro
2017-09-01
Drug-induced proarrhythmic potential is an important regulatory criterion in safety pharmacology. The application of in silico approaches to predict the proarrhythmic potential of new compounds is under consideration as part of future guidelines. Current approaches simulate the electrophysiology of a single human adult ventricular cardiomyocyte. However, drug-induced proarrhythmic potential can be different when cardiomyocytes are surrounded by non-muscle cells. Incorporating fibroblasts in models of myocardium is particularly important for predicting a drug's cardiac liability in the aging population - a growing population who take more medications and exhibit increased cardiac fibrosis. In this study, we used computational models to investigate the effects of fibroblast coupling on the electrophysiology and drug response of cardiomyocytes. A computational model of cardiomyocyte electrophysiology and ion handling (O'Hara, Virag, Varro, & Rudy, 2011) is coupled to a passive model of fibroblast electrophysiology to test the effects of three compounds that block cardiomyocyte ion channels. Results are compared to model results without fibroblast coupling to see how fibroblasts affect cardiomyocyte action potential duration at 90% repolarization (APD90) and the propensity for early afterdepolarizations (EADs). Simulation results show changes in cardiomyocyte APD90 with increasing concentration of three drugs that affect cardiac function (dofetilide, vardenafil and nebivolol) when no fibroblasts are coupled to the cardiomyocyte. Coupling fibroblasts to cardiomyocytes markedly shortens APD90. Moreover, increasing the number of fibroblasts can augment the shortening effect. Coupling cardiomyocytes and fibroblasts is predicted to decrease proarrhythmic susceptibility under dofetilide, vardenafil and nebivolol block. However, this result is sensitive to the parameters which define the electrophysiological function of the fibroblast. Fibroblast membrane capacitance and conductance (CFB and GFB) have less of an effect on APD90 than the fibroblast resting membrane potential (EFB). This study suggests that in both theoretical models and experimental tissue constructs that represent cardiac tissue, both cardiomyocytes and non-muscle cells should be considered when testing cardiac pharmacological agents. Copyright © 2017 Elsevier Inc. All rights reserved.
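The myocyte-fibroblast coupling itself is simple: a gap-junction current proportional to the potential difference, drawn from the myocyte and fed into a passive fibroblast membrane. The toy sketch below shows only that coupling; the myocyte kinetics are a crude placeholder, not the O'Hara-Rudy model, and all parameter values are invented for illustration.

```python
# Toy myocyte-fibroblast gap-junction coupling. NOT the O'Hara-Rudy
# model; E_FB, G_FB, C_FB and all other values are illustrative.
C_MYO, C_FB = 150.0, 6.3      # membrane capacitances (pF)
G_GAP = 3.0                   # gap-junction conductance (nS)
G_FB, E_FB = 4.0, -49.0       # fibroblast leak conductance (nS), rest (mV)
N_FB = 2                      # identical fibroblasts, tracked as one

def step(v_myo, v_fb, i_ion_myo, dt=0.01):
    """Advance both membrane potentials by one Euler step (ms, mV)."""
    i_gap = G_GAP * (v_myo - v_fb)                 # current into each fibroblast
    dv_myo = -(i_ion_myo + N_FB * i_gap) / C_MYO   # myocyte loses N_FB * i_gap
    dv_fb = (i_gap - G_FB * (v_fb - E_FB)) / C_FB  # passive fibroblast membrane
    return v_myo + dt * dv_myo, v_fb + dt * dv_fb

v_m, v_f = -85.0, -49.0
for _ in range(1000):                              # 10 ms of coupling
    v_m, v_f = step(v_m, v_f, i_ion_myo=0.0)
print(round(v_m, 1), round(v_f, 1))                # potentials pull together
```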
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice at a fixed position in the third dimension, from which position in the corresponding previous slice it has moved to its current position. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, whole-image transforms like wavelet decomposition, as well as intra-slice prediction methods, can particularly benefit from this approach. We show that with the discrete cosine as well as the Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach which automatically balances images to a homogeneous block. UltraMap v3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam imagery to deliver DSM and ortho products.
Palla, A; Gnecco, I; La Barbera, P
2017-04-15
In the framework of storm water management, Domestic Rainwater Harvesting (DRWH) systems have recently been recognized as source control solutions according to LID principles. In order to assess the impact of these systems on storm water runoff control, a simple methodological approach is proposed. The hydrologic-hydraulic modelling is undertaken using EPA SWMM; the DRWH is implemented in the model by using a storage unit linked to the building water supply system and to the drainage network. The proposed methodology has been implemented for a residential urban block located in Genoa (Italy). Continuous simulations are performed by using the high-resolution rainfall data series for the "do nothing" and DRWH scenarios. The latter includes the installation of a DRWH system for each building of the urban block. Referring to the test site, the peak and volume reduction rates evaluated over the 2125 rainfall events are equal to 33 and 26 percent on average, respectively (with maximum values of 65 percent for peak and 51 percent for volume). In general, the adopted methodology indicates that the hydrologic performance of the storm water drainage network equipped with DRWH systems is noticeable even for the design storm event (T = 10 years), and the rainfall depth seems to affect the hydrologic performance at least when the total depth exceeds 20 mm. Copyright © 2017 Elsevier Ltd. All rights reserved.
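At its core, the DRWH storage unit behaves as a simple mass-balance bucket: roof runoff fills the tank, excess spills to the drainage network, and household demand draws the tank down. The sketch below implements that bucket under a yield-after-spillage convention; all parameter values are illustrative and do not reproduce the paper's SWMM setup for Genoa.

```python
# Minimal daily mass balance for a rainwater harvesting tank
# ("yield-after-spillage" convention). All values illustrative.
ROOF_AREA = 100.0    # m^2
RUNOFF_COEF = 0.9    # fraction of rainfall captured
TANK_CAP = 5.0       # m^3
DEMAND = 0.3         # m^3/day (toilet flushing, irrigation, ...)

def simulate(rain_mm):
    storage, overflow_total, supplied_total = 0.0, 0.0, 0.0
    for rain in rain_mm:
        inflow = ROOF_AREA * RUNOFF_COEF * rain / 1000.0  # mm -> m^3
        storage += inflow
        overflow = max(0.0, storage - TANK_CAP)   # spills to drainage
        storage -= overflow
        supplied = min(DEMAND, storage)           # yield after spillage
        storage -= supplied
        overflow_total += overflow
        supplied_total += supplied
    return overflow_total, supplied_total

# Two dry weeks, then a 40 mm storm:
print(simulate([0.0] * 14 + [40.0]))
```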
NASA Astrophysics Data System (ADS)
Nouri-Borujerdi, Ali; Moazezi, Arash
2018-01-01
The current study investigates the conjugate heat transfer characteristics of laminar flow in a backward-facing step channel. All of the channel walls are insulated except the lower thick wall, which is held at a constant temperature. The upper wall includes an insulated obstacle perpendicular to the flow direction. The effects of obstacle height and location on the fluid flow and heat transfer are numerically explored for Reynolds numbers in the range 10 ≤ Re ≤ 300. The incompressible Navier-Stokes and thermal energy equations are solved simultaneously in the fluid region by an upwind compact finite difference scheme based on flux-difference splitting in conjunction with the artificial compressibility method. In the thick wall, the energy equation reduces to the Laplace equation. A multi-block approach is used to perform parallel computing to reduce the CPU time. Each block is modeled separately, sharing boundary conditions with its neighbors. The modeling program was written in FORTRAN with the OpenMP API. The results showed that the multi-block parallel computing method is a simple, robust scheme with high performance and high-order accuracy. Moreover, the results demonstrated that increasing the Reynolds number and obstacle height, as well as decreasing the horizontal distance between the obstacle and the step, improves the heat transfer.
Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting
NASA Astrophysics Data System (ADS)
Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein
2016-06-01
In this paper, a 3D watermarking algorithm in the spatial domain is presented with blind detection. In the proposed method, a negligible visual distortion is observed in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the recoverability of the watermark information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also robust against geometric transformation and vertex and face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.
Knight, Jason S.; Luo, Wei; O’Dell, Alexander A.; Yalavarthi, Srilakshmi; Zhao, Wenpu; Subramanian, Venkataraman; Guo, Chiao; Grenn, Robert C.; Thompson, Paul R.; Eitzman, Daniel T.; Kaplan, Mariana J.
2014-01-01
Rationale Neutrophil extracellular trap (NET) formation promotes vascular damage, thrombosis, and activation of interferon-α-producing plasmacytoid dendritic cells in diseased arteries. Peptidylarginine deiminase inhibition is a strategy that can decrease in vivo NET formation. Objective To test whether peptidylarginine deiminase inhibition, a novel approach to targeting arterial disease, can reduce vascular damage and inhibit innate immune responses in murine models of atherosclerosis. Methods and Results Apolipoprotein-E (Apoe)−/− mice demonstrated enhanced NET formation, developed autoantibodies to NETs, and expressed high levels of interferon-α in diseased arteries. Apoe−/− mice were treated for 11 weeks with daily injections of Cl-amidine, a peptidylarginine deiminase inhibitor. Peptidylarginine deiminase inhibition blocked NET formation, reduced atherosclerotic lesion area, and delayed time to carotid artery thrombosis in a photochemical injury model. Decreases in atherosclerosis burden were accompanied by reduced recruitment of netting neutrophils and macrophages to arteries, as well as by reduced arterial interferon-α expression. Conclusions Pharmacological interventions that block NET formation can reduce atherosclerosis burden and arterial thrombosis in murine systems. These results support a role for aberrant NET formation in the pathogenesis of atherosclerosis through modulation of innate immune responses. PMID:24425713
Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui
2018-01-01
Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children's block building. There was a significant main effect of social setting on structural features, with the best performance in the 5-member group, followed by individual building and then the 10-member group. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal design.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
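For orientation, the classical Warren-Root interflow term that this work generalizes is linear in the fracture/matrix pressure difference. The sketch below shows that linear form only; the paper's nonlinear generalization (accurate at both early and late times) is not reproduced, and all parameter values are illustrative SI numbers.

```python
def interflow(p_frac, p_matrix, sigma=1.0e-4, k_m=1e-18, mu=1e-3, vol=1.0):
    """Warren-Root-type fracture/matrix interflow for one matrix block:
    q = vol * sigma * (k_m / mu) * (p_frac - p_matrix).
    sigma: shape factor (1/m^2), k_m: matrix permeability (m^2),
    mu: viscosity (Pa s), vol: block volume (m^3). Returns m^3/s.
    Illustrative linear form, not the paper's nonlinear equation."""
    return vol * sigma * (k_m / mu) * (p_frac - p_matrix)

# Fracture at 0.2 MPa, matrix at 0.1 MPa:
print(interflow(2.0e5, 1.0e5))  # volumetric imbibition rate into the block
```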
Experimental demonstration of a measurement-based realisation of a quantum channel
NASA Astrophysics Data System (ADS)
McCutcheon, W.; McMillan, A.; Rarity, J. G.; Tame, M. S.
2018-03-01
We introduce and experimentally demonstrate a method for realising a quantum channel using the measurement-based model. Using a photonic setup and modifying the basis of single-qubit measurements on a four-qubit entangled cluster state, representative channels are realised for the case of a single qubit in the form of amplitude and phase damping channels. The experimental results match the theoretical model well, demonstrating the successful performance of the channels. We also show how other types of quantum channels can be realised using our approach. This work highlights the potential of the measurement-based model for realising quantum channels which may serve as building blocks for simulations of realistic open quantum systems.
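For reference, the two channels realised here have standard single-qubit Kraus representations. The sketch below applies those textbook forms numerically; it describes the channels themselves, not the authors' photonic cluster-state implementation.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply a channel: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def amplitude_damping(gamma):
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return [K0, K1]

def phase_damping(lam):
    K0 = np.array([[1, 0], [0, np.sqrt(1 - lam)]])
    K1 = np.array([[0, 0], [0, np.sqrt(lam)]])
    return [K0, K1]

# |1><1| decays toward |0><0| under amplitude damping:
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)
print(apply_channel(rho1, amplitude_damping(0.3)))
# Off-diagonal coherence shrinks under phase damping:
plus = np.full((2, 2), 0.5, dtype=complex)
print(apply_channel(plus, phase_damping(0.5)))
```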
High fidelity CFD-CSD aeroelastic analysis of slender bladed horizontal-axis wind turbine
NASA Astrophysics Data System (ADS)
Sayed, M.; Lutz, Th.; Krämer, E.; Shayegan, Sh.; Ghantasala, A.; Wüchner, R.; Bletzinger, K.-U.
2016-09-01
The aeroelastic response of large multi-megawatt slender horizontal-axis wind turbine blades is investigated by means of a time-accurate CFD-CSD coupling approach. A loose coupling approach is implemented and used to perform the simulations. The block-structured CFD solver FLOWer is utilized to obtain the aerodynamic blade loads based on the time-accurate solution of the unsteady Reynolds-averaged Navier-Stokes equations. The CSD solver Carat++ is applied to acquire the blade elastic deformations based on non-linear beam elements. In this contribution, the presented coupling approach is utilized to study the aeroelastic response of the generic DTU 10MW wind turbine. Moreover, the effect of the coupled results on the wind turbine performance is discussed. The results are compared to the aeroelastic response predicted by FLOWer coupled to the MBS tool SIMPACK as well as the response predicted by SIMPACK coupled to a Blade Element Momentum code for aerodynamic predictions. A comparative study among the different modelling approaches for this coupled problem is discussed to quantify the coupling effects of the structural models on the aeroelastic response.
Creation of system of computer-aided design for technological objects
NASA Astrophysics Data System (ADS)
Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.
2018-05-01
Due to competition in the market for process equipment, its production should be flexible, allowing retooling for various product configurations, raw materials and productivity levels depending on current market needs. This is not possible without CAD (computer-aided design). The formation of a CAD system begins with planning. Synthesis, analysis, evaluation and conversion operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of an oriented graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of CAD components for the solution of the task. The object-oriented approach allows us to consider the CAD system as an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation; the second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows creating a single model, which is stored in the database of the subject area. Each of the integration stages is implemented as a separate functional block. The transformation of the CAD model into the internal-representation model is realized by a block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained on the basis of the feature method from the theory of image recognition. The configuration of integrated components is divided into three consecutive steps: configuring tasks, components and interfaces. The configuration of the components is realized using the theory of "soft computing" and the Mamdani fuzzy inference algorithm.
Reynolds-averaged Navier-Stokes based ice accretion for aircraft wings
NASA Astrophysics Data System (ADS)
Lashkajani, Kazem Hasanzadeh
This thesis addresses one of the current issues in flight safety: increasing icing simulation capabilities for the prediction of complex 2D and 3D glaze ice shapes over aircraft surfaces. During the 1980s and 1990s, the field of aero-icing was established to support design and certification of aircraft flying in icing conditions. The multidisciplinary technologies used in such codes were: aerodynamics (panel method), droplet trajectory calculations (Lagrangian framework), a thermodynamic module (Messinger model) and a geometry module (ice accretion). These are embedded in a quasi-steady module to simulate the time-dependent ice accretion process (multi-step procedure). The objective of the present research is to upgrade the aerodynamic module from a Laplace solver to a Reynolds-Averaged Navier-Stokes (RANS) equations solver. The advantages are many. First, the physical model allows accounting for viscous effects in the aerodynamic module. Second, the solution of the aero-icing module directly provides the means for characterizing the aerodynamic effects of icing, such as loss of lift and increased drag. Third, the use of a finite volume approach to solving the Partial Differential Equations allows rigorous mesh and time convergence analysis. Finally, the approaches developed in 2D can be easily transposed to 3D problems. The research was performed in three major steps, each providing insights into the overall numerical approaches. The most important realization comes from the need to develop specific mesh generation algorithms to ensure feasible solutions in very complex multi-step aero-icing calculations. The contributions are presented in chronological order of their realization. First, a new framework for a RANS-based two-dimensional ice accretion code, CANICE2D-NS, is developed. A multi-block RANS code from the U. of Liverpool (named PMB) provides the aerodynamic field using the Spalart-Allmaras turbulence model. The ICEM-CFD commercial tool is used for the iced airfoil remeshing and field smoothing. The new coupling is fully automated and capable of multi-step ice accretion simulations via a quasi-steady approach. In addition, the framework allows for flow analysis and aerodynamic performance prediction of the iced airfoils. The convergence of the quasi-steady algorithm is verified and identifies the need for an order of magnitude increase in the number of multi-time steps in icing simulations to achieve solver-independent solutions. Second, a multi-block Navier-Stokes code, NSMB, is coupled with the CANICE2D icing framework. Attention is paid to the roughness implementation of the ONERA roughness model within the Spalart-Allmaras turbulence model, and to the convergence of the steady and quasi-steady iterative procedures. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases. The results of CANICE2D-NS show good agreement with experimental data both in terms of predicted ice shapes and in the aerodynamic analysis of predicted and experimental ice shapes. Third, an efficient single-block structured Navier-Stokes CFD code, NSCODE, is coupled with the CANICE2D-NS icing framework. Attention is paid to the roughness implementation of the Boeing model within the Spalart-Allmaras turbulence model, and to acceleration of the convergence of the steady and quasi-steady iterative procedures.
Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases, including code-to-code comparisons with the same framework coupled with the NSMB Navier-Stokes solver. The efficiency of the J-multigrid approach to solve the flow equations on complex iced geometries is demonstrated. Since it was noted in all these calculations that the ICEM-CFD grid generation package produced a number of issues, such as poor mesh quality and smoothing deficiencies (notably grid shocks), a fourth study proposes a new mesh generation algorithm. A PDE-based multi-block structured grid generation code, NSGRID, is developed for this purpose. The study includes the development of novel mesh generation algorithms over complex glaze ice shapes containing multi-curvature ice accretion geometries, such as single/double ice horns. The twofold approach tackles surface geometry discretization as well as field mesh generation. An adaptive curvilinear curvature control algorithm is constructed by solving a 1D elliptic PDE with periodic source terms. This method controls the arclength grid spacing so that high convex and concave curvature regions around ice horns are appropriately captured, and is shown to effectively treat the grid shock problem. Then, a novel blended method is developed by defining combinations of source terms with 2D elliptic equations. The source terms include two common control functions, Sorenson and Spekreijse, and an additional third source term to improve orthogonality. This blended method is shown to be very effective for improving grid quality metrics for complex glaze ice meshes with RANS resolution. The performance in terms of residual reduction per non-linear iteration of several solution algorithms (Point-Jacobi, Gauss-Seidel, ADI, Point and Line SOR) is discussed within the context of a full multigrid operator. Details are given on the various formulations used in the linearization process. It is shown that the performance of the solution algorithm depends on the type of control function used. The algorithms are validated on standard complex experimental ice shapes, demonstrating the applicability of the methods. Finally, the automated framework for RANS-based two-dimensional multi-step ice accretion, CANICE2D-NS, is developed, coupled with a multi-block Navier-Stokes CFD code, NSCODE2D, a multi-block elliptic grid generation code, NSGRID2D, and a multi-block Eulerian droplet solver, NSDROP2D (developed at Polytechnique Montreal). The framework allows Lagrangian and Eulerian droplet computations within a chimera approach treating multi-element geometries. The code was tested on public and confidential validation test cases including standard NATO cases. In addition, up to a 10 times speedup is observed in the mesh generation procedure by using the implicit line SOR and ADI smoothers within a multigrid procedure. The results demonstrate the benefits and robustness of the new framework in predicting ice shapes and aerodynamic performance parameters.
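The curvature-based arclength control idea can be illustrated with a much simpler recipe than the elliptic formulation used in NSGRID: define a point density that grows with local curvature, integrate it along the curve, and invert the mapping. The sketch below is that simplified stand-in, not the thesis algorithm; all parameters are illustrative.

```python
import numpy as np

def redistribute_by_curvature(x, y, n_out=200, alpha=5.0):
    """Redistribute points along a curve so that arclength spacing
    shrinks where curvature is high (e.g. around ice horns).
    A simplified stand-in for elliptic 1D spacing control."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    ds = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(ds)])
    # Density grows with curvature; integrating it gives a stretched
    # coordinate that is then sampled uniformly and mapped back.
    density = 1.0 + alpha * kappa / kappa.max()
    xi = np.concatenate(
        [[0.0], np.cumsum(0.5 * (density[1:] + density[:-1]) * ds)])
    s_new = np.interp(np.linspace(0.0, xi[-1], n_out), xi, s)
    return np.interp(s_new, s, x), np.interp(s_new, s, y)

# Illustrative: an airfoil-like curve with a sharp "horn" bump.
t = np.linspace(0.0, 2 * np.pi, 400)
x = np.cos(t)
y = 0.3 * np.sin(t) + 0.15 * np.exp(-80 * (t - 2.0) ** 2)
xr, yr = redistribute_by_curvature(x, y)
print(len(xr))  # points now cluster around the bump
```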
NASA Astrophysics Data System (ADS)
Liao, Fanxi; Wang, Qinyan; Chen, Nengsong; Santosh, M.; Xu, Yixian; Mustafa, Hassan Abdelsalam
2018-05-01
The role of the Tarim Block in the reconstruction of the Neoproterozoic supercontinent Rodinia remains contentious. Here we report a suite of high-Mg gabbroic dykes from the Yingfeng area in the northwestern Quanji Massif, which is considered a fragment of the Tarim Block in NW China. Magmatic zircons from these dykes yield a weighted mean 206Pb/238U age of 822.2 ± 5.3 Ma, recording the timing of their emplacement. The gabbros have high MgO (9.91-13.09 wt%), Mg numbers (69.89-75.73) and CaO (8.41-13.55 wt%), medium FeOt (8.50-9.67 wt%) and TiO2 (0.67-0.93 wt%), variable Al2O3 (13.04-16.07 wt%), and high Cr (346.14-675.25 ppm), but relatively low Ni (138.72-212.94 ppm), suggestive of derivation from a primary magma. The rocks display chondrite-normalized LREE patterns with weak fractionation but flat HREE patterns relative to those of N-MORB. Their primitive-mantle-normalized trace element patterns show positive Rb, Ba and U but negative Th, Nb, Ti and Zr anomalies, carrying characteristics of both mid-ocean ridge basalts and arc basalts. The εHf(t) values of the zircons from these rocks vary from +4.7 to +13.5 with depleted mantle model ages (TDM) of 1.23-0.85 Ga, the youngest value nearly approaching that of the coeval depleted mantle, suggesting significant addition of juvenile materials. Our data suggest that the strongly depleted basaltic magma was probably sourced from a depleted mantle source that had undergone metasomatism by subduction-related components in a back-arc setting. Accordingly, we postulate that a subduction-related tectonic regime possibly prevailed at ∼0.8 Ga along the southeastern margin of the Tarim Block. Combined with available information from the northern Tarim Block, we propose an oppositely verging, double-sided subduction model for coeval subduction of the oceanic crust beneath both the southern and northern margins of the Tarim Block during the early Neoproterozoic.
Distinct Element Modeling of the Large Block Test
NASA Astrophysics Data System (ADS)
Carlson, S. R.; Blair, S. C.; Wagoner, J. L.
2001-12-01
The Yucca Mountain Site Characterization Project is investigating Yucca Mountain, Nevada as a potential nuclear waste repository site. As part of this effort, the Large Block, a 3 m x 3 m x 4.5 m rectangular prism of Topopah Spring tuff, was excavated at Fran Ridge near Yucca Mountain. The Large Block was heated to a peak temperature of 145 °C along a horizontal plane 2.75 m below the top of the block over a period of about one year. Displacements were measured in three orthogonal directions with an array of six Multiple Point Borehole Extensometers (MPBX) and were numerically simulated in three dimensions with 3DEC, a distinct element code. The distinct element method was chosen to incorporate discrete fractures in the simulations. The model domain was extended 23 m below the ground surface and, in the subsurface, 23 m outward from each vertical face, so that fixed displacement boundary conditions could be applied well away from the heated portion of the block. A single continuum model and three distinct element models, incorporating six to twenty-eight mapped fractures, were tested. Two thermal expansion coefficients were tested for the six-fracture model: a higher value taken from laboratory measurements and a lower value from an earlier field test. The MPBX data show that the largest displacements occurred in the upper portion of the block despite the higher temperatures near the center. The continuum model was found to under-predict the MPBX displacements except in the east-west direction near the base of the block. The high thermal expansion model over-predicted the MPBX displacements except in the north-south direction near the top of the block. The highly fractured model under-predicted most of the MPBX displacements and poorly simulated the cool-down portion of the test. Although no model provided the single best fit to all of the MPBX data, the six- and seven-fracture models consistently provided good fits and in most cases showed much improvement over the other three models. Both provided particularly good fits to the east-west displacements in the upper portion of the block throughout the entire test. This exercise demonstrates that distinct element models can surpass continuum models in their ability to simulate fractured rock mass deformation, but care needs to be taken in the selection of fractures incorporated in the models. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
A comparison between block and smooth modeling in finite element simulations of tDCS
Indahlastari, Aprinda; Sadleir, Rosalind J.
2018-01-01
Current density distributions in five selected structures, namely, the anterior superior temporal gyrus (ASTG), hippocampus (HIP), inferior frontal gyrus (IFG), occipital lobe (OCC) and pre-central gyrus (PRC), were investigated as part of a comparison between electrostatic finite element models constructed directly from MRI-resolution data (block models) and smoothed tetrahedral finite element models (smooth models). Three electrode configurations were applied, mimicking different tDCS therapies. Smooth model simulations were found to require three times longer to complete. The percentage differences between mean and median current densities of each model type in arbitrarily chosen brain structures ranged from −33.33% to 48.08%. No clear relationship was found between structure volumes and current density differences between the two model types. Tissue regions near the electrodes demonstrated the smallest percentage differences between block and smooth models. Therefore, block models may be adequate to predict current density values in cortical regions presumed targeted by tDCS. PMID:26737023
Medial Versus Traditional Approach to US-guided TAP Blocks for Open Inguinal Hernia Repair
2012-04-30
Abdominal Muscles/Ultrasonography; Adult; Ambulatory Surgical Procedures; Anesthetics, Local/Administration & Dosage; Ropivacaine/Administration & Dosage; Ropivacaine/Analogs & Derivatives; Hernia, Inguinal/Surgery; Humans; Nerve Block/Methods; Pain Measurement/Methods; Pain, Postoperative/Prevention & Control; Ultrasonography, Interventional
Classical conformal blocks and accessory parameters from isomonodromic deformations
NASA Astrophysics Data System (ADS)
Lencsés, Máté; Novaes, Fábio
2018-04-01
Classical conformal blocks appear in the large central charge limit of 2D Virasoro conformal blocks. In the AdS3/CFT2 correspondence, they are related to classical bulk actions and used to calculate entanglement entropy and geodesic lengths. In this work, we discuss the identification of classical conformal blocks and the Painlevé VI action, showing how isomonodromic deformations naturally appear in this context. We recover the accessory parameter expansion of Heun's equation from the isomonodromic τ-function. We also discuss how the c = 1 expansion of the τ-function leads to a novel approach to calculate the 4-point classical conformal block.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
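DIY2 itself is a C++ library; as a language-neutral illustration of its core abstraction, the toy below decomposes a 1-D domain into blocks, iterates a local computation over each block, and exchanges ghost values between neighbors. It mimics the pattern only and bears no relation to DIY2's actual API.

```python
# Toy block-structured data parallelism in the spirit of DIY2:
# decompose, iterate over local blocks, exchange with neighbors.
import numpy as np

N_BLOCKS, BLOCK_SIZE = 4, 8
blocks = [np.random.rand(BLOCK_SIZE) for _ in range(N_BLOCKS)]
ghosts = [{"left": 0.0, "right": 0.0} for _ in range(N_BLOCKS)]

def exchange(blocks, ghosts):
    """Each block receives the edge values of its neighbors."""
    for i in range(N_BLOCKS):
        if i > 0:
            ghosts[i]["left"] = blocks[i - 1][-1]
        if i < N_BLOCKS - 1:
            ghosts[i]["right"] = blocks[i + 1][0]

def smooth(block, ghost):
    """Local computation: 3-point moving average using ghost cells."""
    padded = np.concatenate([[ghost["left"]], block, [ghost["right"]]])
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

for _ in range(10):              # a runtime could schedule these blocks
    exchange(blocks, ghosts)     # in or out of core, serial or threaded
    blocks = [smooth(b, g) for b, g in zip(blocks, ghosts)]
print(blocks[0][:3])
```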
Observing atmospheric blocking with GPS radio occultation - one decade of measurements
NASA Astrophysics Data System (ADS)
Brunner, Lukas; Steiner, Andrea
2017-04-01
Atmospheric blocking has received a lot of attention in recent years due to its impact on mid-latitude circulation and subsequently on weather extremes such as cold and warm spells. So far, blocking studies have been based mainly on re-analysis data or model output. However, it has been shown that blocking frequency exhibits considerable inter-model spread in current climate models. Here we use one decade (2006 to 2016) of satellite-based observations from GPS radio occultation (RO) to analyze blocking in RO data, building on work by Brunner et al. (2016). Daily fields on a 2.5°×2.5° longitude-latitude grid are calculated by applying an adequate gridding strategy to the RO measurements. For blocking detection we use a standard blocking detection algorithm based on 500 hPa geopotential height (GPH) gradients. We investigate vertically resolved atmospheric variables such as GPH, temperature, and water vapor before, during, and after blocking events to increase process understanding. Moreover, utilizing the global coverage of the RO data set, we investigate global blocking frequencies. The main blocking regions in the northern and southern hemisphere are identified and the (vertical) atmospheric structure linked to blocking events is compared. Finally, an inter-comparison of results from RO data to different re-analyses, such as ERA-Interim, MERRA-2, and JRA-55, is presented. Brunner, L., A. K. Steiner, B. Scherllin-Pirscher, and M. W. Jury (2016): Exploring atmospheric blocking with GPS radio occultation observations. Atmos. Chem. Phys., 16, 4593-4604, doi:10.5194/acp-16-4593-2016.
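Standard GPH-gradient blocking detection typically follows the Tibaldi-Molteni idea: blocking is flagged where the meridional 500 hPa height gradient reverses at a central latitude while a strong gradient persists to the north. The sketch below assumes that formulation; the latitudes and the −10 m/deg threshold are commonly used illustrative values, not necessarily those of this study.

```python
import numpy as np

def blocked_longitudes(z500, lats, lat0=60.0, dlat=20.0):
    """Instantaneous blocking flags per longitude from 500 hPa
    geopotential height via a Tibaldi-Molteni-type gradient reversal.

    z500: array (n_lat, n_lon) of GPH in metres on a regular grid.
    """
    i0 = np.argmin(np.abs(lats - lat0))
    i_s = np.argmin(np.abs(lats - (lat0 - dlat)))
    i_n = np.argmin(np.abs(lats - (lat0 + dlat)))
    ghgs = (z500[i0] - z500[i_s]) / dlat   # southern gradient, m/deg
    ghgn = (z500[i_n] - z500[i0]) / dlat   # northern gradient, m/deg
    return (ghgs > 0.0) & (ghgn < -10.0)   # reversal + westerlies north

# A zonally symmetric state with height falling poleward is unblocked:
lats = np.arange(30.0, 81.0, 2.5)
lons = np.arange(0.0, 360.0, 2.5)
z500 = 5500.0 - 8.0 * (lats[:, None] - 55.0) + np.zeros(lons.size)
print(blocked_longitudes(z500, lats).any())   # -> False
```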
Kim, Kris S; Gunari, Nikhil; MacNeil, Drew; Finlay, John; Callow, Maureen; Callow, James; Walker, Gilbert C
2016-08-10
The ability to fabricate nanostructured films by exploiting the phenomenon of microphase separation has made block copolymers an invaluable tool for a wide array of coating applications. Standard approaches to engineering nanodomains commonly involve the application of organic solvents, either through dissolution or annealing protocols, resulting in the release of volatile organic compounds (VOCs). In this paper, an aqueous-based method of fabricating low-VOC nanostructured block copolymer films is presented. The reported procedure allows for the phase transfer of the water-insoluble triblock copolymer poly(styrene-block-2-vinylpyridine-block-ethylene oxide) (PS-b-P2VP-b-PEO) from a water-immiscible phase to an aqueous environment with the assistance of a diblock copolymeric phase transfer agent, poly(styrene-block-ethylene oxide) (PS-b-PEO). Phase transfer into the aqueous phase results in self-assembly of PS-b-P2VP-b-PEO into core-shell-corona micelles, which are characterized by dynamic light scattering techniques. The films that result from coating the micellar solution onto Si/SiO2 surfaces exhibit nanoscale features that disrupt the ability of a model foulant, a zoospore of Ulva linza, to settle. The multilayered architecture consists of a pH-responsive P2VP "shell" which can be stimulated to control the size of these features. The ability of these nanostructured thin films to resist protein adsorption and serve as potential marine antifouling coatings is supported through atomic force microscopy (AFM) and analysis of the settlement of Ulva linza zoospores. Field trials of the surfaces in a natural environment show the inhibition of macrofoulants for 1 month.
NASA Astrophysics Data System (ADS)
Wallace, Laura M.; Beavan, John; McCaffrey, Robert; Berryman, Kelvin; Denys, Paul
2007-01-01
The landmass of New Zealand exists as a consequence of transpressional collision between the Australian and Pacific plates, providing an excellent opportunity to quantify the kinematics of deformation at this type of tectonic boundary. We interpret GPS, geological and seismological data describing the active deformation in the South Island, New Zealand using an elastic, rotating block approach that automatically balances the Pacific/Australia relative plate motion budget. The data in New Zealand are fit to within uncertainty when inverted simultaneously for the angular velocities of rotating tectonic blocks and the degree of coupling on faults bounding the blocks. We find that most of the plate motion budget has been accounted for in previous geological studies, although we suggest that the Porter's Pass/Amberley fault zone in North Canterbury, and a zone of faults in the foothills of the Southern Alps, may have slip rates about twice the geological estimates. Up to 5 mm/yr of active deformation on faults distributed within the Southern Alps <100 km to the east of the Alpine Fault is possible. The role of tectonic block rotations in partitioning plate boundary deformation is less pronounced in the South Island compared to the North Island. Vertical-axis rotation rates of tectonic blocks in the South Island are similar to that of the Pacific Plate, suggesting that edge forces dominate the block kinematics there. The southward-migrating Chatham Rise exerts a major influence on the evolution of the New Zealand plate boundary; we discuss a model for the development of the Marlborough fault system and Hikurangi subduction zone in the context of this migration.
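In a rotating-block model, the interseismic velocity of any point on a block is set by the block's angular velocity (Euler) vector, v = ω × r, plus elastic effects near coupled faults. The sketch below evaluates only the rigid-rotation part; the pole and rotation rate are invented for illustration and are not the paper's estimates.

```python
import numpy as np

R_EARTH = 6371e3  # m

def surface_velocity(lat_deg, lon_deg, omega):
    """Rigid-block velocity v = omega x r at a surface point.
    omega: angular velocity vector in rad/yr, geocentric Cartesian.
    Returns (east, north) components in mm/yr."""
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
    v = np.cross(omega, r)                       # m/yr, Cartesian
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon),
                      -np.sin(lat) * np.sin(lon), np.cos(lat)])
    return 1e3 * v @ east, 1e3 * v @ north       # mm/yr

# Invented pole: ~0.65 deg/Myr about an arbitrary axis (illustrative
# only, not a published Pacific/Australia angular velocity).
omega = np.radians(0.65e-6) * np.array([0.3, -0.5, 0.8])
print(surface_velocity(-43.5, 170.0, omega))     # central South Island
```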
Sousa, A.M.; Ashmawi, H.A.; Costa, L.S.; Posso, I.P.; Slullitel, A.
2011-01-01
Local anesthetic efficacy of tramadol has been reported following intradermal application. Our aim was to investigate the effect of perineural tramadol as the sole analgesic in two pain models. Male Wistar rats (280-380 g; N = 5/group) were used in these experiments. A neurostimulation-guided sciatic nerve block was performed, and 2% lidocaine or tramadol (1.25 and 5 mg) was perineurally injected in two different animal pain models. In the flinching behavior test, the number of flinches was evaluated, and in the plantar incision model, mechanical and heat thresholds were measured. Motor effects of lidocaine and tramadol were quantified, and a motor block score was derived. Tramadol, 1.25 mg, completely blocked the first and reduced the second phase of the flinching behavior test. In the plantar incision model, tramadol (1.25 mg) increased both paw withdrawal latency in response to radiant heat (8.3 ± 1.1, 12.7 ± 1.8, 8.4 ± 0.8, and 11.1 ± 3.3 s) and mechanical threshold in response to von Frey filaments (459 ± 82.8, 447.5 ± 91.7, 320.1 ± 120, 126.43 ± 92.8 mN) at 5, 15, 30, and 60 min, respectively. Sham block or contralateral sciatic nerve block did not differ from perineural saline injection throughout the study in either model. The effect of tramadol was not antagonized by intraperitoneal naloxone. High-dose tramadol (5 mg) blocked motor function as effectively as 2% lidocaine. In conclusion, tramadol blocks nociception and motor function in vivo similarly to local anesthetics. PMID:22183244
Inhibitory control in mind and brain 2.0: Blocked-input models of saccadic countermanding
Logan, Gordon D.; Yamaguchi, Motonori; Schall, Jeffrey D.; Palmeri, Thomas J.
2015-01-01
The interactive race model of saccadic countermanding assumes that response inhibition results from an interaction between a go unit, identified with gaze-shifting neurons, and a stop unit, identified with gaze-holding neurons, in which activation of the stop unit inhibits the growth of activation in the go unit to prevent it from reaching threshold. The interactive race model accounts for behavioral data and predicts physiological data in monkeys performing the stop-signal task. We propose an alternative model that assumes that response inhibition results from blocking the input to the go unit. We show that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately. We extend the models to address the steady-state fixation period before the go stimulus is presented and find that the blocked-input model fits better than the interactive race model. We consider a model in which fixation activity is boosted when a stop signal occurs and find that it fits as well as the blocked-input model but predicts very high steady-state fixation activity after the response is inhibited. We discuss the alternative linking propositions that connect computational models to neural mechanisms, the lessons to be learned from model mimicry, and generalization from countermanding saccades to countermanding other kinds of responses. PMID:25706403
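To make the contrast concrete, the toy simulation below implements one go accumulator whose drive is either suppressed by a stop unit (interactive race) or simply cut off (blocked input) at the stop-signal delay. All parameters are invented for illustration; this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(ssd, mechanism, thresh=100.0, go_rate=1.0, stop_inhib=4.0,
          noise=2.0, dt=1.0, tmax=600):
    """One stop-signal trial. Returns True if the response escaped
    inhibition (go activation reached threshold). Times in ms."""
    go = 0.0
    for t in np.arange(0.0, tmax, dt):
        drive = go_rate
        if t >= ssd:
            if mechanism == "interactive":   # stop unit inhibits growth
                drive = go_rate - stop_inhib
            elif mechanism == "blocked":     # input to go unit is cut
                drive = 0.0
        go = max(0.0, go + dt * drive + np.sqrt(dt) * noise * rng.normal())
        if go >= thresh:
            return True
    return False

for mech in ("interactive", "blocked"):
    p_resp = np.mean([trial(ssd=80.0, mechanism=mech) for _ in range(2000)])
    print(mech, round(p_resp, 2))   # probability of escaping inhibition
```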
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed to reduce the cost of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
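The OLS variant of NIPC can be illustrated in a few lines: sample the input, evaluate the expensive model at those samples, fit a polynomial expansion by least squares, and then sample the cheap surrogate millions of times. The sketch below does this for a toy one-dimensional model with a standard-normal input and probabilists' Hermite polynomials; it does not use UTSim2 or the study's flaw geometry.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy expensive model f(x), stand-in for an ultrasonic simulation.
def f(x):
    return np.exp(0.3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
order, n_train = 6, 50
x_train = rng.standard_normal(n_train)

# OLS fit of the chaos coefficients: design matrix of He_k(x).
A = hermevander(x_train, order)
coef, *_ = np.linalg.lstsq(A, f(x_train), rcond=None)

# The surrogate is now cheap: propagate 10^6 random inputs through it
# to estimate the output statistics needed for POD curves.
x_mc = rng.standard_normal(1_000_000)
y_sur = hermevander(x_mc, order) @ coef
print("surrogate mean/std:", y_sur.mean(), y_sur.std())
```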
Decision Criterion Dynamics in Animals Performing an Auditory Detection Task
Mill, Robert W.; Alves-Pinto, Ana; Sumner, Christian J.
2014-01-01
Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding “yes” during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory). PMID:25485733
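The criterion update policy described here is easy to simulate: after each trial the criterion decays toward a resting value and shifts by a fixed amount following an error. The sketch below is such a simulation with invented parameters; the equilibrium spread of the criterion illustrates how block-level bias can emerge from single-trial shifts.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n_trials=5000, d_signal=1.0, p_signal=0.5,
             c_rest=0.5, decay=0.9, shift_miss=-0.2, shift_fa=0.2):
    """Criterion dynamics: each trial the criterion decays toward its
    resting value, then shifts by a fixed amount after an error
    (miss -> more liberal, false alarm -> more conservative)."""
    c, history = c_rest, []
    for _ in range(n_trials):
        c = c_rest + decay * (c - c_rest)          # decay toward rest
        signal = rng.random() < p_signal
        x = rng.normal(d_signal if signal else 0.0, 1.0)
        said_yes = x > c
        if signal and not said_yes:                # miss
            c += shift_miss
        elif not signal and said_yes:              # false alarm
            c += shift_fa
        history.append(c)
    return np.array(history)

h = simulate()
print("criterion mean/std at equilibrium:", h[1000:].mean(), h[1000:].std())
```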
Air pollution and environmental justice in the Great Lakes region
NASA Astrophysics Data System (ADS)
Comer, Bryan
While it is true that air quality has steadily improved in the Great Lakes region, air pollution remains at unhealthy concentrations in many areas. Research suggests that vulnerable and susceptible groups in society -- e.g., minorities, the poor, children, and the poorly educated -- are often disproportionately impacted by exposure to environmental hazards, including air pollution. This dissertation explores the relationship between exposure to ambient air pollution (interpolated concentrations of fine particulate matter, PM2.5) and sociodemographic factors (race, housing value, housing status, education, age, and population density) at the Census block-group level in the Great Lakes region of the United States. A relatively novel approach to quantitative environmental justice analysis, geographically weighted regression (GWR), is compared with a simpler approach: ordinary least squares (OLS) regression. While OLS creates one global model to describe the relationship between air pollution exposure and sociodemographic factors, GWR creates many local models (one at each Census block group) that account for local variations in this relationship by allowing the values of the regression coefficients to vary over space, overcoming OLS's assumptions of homogeneity and spatial independence. Results suggest that GWR can elucidate patterns of potential environmental injustice that OLS models may miss. In fact, GWR results show that the relationship between exposure to ambient air pollution and sociodemographic characteristics is non-stationary and can vary geographically and temporally throughout the Great Lakes region. This suggests that regulators may need to address environmental justice issues at the neighborhood level, while understanding that the severity of environmental injustices can change throughout the year.
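The OLS-versus-GWR contrast can be seen in a toy calibration: GWR is just weighted least squares repeated at every location, with weights from a distance kernel. The sketch below uses synthetic block groups whose pollution-predictor slope drifts eastward, a fixed Gaussian bandwidth, and invented variable names; it does not reproduce the dissertation's data or bandwidth selection.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic block groups: coordinates, a predictor, and a response whose
# slope varies over space (exactly what a single global OLS fit misses).
n = 400
xy = rng.uniform(0, 100, size=(n, 2))
pct_minority = rng.uniform(0, 1, n)
true_slope = 2.0 + 0.03 * xy[:, 0]                # slope drifts eastward
pm25 = 8.0 + true_slope * pct_minority + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), pct_minority])

# Global OLS: one slope for the whole region.
beta_ols = np.linalg.lstsq(X, pm25, rcond=None)[0]

def gwr_coef(site, bandwidth=15.0):
    """Weighted least squares at one site with a Gaussian kernel."""
    d = np.linalg.norm(xy - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ pm25)

west = gwr_coef(np.array([10.0, 50.0]))
east = gwr_coef(np.array([90.0, 50.0]))
print("OLS slope:", round(beta_ols[1], 2))
print("GWR slope west/east:", round(west[1], 2), round(east[1], 2))
```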
Cracking the Stoping Paradigm: Field and Modeling Constraints From the Sierra Nevada Batholith
NASA Astrophysics Data System (ADS)
Pignotta, G. S.; Paterson, S. R.; Okaya, D.
2001-12-01
The significance of stoping during pluton emplacement remains a controversial issue. This mechanism has fallen out of favor recently, largely due to the apparent lack of stoped blocks preserved in plutons. Our field studies of plutons in a variety of tectonic settings clearly show evidence of stoping. This is not surprising, since stoping should be favored when large thermal gradients exist at magma-host rock boundaries. Preservation of stoped blocks is uncommon, however, since the rate at which blocks sink is much greater than the rate at which magmas crystallize (Paterson and Okaya, 1999). Thus, only during final crystallization, when magmatic yield strength is high, should stoped blocks be trapped. The Mitchell Peak granodiorite, Sierra Nevada, is a rare example of a pluton that preserves abundant stoped blocks: the youngest intrusive phase preserves >25% stoped blocks, and locally, near the margins, >50% of the exposed surface area is stoped blocks. Thus stoping is an important process here, at least during the final stages of emplacement. This area is ideal for studying the mechanisms of block formation and disintegration using both field and modeling techniques, because of the abundant stoped blocks, excellent exposure, and the nature of the host rock. The host rock is a slightly older, coarse-grained granodioritic intrusion that preserves extremely weak to no magmatic fabric, and thus can be treated as a "homogeneous and isotropic" medium for the purposes of thermal-mechanical modeling. Detailed mapping indicates that preserved stoped blocks range in size from hundreds of metres across down to xenocrystic feldspars, and there is abundant evidence for mechanical disintegration of blocks. Thermal-mechanical models, using detailed maps from the Mitchell Peak area, further support the field observations. The rates at which thermal stresses develop and exceed host rock tensile strength are extremely rapid (hours to days) compared to the onset of crystal-plastic flow and/or melting. The calculated pattern of thermal stresses (i.e., high magnitudes at block corners) strongly supports rapid mechanical breakdown of stoped blocks. We suggest that rapid disintegration, coupled with rapid rates of block sinking, explains the lack of observable blocks in plutons and is an effective way to contaminate magmas thermally, mechanically and chemically. Furthermore, the lack of observable stoped blocks in plutons should not be used as evidence that stoping did not occur.
A User-Friendly DNA Modeling Software for the Interpretation of Cryo-Electron Microscopy Data.
Larivière, Damien; Galindo-Murillo, Rodrigo; Fourmentin, Eric; Hornus, Samuel; Lévy, Bruno; Papillon, Julie; Ménétret, Jean-François; Lamour, Valérie
2017-01-01
The structural modeling of a macromolecular machine resembles a "Lego" approach, one that is challenged when building blocks, like proteins imported from the Protein Data Bank, are to be assembled with an element adopting a serpentine shape, such as a DNA template. DNA must then be built ex nihilo, but existing modeling approaches are either not user-friendly or long and tedious. In this method chapter we show how to use GraphiteLifeExplorer, a software tool with a simple graphical user interface that enables the sketching of free forms of DNA, of any length, at the atomic scale, as fast as drawing a line on a sheet of paper. We took as an example the nucleoprotein complex of DNA gyrase, a bacterial topoisomerase whose structure has been determined using cryo-electron microscopy (cryo-EM). Using GraphiteLifeExplorer, we could model in one go a 155 bp long, twisted DNA duplex that wraps around DNA gyrase in the cryo-EM map, improving the quality and interpretation of the final model compared to the initially published data.
Novel modes and adaptive block scanning order for intra prediction in AV1
NASA Astrophysics Data System (ADS)
Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz
2017-09-01
The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly, and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is Intra-Prediction, which determines the codec's compression efficiency. Intra prediction enables significant reduction of the Intra-Frame (I frame) size and, therefore, contributes to efficient exploitation of bandwidth. In this presentation, we propose new Intra-Prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning order method that maximizes spatial correlation in order to reduce prediction error; and (2) new Intra-Prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, and thus the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks according to the criterion of first predicting blocks with the maximum number of surrounding, already inter-predicted blocks. Using the modified scanning order method and the new modes reduced the MSE by up to five times when compared to the conventional TM mode / raster scan, and up to two times when compared to the conventional CALIC mode / raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with Inter-Prediction, which in turn impacts the efficiency of the new scanning method). For the same cases, the PSNR was shown to improve by up to 7.4 dB and up to 4 dB, respectively. The new modes yielded a 5% improvement in BD-Rate over traditionally used modes when run on key frames (K-frames), which is expected to yield a 1% overall improvement.
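A minimal version of the adaptive scan is a greedy loop: at each step, pick the not-yet-coded block with the most already-coded neighbors, so that prediction always has the richest available context. The sketch below uses a generic "already coded" set seeded with one block as a stand-in for the paper's criterion of counting surrounding inter-predicted blocks; the tie-breaking and seeding are illustrative.

```python
def adaptive_scan_order(rows, cols, seeded=((0, 0),)):
    """Greedy adaptive block scan: at each step, select the block with
    the largest number of already-coded 8-connected neighbors (more
    coded neighbors -> better prediction context). Ties broken in
    raster order. Purely illustrative."""
    coded = set(seeded)
    order = list(seeded)
    remaining = {(r, c) for r in range(rows) for c in range(cols)} - coded
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    while remaining:
        best = max(sorted(remaining),
                   key=lambda b: sum((b[0] + dr, b[1] + dc) in coded
                                     for dr, dc in nbrs))
        coded.add(best)
        order.append(best)
        remaining.remove(best)
    return order

print(adaptive_scan_order(3, 3)[:5])  # scan hugs the seeded corner first
```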
Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.
2016-01-01
Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
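A minimal sketch of the kind of compartment-based, volume-excluding model the abstract describes: a Gillespie simulation on a uniform lattice in which a jump attempt into a compartment is blocked with a probability that grows with its occupancy. The linear blocking rule and all parameter values are assumptions for illustration; the paper's hybrid fine/coarse interface is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_crowded_walk(n_comp=20, cap=10, t_end=5.0, d=1.0):
    """Gillespie simulation of compartment-based diffusion with crowding:
    a jump attempt into compartment j succeeds with probability 1 - u[j]/cap
    (a common linear blocking rule; parameter values are illustrative)."""
    u = np.zeros(n_comp, dtype=int)
    u[:3] = cap                       # start with the left end fully crowded
    t = 0.0
    while t < t_end:
        total = d * u.sum() * 2.0     # every particle attempts left or right
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        # choose a source compartment weighted by occupancy, then a direction
        i = rng.choice(n_comp, p=u / u.sum())
        j = i + rng.choice([-1, 1])
        if 0 <= j < n_comp and rng.random() < 1.0 - u[j] / cap:
            u[i] -= 1                 # attempt not blocked: perform the jump
            u[j] += 1
    return u

print(simulate_crowded_walk())        # occupancy profile spreading to the right
```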
Multiphase complete exchange on a circuit switched hypercube
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1991-01-01
On a distributed memory parallel computer, the complete exchange (all-to-all personalized) communication pattern requires each of n processors to send a different block of data to each of the remaining n - 1 processors. This pattern is at the heart of many important algorithms, most notably the matrix transpose. For a circuit-switched hypercube of dimension d (n = 2^d), two algorithms for achieving complete exchange are known: (1) the Standard Exchange approach, which employs d transmissions of 2^(d-1) blocks each and is useful for small block sizes, and (2) the Optimal Circuit Switched algorithm, which employs 2^d - 1 transmissions of 1 block each and is best for large block sizes. A unified multiphase algorithm is described that includes these two algorithms as special cases. The complete exchange on a hypercube of dimension d and block size m is achieved by carrying out k partial exchanges on subcubes of dimension d_i, where sum_{i=1..k} d_i = d, with effective block size m_i = m*2^(d - d_i). When k = d and all d_i = 1, this corresponds to algorithm (1) above. For k = 1 and d_1 = d, it becomes the circuit-switched algorithm (2). Varying the subcube dimensions d_i changes the effective block size and permits a compromise between the data permutation and block transmission overhead of (1) and the startup overhead of (2). For a hypercube of dimension d, the number of possible combinations of subcubes is p(d), the number of partitions of the integer d. This is an exponential but very slowly growing function, and it is feasible to search over these partitions for the best combination for a given message size. The approach was analyzed for, and implemented on, the Intel iPSC-860 circuit-switched hypercube. Measurements show good agreement with predictions and demonstrate that the multiphase approach can substantially improve performance for block sizes in the 0 to 160 byte range. This range, which corresponds to 0 to 40 floating point numbers per processor, is commonly encountered in practical numeric applications. The multiphase technique is applicable to all circuit-switched hypercubes that use the common e-cube routing strategy.
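The search over partitions that the abstract describes is cheap to sketch: enumerate the p(d) partitions of d and score each with a two-term cost (per-transmission startup plus per-byte transfer). The cost coefficients below are placeholders, not measured iPSC-860 constants.

```python
def partitions(d, max_part=None):
    """Yield all partitions of the integer d as tuples; there are p(d) of them."""
    max_part = d if max_part is None else max_part
    if d == 0:
        yield ()
        return
    for k in range(min(d, max_part), 0, -1):
        for rest in partitions(d - k, k):
            yield (k,) + rest

def multiphase_cost(parts, d, m, t_startup=1.0, t_byte=0.01):
    """Toy cost model: phase i performs 2**d_i - 1 exchanges of effective
    block size m * 2**(d - d_i) bytes. Coefficients are illustrative
    placeholders, not measured machine constants."""
    return sum((2**di - 1) * (t_startup + t_byte * m * 2**(d - di))
               for di in parts)

d, m = 4, 32
best = min(partitions(d), key=lambda p: multiphase_cost(p, d, m))
print(best, multiphase_cost(best, d, m))   # an intermediate partition wins here
```

With these placeholder coefficients the partition (2, 2) beats both the pure standard exchange (1, 1, 1, 1) and the pure circuit-switched case (4,), illustrating the compromise the abstract describes.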
Strategies for Teaching in a Block-of-Time Schedule.
ERIC Educational Resources Information Center
Hackmann, Donald G.; Schmitt, Donna M.
1997-01-01
Offers suggestions for developing creative instructional approaches in time-blocked classes. Teachers should continuously engage students in active learning, include group activities to encourage student participation, incorporate activities addressing multiple intelligences, use creative thinking activities, move outside the classroom, employ…
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
NASA Astrophysics Data System (ADS)
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved by as much as one order of magnitude.
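A generic sketch of the iteration named above: block Gauss-Seidel with an overrelaxation factor omega applied to a system partitioned into sub-blocks (omega = 1 recovers plain block Gauss-Seidel). This is the abstract mathematical scheme only, not the aquifer finite element model.

```python
import numpy as np

def block_sor(A_blocks, b_blocks, omega=1.5, tol=1e-10, max_iter=500):
    """Block Gauss-Seidel with overrelaxation for a system partitioned into
    n x n blocks: A_blocks[i][j] are 2D arrays, b_blocks[i] are 1D arrays."""
    n = len(b_blocks)
    x = [np.zeros_like(b) for b in b_blocks]
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            # residual of block i using the newest values of all other blocks
            r = b_blocks[i] - sum(A_blocks[i][j] @ x[j]
                                  for j in range(n) if j != i)
            x_new = np.linalg.solve(A_blocks[i][i], r)   # per-block solve
            step = omega * (x_new - x[i])                # overrelaxed update
            x[i] = x[i] + step
            delta = max(delta, np.max(np.abs(step)))
        if delta < tol:
            break
    return x

# tiny 2-block demo on a diagonally dominant system
A11, A12 = np.array([[4.0]]), np.array([[1.0]])
A21, A22 = np.array([[1.0]]), np.array([[3.0]])
x = block_sor([[A11, A12], [A21, A22]], [np.array([1.0]), np.array([2.0])])
print(x)   # converges to [1/11, 7/11]
```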
Identifying the most hazardous synoptic meteorological conditions for Winter UK PM10 exceedences
NASA Astrophysics Data System (ADS)
Webber, Chris; Dacre, Helen; Collins, Bill; Masato, Giacomo
2016-04-01
Summary We investigate the relationship between synoptic-scale meteorological variability and local-scale pollution concentrations within the UK. Synoptic conditions representative of atmospheric blocking are associated with significant increases in UK PM10 concentration ([PM10]) and an increased probability of exceeding harmful [PM10] limits. Once these relationships had been diagnosed, the Met Office Unified Model (UM) was used to replicate them using idealised source regions of PM10. This helped to determine the PM10 source regions most influential during UK PM10 exceedance events and to test whether the model was capable of capturing the relationships between UK PM10 and atmospheric blocking. Finally, a time-slice simulation for 2050-2060 addressed the question of whether PM10 exceedance events are more likely to occur in a changing climate. Introduction Atmospheric blocking events are well understood to lead to conditions conducive to pollution events within the UK. The literature shows that synoptic conditions able to deflect the Northwest Atlantic storm track away from the UK often lead to the highest UK pollution concentrations. Rossby wave breaking (RWB) has been identified as a mechanism that results in atmospheric blocking, and its relationship with UK [PM10] is explored using metrics designed in Masato et al., 2013. Climate simulations with the Met Office UM enable these relationships between RWB and PM10 to be reproduced within the model. Subsequently, the frequency of events that lead to hazardous [PM10] in a future climate can be determined within a climate simulation. An understanding of the impact meteorology has on UK [PM10] in a changing climate will help inform policy makers about the importance of limiting PM10 emissions to ensure safe air quality in the future. Methodology and Results Three blocking metrics were used to subset RWB into four categories. All four RWB categories were shown to increase UK [PM10] and to increase the probability of exceeding a UK [PM10] threshold when they occurred within constrained regions. Further analysis highlighted that omega block events lead to the greatest probability of exceeding hazardous UK [PM10] limits: these events facilitate the advection of European PM10 while also providing stagnant conditions over the UK that favour PM10 accumulation. The Met Office UM, nudged to ERA-Interim reanalysis wind and temperature fields, was used to replicate the relationships found using observed UK [PM10]. Inert tracers were implemented in the model to represent UK PM10 source regions throughout Europe. The modelled tracers correlated well with observed [PM10], and Figure 1 highlights the correlations between an RWB metric and observed (a) and modelled (b) [PM10]. A further free-running model simulation highlighted a deficiency of the Met Office UM in capturing RWB frequency, with a reduction over the Northwest Atlantic/European region. A final time-slice simulation was undertaken for the period 2050-2060 using Representative Concentration Pathway 8.5, which attempted to determine the change in frequency of UK PM10 exceedance events due to changing meteorology in a future climate. Conclusions RWB has been shown to increase UK [PM10] and to lead to greater probabilities of exceeding a harmful [PM10] threshold.
Omega block events have been determined to be the most hazardous RWB subset, owing to a combination of European advection and UK stagnation. Simulations with the Met Office UM replicated the relationships seen between observed UK [PM10] and RWB using inert tracers. Finally, time-slice simulations were undertaken to determine the change in frequency of UK [PM10] exceedance events in a changing climate. References Masato, G., Hoskins, B. J., Woollings, T., 2013: Wave-breaking Characteristics of Northern Hemisphere Winter Blocking: A Two-Dimensional Approach. J. Climate, 26, 4535-4549.
Profile-Based LC-MS Data Alignment—A Bayesian Approach
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2014-01-01
A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872
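The block update that BAM applies to the mapping-function coefficients can be illustrated generically: a Metropolis-Hastings sampler that proposes and accepts an entire block of parameters as a unit. The Gaussian random-walk proposal and the toy bivariate-normal target below are assumptions for illustration, not BAM's actual proposal or posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_mh(log_post, theta, blocks, scale=0.1, n_iter=5000):
    """Block Metropolis-Hastings: propose a joint Gaussian random-walk move
    for one block of parameters at a time and accept or reject it as a unit.
    `blocks` is a list of index arrays; log_post is any log-posterior."""
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        for idx in blocks:
            prop = theta.copy()
            prop[idx] += scale * rng.standard_normal(len(idx))
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept the whole block
                theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# toy example: strongly correlated bivariate normal, one 2-parameter block,
# where joint (block) moves mix far better than one-at-a-time updates
cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
log_post = lambda t: -0.5 * t @ cov_inv @ t
draws = block_mh(log_post, np.zeros(2), [np.array([0, 1])])
print(draws.mean(axis=0), np.cov(draws.T))
```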
Control of muscle relaxation during anesthesia: a novel approach for clinical routine.
Stadler, Konrad S; Schumacher, Peter M; Hirter, Sibylle; Leibundgut, Daniel; Bouillon, Thomas W; Glattfelder, Adolf H; Zbinden, Alex M
2006-03-01
During general anesthesia, drugs are administered to provide hypnosis, analgesia, and skeletal muscle relaxation. In this paper, the main components of a newly developed controller for skeletal muscle relaxation are described. Muscle relaxation is controlled by administration of neuromuscular blocking agents. The degree of relaxation is assessed by supramaximal train-of-four stimulation of the ulnar nerve and measurement of the electromyogram response of the adductor pollicis muscle. For closed-loop control purposes, a physiologically based pharmacokinetic and pharmacodynamic model of the neuromuscular blocking agent mivacurium is derived. The model is used to design an observer-based state feedback controller. Contrary to similar automatic systems described in the literature, this controller makes use of two different measures obtained in the train-of-four measurement to maintain the desired level of relaxation. The controller is validated in a clinical study comparing its performance to that of the anesthesiologist. As presented, the controller was able to maintain a preselected degree of muscle relaxation with excellent precision while minimizing drug administration. The controller performed at least as well as the anesthesiologist.
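The closed-loop structure described above, in which the controller acts on an observer's state estimate rather than on the raw measurement, can be sketched on a generic linear plant. The matrices, gains, and setpoint below are illustrative stand-ins, not the mivacurium pharmacokinetic/pharmacodynamic model or the gains used in the study.

```python
import numpy as np

# Generic two-state linear plant x' = A x + B u, y = C x, discretised with a
# simple Euler step. These values are illustrative assumptions only.
dt = 0.1
A = np.array([[-0.5, 0.3], [0.2, -0.8]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
K = np.array([[1.2, 0.8]])    # state-feedback gain (assumed, not designed here)
L = np.array([[0.4], [0.9]])  # Luenberger observer gain (assumed)

x = np.array([[1.0], [0.0]])   # true plant state (unknown to the controller)
xh = np.zeros((2, 1))          # observer estimate
r = 0.2                        # setpoint for the measured output

for k in range(200):
    y = C @ x                          # measurement (e.g. a train-of-four-like signal)
    u = -K @ xh + r                    # feedback acts on the *estimate*
    # propagate plant and observer (Euler discretisation)
    x = x + dt * (A @ x + B @ u)
    xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))

# output settles and the estimate converges to the true state
print((C @ x).item(), (C @ xh).item())
```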
An Evaluation of the Effective Block Approach Using P-3C and F-111 Crack Growth Data
2008-09-01
…the end of 2006, where his research interests included modelling of fatigue crack growth, infrared NDT technologies and fibre optic corrosion… (2006). It was claimed that the growth of these cracks in structures made of 7050 aluminium alloy could not be adequately predicted using classical… the crack growth behaviour of 7050 aluminium alloy subjected to the service load of the F/A-18 fighter planes. To make the matter worse, the…
Development of Analytical Systems for Evaluation of US Reconstitution and Recovery Programs.
1980-09-01
Program Evaluation; Economic Models; US Economy. This study identifies economic models and… planning tasks are more complex and difficult than those faced by planners in the post… era. Also, because of those same factors and that the 1980s… comparative analysis outlined in the second study, while also concerned with the accomplishment of societal objectives, is somewhat different. The approach…
Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P
2014-08-06
Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner poly(benzyl methacrylate) membrane surface, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.
Blocking and the detection of odor components in blends.
Hosler, J S; Smith, B H
2000-09-01
Recent studies of olfactory blocking have revealed that binary odorant mixtures are not always processed as though they give rise to mixture-unique configural properties. When animals are conditioned to one odorant (A) and then conditioned to a mixture of that odorant with a second (X), the ability to learn or express the association of X with reinforcement appears to be reduced relative to animals that were not preconditioned to A. A recent model of odor-based response patterns in the insect antennal lobe predicts that the strength of the blocking effect will be related to the perceptual similarity between the two odorants, i.e. greater similarity should increase the blocking effect. Here, we test that model in the honeybee Apis mellifera by first establishing a generalization matrix for three odorants and then testing for blocking between all possible combinations of them. We confirm earlier findings demonstrating the occurrence of the blocking effect in olfactory learning of compound stimuli. We show that the occurrence and the strength of the blocking effect depend on the odorants used in the experiment. In addition, we find very good agreement between our results and the model, and less agreement between our results and an alternative model recently proposed to explain the effect.
Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui
2018-01-01
Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, type of model and social setting as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children’s block building. There was a significant main effect of social setting on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a wooden model than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies examine modalities other than the visual one, diversify the samples and adopt a longitudinal design. PMID:29441031
Combining Approach in Stages with Least Squares for fits of data in hyperelasticity
NASA Astrophysics Data System (ADS)
Beda, Tibi
2006-10-01
The present work concerns a method of continuous blockwise approximation of a continuous function, combining the Approach in Stages with finite-domain Least Squares. In this identification procedure by sub-domains, basic generating functions are determined step by step, permitting their weighting effects to be assessed. The procedure allows one to control the signs, and to some extent the optimal values, of the estimated parameters, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed for rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).
Finite element based N-Port model for preliminary design of multibody systems
NASA Astrophysics Data System (ADS)
Sanfedino, Francesco; Alazard, Daniel; Pommier-Budinger, Valérie; Falcoz, Alexandre; Boquet, Fabrice
2018-02-01
This article presents and validates a general framework to build a linear dynamic Finite Element-based model of large flexible structures for integrated Control/Structure design. An extension of the Two-Input Two-Output Port (TITOP) approach is developed here. The authors had already proposed such a framework for simple beam-like structures: each beam was considered as a TITOP sub-system that could be interconnected with another beam through its ports. The present work studies bodies with multiple attachment points by allowing complex interconnections among several sub-structures in a tree-like assembly. The TITOP approach is extended to generate NINOP (N-Input N-Output Port) models. A Matlab toolbox is developed integrating beam and bending plate elements. In particular, a NINOP formulation of bending plates is proposed to solve analytic two-dimensional problems. The computation of NINOP models using the outputs of an MSC/Nastran modal analysis is also investigated in order to directly use the results provided by commercial finite element software. The main advantage of this tool is to provide a model of a multibody system in the form of a block diagram with a minimal number of states. This model is easy to operate for preliminary design and control. An illustrative example highlights the potential of the proposed approach: the synthesis of the dynamical model of a spacecraft with two deployable and flexible solar arrays.
FAST Mast Structural Response to Axial Loading: Modeling and Verification
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Elliott, Kenny B.; Templeton, Justin D.; Song, Kyongchan; Rayburn, Jeffery T.
2012-01-01
The International Space Station's solar array wing mast shadowing problem is the focus of this paper. A building-block approach to modeling and analysis is pursued for the primary structural components of the solar array wing mast structure. Starting with an ANSYS (Registered Trademark) finite element model, a verified MSC.Nastran (Trademark) model is established for a single longeron. This finite element model translation requires the conversion of several modeling and analysis features for the two structural analysis tools to produce comparable results for the single-longeron configuration. The model is then reconciled using test data. The resulting MSC.Nastran (Trademark) model is then extended to a single-bay configuration and verified using single-bay test data. Conversion of the MSC.Nastran (Trademark) single-bay model to Abaqus (Trademark) is also performed to simulate the elastic-plastic longeron buckling response of the single bay prior to folding.
NASA Technical Reports Server (NTRS)
Glocer, A.; Toth, G.; Ma, Y.; Gombosi, T.; Zhang, J.-C.; Kistler, L. M.
2009-01-01
The magnetosphere contains a significant amount of ionospheric O+, particularly during geomagnetically active times. The presence of ionospheric plasma in the magnetosphere has a notable impact on magnetospheric composition and processes. We present a new multifluid MHD version of the Block-Adaptive-Tree Solar wind Roe-type Upwind Scheme model of the magnetosphere to track the fate and consequences of ionospheric outflow. The multifluid MHD equations are presented, as are the novel techniques for overcoming the formidable challenges associated with solving them. Our new model is then applied to the May 4, 1998, and March 31, 2001 geomagnetic storms. The results are juxtaposed with traditional single-fluid MHD and multispecies MHD simulations from a previous study, thereby allowing us to assess the benefits of using a more complex model with additional physics. We find that our multifluid MHD model (with outflow) gives comparable results to the multispecies MHD model (with outflow), including a more strongly negative Dst, reduced CPCP, and a drastically improved magnetic field at geosynchronous orbit, as compared to single-fluid MHD with no outflow. Significant differences in composition and magnetic field are found between the multispecies and multifluid approaches further away from the Earth. We further demonstrate the ability to explore pressure and bulk velocity differences between H+ and O+, which is not possible when utilizing the other techniques considered.
Arched needle technique for inferior alveolar mandibular nerve block.
Chakranarayan, Ashish; Mukherjee, B
2013-03-01
One of the most commonly used local anesthetic techniques in dentistry is Fischer's technique for the inferior alveolar nerve block. Incidentally, this technique also suffers the highest failure rate, approximately 35-45%. We studied a method of inferior alveolar nerve block in which a local anesthetic solution is injected into the pterygomandibular space by arching the needle and changing the approach angle of the conventional technique, and estimated its efficacy. After the initial insertion, the needle is arched and advanced so that it approaches the medial surface of the ramus at an angle almost perpendicular to it. The technique was applied to 100 patients for mandibular molar extraction and the anesthetic effects were assessed. A success rate of 98% was obtained.
Elliot, Samuel G; Tolborg, Søren; Sádaba, Irantzu; Taarning, Esben; Meier, Sebastian
2017-07-21
The future role of biomass-derived chemicals relies on the formation of diverse functional monomers in high yields from carbohydrates. Recently, it has become clear that a series of α-hydroxy acids, esters, and lactones can be formed from carbohydrates in alcohol and water solvents using tin-containing catalysts such as Sn-Beta. These compounds are potential building blocks for polyesters bearing additional olefin and alcohol functionalities. An NMR approach was used to identify, quantify, and optimize the formation of these building blocks in the Sn-Beta-catalyzed transformation of abundant carbohydrates. Record yields of the target molecules can be achieved by obstructing competing reactions through solvent selection.
NASA Astrophysics Data System (ADS)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(N_b^2) in the size of the atomic orbitals basis set, N_b, instead of the practically intractable O(N_b^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities possesses almost the same magnitude as the number of occupied orbitals in the molecular systems, N_o.
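The computational point of a diagonal-plus-low-rank representation can be illustrated in a few lines: wrap the structured matrix as a matrix-free operator whose action costs O(n*r), and hand it to a Lanczos eigensolver for the smallest eigenvalues. This is a schematic of the rank-structured idea only, not the BSE solver; the sizes and the random factor are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n, r = 2000, 15                       # problem size and low rank (assumed)
d = np.linspace(1.0, 10.0, n)         # diagonal part
U = rng.standard_normal((n, r)) / np.sqrt(n)

# Matrix-free action of A = diag(d) + U U^T: each product costs O(n*r)
# instead of O(n^2), so Lanczos only ever touches the structured form.
A = LinearOperator((n, n), matvec=lambda x: d * x + U @ (U.T @ x))

vals = eigsh(A, k=5, which='SA', return_eigenvectors=False)
print(np.sort(vals))                  # the five smallest eigenvalues
```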
Improving Ambulatory Training in Internal Medicine: X + Y (or Why Not?).
Ray, Alaka; Jones, Danielle; Palamara, Kerri; Overland, Maryann; Steinberg, Kenneth P
2016-12-01
The Accreditation Council for Graduate Medical Education (ACGME) requirement that internal medicine residents spend one-third of their training in an ambulatory setting has resulted in programmatic innovation across the country. The traditional weekly half-day clinic model has lost ground to the block or "X + Y" clinic model, which has gained in popularity for many reasons. Several disadvantages of the block model have been reported, however, and residency programs are caught between the threat of old and new challenges. We offer the perspectives of three large residency programs (University of Washington, Emory University, and Massachusetts General Hospital) that have successfully navigated scheduling challenges in our individual settings without implementing the block model. By sharing our innovative non-block models, we hope to demonstrate that programs can and should create the solution that fits their individual needs.
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Kadowaki, Shuntaro
2017-07-01
We study slowly pulled block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If the external force is slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws; the exponent is close to that of the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated for Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, R. B.; Dion, S.; Konigslow, K. von
Self-consistent field theory equations are presented that are suitable for use as a coarse-grained model for DNA-coated colloids, polymer-grafted nanoparticles and other systems with approximately isotropic interactions. The equations are generalized for arbitrary numbers of chemically distinct colloids. The advantages and limitations of such a coarse-grained approach for DNA-coated colloids are discussed, as are similarities with block copolymer self-assembly. In particular, preliminary results for three-species self-assembly are presented that parallel results from a two-dimensional ABC triblock copolymer phase. The possibility of incorporating crystallization, dynamics, inverse statistical mechanics and multiscale modelling techniques is discussed.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Chen, Yang; Sun, Zhiyong; Hao, Lina; Dong, Jie
2014-07-01
Ionic polymer metal composites (IPMCs) are a type of electroactive polymer (EAP) that can be used as both sensors and actuators. IPMCs have enormous potential in biomimetic robotics, medical devices, and related fields. However, an IPMC actuator has a number of disadvantages, such as creep and time-varying behaviour, making it vulnerable to external disturbances. In addition, the complex actuation mechanism makes the IPMC difficult to model, and the required control algorithm is laborious to implement. In this paper, we obtain a creep model of the IPMC by model identification based on linear superposition of creep operators. Although this mathematical model does not capture the IPMC dynamics exactly, it is accurate enough to be used in MATLAB to test the control algorithm. A controller based on the active disturbance rejection control (ADRC) method is designed to overcome the drawbacks given above. Because the ADRC controller is separate from the mathematical model of the controlled plant, the control algorithm can perform disturbance estimation and compensation. Factors such as external disturbances, uncertainties, inaccuracy of the identified model and differences between kinds of IPMCs have little effect on controlling the output block force of the IPMC. Furthermore, we use the particle swarm optimization algorithm to tune the ADRC parameters so that the IPMC actuator can approach the desired block force under unknown external disturbances. Simulations and experiments validate the effectiveness of the ADRC controller.
NASA Astrophysics Data System (ADS)
Perrault, D. S.; Furbish, D. J.; Miller, C. F.
2006-05-01
Searchlight pluton, a steeply tilted, 10 km thick Miocene intrusion in the Colorado River Extensional Corridor, exposes a zone with abundant, 5-400 m long blocks of Proterozoic gneiss. Blocks are present within a pair of subparallel horizons that make up a 2 km thick zone and extend about 6 km laterally away from the pluton's north margin, slightly oblique to the initially subhorizontal boundary between the pluton's middle unit (granite) and lower unit (qtz monzonite). Blocks are a variety of Precambrian metasedimentary gneisses, granitic gneisses, and mylonites. Blocks are commonly polylithologic and well foliated, with long and intermediate dimensions parallel to both their own foliation and that of the granitic host. Their average aspect ratio is ~ 4:1. Blocks within these horizons are interpreted as stoped (detached country rock that experienced gravity-induced displacement) based on several lines of evidence. First, the distribution and abundances of blocks are not consistent with an isolated panel of wall rock (screen). The zone is laterally discontinuous (local abundances vary from ~ 0-40%); transects a gradational (cm-m scale) internal contact at a slightly oblique angle; and tapers away from the pluton's margin. Second, while block foliations are homoclinal and show fairly consistent attitudes from block to block, block foliations are discordant with wall rock foliations at the same stratigraphic level (adjacent north wall). Third, mush disturbance features such as schlieren and enhanced feldspar foliation beneath blocks suggest downward compaction. We interpret the blocks to have been emplaced after wall collapse events. We are using scaled settling experiments to clarify how blocks move within viscous fluids and interact with crystal mushes. The experiments, involving tabular ceramic blocks with density ρ = 1.75-2.20 g cm^-3 settling in shampoo (ρ = 1.02 g cm^-3) with viscosity μ = 20.35 Pa s, are scaled to order of magnitude by the particle Reynolds number (Re ~ 10^-2), based on a prototype spherical block of diameter ~ 50 m settling through a crystal-free magma with the density (~ 2.25 g cm^-3) and viscosity (~ 10^5 Pa s) of a granitic melt. With low Reynolds number settling, tabular blocks starting from an arbitrary orientation tend to become aligned with their long dimension vertical. During alignment, fluid shear is focused on the trailing part of the downward-facing surface of the block, inducing a torque that tends to upright the block. Settling experiments also provide insights regarding how blocks might interact with mush/melt interfaces. Mushes at sufficiently high crystal content (>50%) stiffen rheologically. Stoped blocks settling on these interfaces may impart stresses that result in localized deformation of the mush (forming schlieren and/or compaction features). We moreover suggest that tabular blocks are deposited with their long axes horizontal at these interfaces; blocks with a geometry controlled by internal structures (e.g. foliation) would on average possess a subparallel alignment of both their geometric shape and internal structure. At sufficiently low Re, this realignment of a tabular block begins as it approaches a semi-rigid surface or interface and the leading fluid boundary layer interacts with the interface; the vertical speed of the block decreases and it begins to deflect and take on a lateral motion.
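For the scaling quoted above, the relevant formulas are the Stokes settling speed v = 2 Δρ g a^2 / (9 μ) and the particle Reynolds number Re = ρ_f v d / μ. The sketch below evaluates them for the laboratory analog; the ~1 cm effective block radius is an assumption, since the abstract does not state the block dimensions.

```python
def stokes_settling(rho_p, rho_f, mu, radius, g=9.81):
    """Stokes settling speed of a sphere and the associated particle
    Reynolds number (valid in the low-Re regime). SI units throughout."""
    v = 2.0 * (rho_p - rho_f) * g * radius**2 / (9.0 * mu)
    re = rho_f * v * (2.0 * radius) / mu
    return v, re

# Laboratory analog from the abstract: a ~2.0 g cm^-3 ceramic block settling
# in shampoo (1.02 g cm^-3, 20.35 Pa s). The 1 cm effective radius is an
# assumption made for illustration.
v, re = stokes_settling(rho_p=2000.0, rho_f=1020.0, mu=20.35, radius=0.01)
print(f"v = {v * 100:.2f} cm/s, Re = {re:.3f}")   # Re of order 10^-2, as quoted
```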
NASA Astrophysics Data System (ADS)
Hubbard, J.; Onac, B. P.; Kruse, S.; Forray, F. L.
2017-12-01
Research at Scăriloara Ice Cave has proceeded for over 150 years, primarily driven by the presence and paleoclimatic importance of the large perennial ice block and various ice speleothems located within its galleries. Previous observations of the ice block led to rudimentary volume estimates of 70,000 to 120,000 cubic meters (m^3), prospectively placing it among the world's largest cave ice deposits. The cave morphology and the surface of the ice block are now recreated in a total station survey-validated 3D model, produced using Structure from Motion (SfM) software. With the total station survey and the novel use of ArcGIS tools, the SfM validation process is drastically simplified, producing a scaled, georeferenced, and photo-texturized 3D model of the cave environment with a root-mean-square error (RMSE) of 0.24 m. Furthermore, ground penetrating radar data were collected and spatially oriented with the total station survey to recreate the ice block basal surface, and were combined with the SfM model to create a model of the ice block itself. The resulting ice block model has a volume of over 118,000 m^3 with an uncertainty of 9.5%, with additional volumes left un-surveyed. The varying elevations of the ice block basal surface model reflect specific features of the cave roof, such as areas of enlargement, shafts, and potential joints, which offer further validation and inform theories on cave and ice genesis. Specifically, a large depression was identified as a potential area of initial ice growth. Finally, an ice thickness map was produced that will aid in the design of future ice coring projects. This methodology presents a powerful means to observe and accurately characterize and measure cave and cave ice morphologies with ease and affordability. The results further establish the significance of Scăriloara's ice block to paleoclimate research, provide insights into cave and ice block genesis, and aid future study design.
Blocking Mechanism Study of Self-Compacting Concrete Based on Discrete Element Method
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Li, Zhida; Zhang, Zhihua
2017-11-01
In order to study the factors influencing the blocking mechanism of Self-Compacting Concrete (SCC), Roussel's granular blocking model was verified and extended by establishing a discrete element model of SCC. The influence of different parameters on the filling capacity and blocking mechanism of SCC was also investigated. The results showed that it is feasible to simulate the blocking mechanism of SCC using the Discrete Element Method (DEM). The passing ability of pebble aggregate was superior to that of gravel aggregate, and the passing ability of hexahedral particles was greater than that of tetrahedral particles, although the tetrahedral-particle simulations were closer to the actual situation. The flowability of SCC was another significant factor affecting the passing ability: as the flow increased, the passing ability increased. A correction coefficient λ for the steel arrangement (channel section shape) and a flow-rate coefficient γ were introduced into the blocking model; λ took values of 0.90-0.95 and the maximum casting rate was 7.8 L/min.
Flight control systems properties and problems. Volume 2: Block diagram compendium
NASA Technical Reports Server (NTRS)
Johnston, D. E.
1975-01-01
A compendium of stability augmentation system and autopilot block diagrams is presented. Descriptive materials for 48 different types of aircraft systems are provided. A broad representation of the many mechanical approaches which have been used for aircraft control is developed.
Pathway-engineering for highly-aligned block copolymer arrays
Choo, Youngwoo; Majewski, Paweł W.; Fukuto, Masafumi; ...
2017-12-06
While kinetic aspects of self-assembly can hinder ordering, non-equilibrium effects can also be exploited to enforce a particular kind of order. We develop a pathway-engineering approach, using it to select a particular arrangement of a block copolymer cylinder phase.
Lunardi, Andrea; Ala, Ugo; Epping, Mirjam T; Salmena, Leonardo; Clohessy, John G; Webster, Kaitlyn A; Wang, Guocan; Mazzucchelli, Roberta; Bianconi, Maristella; Stack, Edward C; Lis, Rosina; Patnaik, Akash; Cantley, Lewis C; Bubley, Glenn; Cordon-Cardo, Carlos; Gerald, William L; Montironi, Rodolfo; Signoretti, Sabina; Loda, Massimo; Nardella, Caterina; Pandolfi, Pier Paolo
2013-07-01
Here we report an integrated analysis that leverages data from treatment of genetic mouse models of prostate cancer along with clinical data from patients to elucidate new mechanisms of castration resistance. We show that castration counteracts tumor progression in a Pten loss-driven mouse model of prostate cancer through the induction of apoptosis and proliferation block. Conversely, this response is bypassed with deletion of either Trp53 or Zbtb7a together with Pten, leading to the development of castration-resistant prostate cancer (CRPC). Mechanistically, the integrated acquisition of data from mouse models and patients identifies the expression patterns of XAF1, XIAP and SRD5A1 as a predictive and actionable signature for CRPC. Notably, we show that combined inhibition of XIAP, SRD5A1 and AR pathways overcomes castration resistance. Thus, our co-clinical approach facilitates the stratification of patients and the development of tailored and innovative therapeutic treatments.
Residential water demand model under block rate pricing: A case study of Beijing, China
NASA Astrophysics Data System (ADS)
Chen, H.; Yang, Z. F.
2009-05-01
In many cities, the mismatch between water supply and water demand has become a critical problem because of worsening water shortages and growing demand. A uniform price for residential water cannot promote efficient water allocation. In China, block water pricing will be put into practice in the future, but the outcome of such a regulatory measure is unpredictable without theoretical support. In this paper, residential water is classified by the volume of water usage based on economic rules, and each block of water is treated as a different good. A model based on the extended linear expenditure system (ELES) is constructed to simulate the relationship between block water prices and water demand, providing theoretical support for decision-makers. Finally, the proposed model is used to simulate residential water demand under block rate pricing in Beijing.
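Under a block tariff, the marginal price changes at each consumption threshold, so the bill is a piecewise-linear function of volume. A minimal sketch of that computation follows; the thresholds and prices are illustrative, not Beijing's actual schedule.

```python
def block_bill(volume, blocks):
    """Water bill under an increasing block tariff. `blocks` is a list of
    (upper_limit, price_per_m3) pairs, sorted by limit; the last limit may
    be float('inf'). Each block is charged only for the volume that falls
    inside it."""
    bill, lower = 0.0, 0.0
    for upper, price in blocks:
        if volume > lower:
            bill += (min(volume, upper) - lower) * price
        lower = upper
    return bill

# illustrative tariff: m^3 thresholds and prices per m^3
tariff = [(10.0, 2.0), (20.0, 4.0), (float('inf'), 6.0)]
print(block_bill(15.0, tariff))   # 10*2.0 + 5*4.0 = 40.0
```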
Update on Bayesian Blocks: Segmented Models for Sequential Data
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2017-01-01
The Bayesian Blocks algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized to include signal rates other than constant, e.g., linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that the basic algorithm is O(N) in most cases instead of O(N^2). Other improvements in the theory and application of segmented representations will be described.
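For reference, the core of the algorithm is a single dynamic-programming recursion over possible last change points. The sketch below implements the standard O(N^2) version for event data with the usual Poisson block fitness; the O(N) behaviour mentioned above comes from pruning added on top of this recursion. The prior penalty value and the assumption of distinct event times are illustrative.

```python
import numpy as np

def bayesian_blocks(t, prior=4.0):
    """Basic Bayesian Blocks change-point finder for event (point) data.
    Returns the edges of the optimal piecewise-constant segmentation.
    `prior` penalises each extra block; its calibration is problem-dependent."""
    t = np.sort(np.asarray(t, dtype=float))
    n = t.size
    # cell edges: midpoints between events, padded by the data range
    edges = np.concatenate(([t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]))
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        width = edges[r + 1] - edges[:r + 1]          # candidate final-block widths
        cnt = np.arange(r + 1, 0, -1)                 # events in each candidate
        fit = cnt * np.log(cnt / width) - prior       # Poisson block fitness
        total = fit + np.concatenate(([0.0], best[:r]))
        last[r] = np.argmax(total)                    # best last change point
        best[r] = np.max(total)
    # backtrack the optimal change points
    cps, i = [], n
    while i > 0:
        cps.append(last[i - 1])
        i = last[i - 1]
    return edges[np.array(cps[::-1])]

rng = np.random.default_rng(0)
events = np.concatenate([rng.uniform(0, 5, 50), rng.uniform(5, 6, 80)])
print(bayesian_blocks(events))   # recovers a change point near t = 5
```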
Yu, Lin; Zhang, Zheng; Zhang, Huan; Ding, Jiandong
2009-06-08
A facile method to obtain a thermoreversible physical hydrogel was found by simply mixing an aqueous sol of a block copolymer with a precipitate of a similar copolymer with a different block ratio. Two ABA-type triblock copolymers, poly(D,L-lactic acid-co-glycolic acid)-b-poly(ethylene glycol)-b-poly(D,L-lactic acid-co-glycolic acid) (PLGA-PEG-PLGA), were synthesized. One sample in water was a sol over a broad temperature region, while the other in water was simply a precipitate. The mixture of these two samples at a certain mix ratio underwent, however, a sol-to-gel-to-precipitate transition upon an increase of temperature. Dramatic tuning of the sol-gel transition temperature was conveniently achieved by merely varying the mix ratio, even for similar molecular weights. Our study indicates that the balance of hydrophobicity and hydrophilicity within this sort of amphiphilic copolymer is critical to the inverse thermal gelation in water resulting from aggregation of micelles. Encapsulation and sustained release of lysozyme, a model protein, by the thermogelling systems was confirmed. This "mix" method provides a very convenient approach to designing injectable thermogelling biomaterials with a broad adjustable window, and the novel copolymer mixture platform is potentially useful in drug delivery and other biomedical applications.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
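The outer reweighting idea can be sketched compactly: each iteration solves a weighted mixed-norm (group-sparse) problem and then updates per-block weights as w_b = 1/(2 sqrt(||X_b||)), which drives the composite penalty toward the l0.5-quasinorm over blocks. The inner solver below is plain proximal gradient (ISTA) for brevity; the published irMxNE instead uses block coordinate descent with an active-set strategy, and all parameter values here are illustrative.

```python
import numpy as np

def group_soft_threshold(X, thresholds):
    """Proximal operator of the weighted l21-norm: shrink each row block."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresholds[:, None] / np.maximum(norms, 1e-12))
    return scale * X

def irmxne_sketch(G, M, alpha=0.1, n_reweight=5, n_ista=2000):
    """Iterative reweighting toward an l0.5-quasinorm over blocks.
    G: forward (gain) matrix, M: measurements; rows of X are source blocks."""
    X = np.zeros((G.shape[1], M.shape[1]))
    w = np.ones(G.shape[1])
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the data fit
    for _ in range(n_reweight):
        for _ in range(n_ista):            # inner weighted group-sparse solve
            grad = G.T @ (G @ X - M)
            X = group_soft_threshold(X - grad / L, alpha * w / L)
        norms = np.linalg.norm(X, axis=1)
        w = 1.0 / (2.0 * np.sqrt(np.maximum(norms, 1e-12)))   # reweight blocks
    return X

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 50))
M = G[:, :3] @ rng.standard_normal((3, 10))    # 3 truly active sources
X = irmxne_sketch(G, M)
print(np.flatnonzero(np.linalg.norm(X, axis=1) > 1e-6))
```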
Role of Polyalanine Domains in β-Sheet Formation in Spider Silk Block Copolymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabotyagova, O.; Cebe, P; Kaplan, D
2010-01-01
Genetically engineered spider silk-like block copolymers were studied to determine the influence of polyalanine domain size on secondary structure. The role of polyalanine block distribution on β-sheet formation was explored using FT-IR and WAXS. The number of polyalanine blocks had a direct effect on the formation of crystalline β-sheets, reflected in the change in crystallinity index as the blocks of polyalanines increased. WAXS analysis confirmed the crystalline nature of the sample with the largest number of polyalanine blocks. This approach provides a platform for further exploration of the role of specific amino acid chemistries in regulating the assembly of β-sheet secondary structures, leading to options to regulate material properties through manipulation of this key component in spider silks.
The Building Blocks of Geology.
ERIC Educational Resources Information Center
Gibson, Betty O.
2001-01-01
Discusses techniques for teaching about rocks, minerals, and the differences between them. Presents a model-building activity that uses plastic building blocks to build crystal and rock models. (YDS)
Autonomous self-healing structural composites with bio-inspired design
D’Elia, Eleonora; Eslava, Salvador; Miranda, Miriam; Georgiou, Theoni K.; Saiz, Eduardo
2016-01-01
Strong and tough natural composites such as bone, silk or nacre are often built from stiff blocks bound together using thin interfacial soft layers that can also provide sacrificial bonds for self-repair. Here we show that it is possible to exploit this design in order to create self-healing structural composites by using thin supramolecular polymer interfaces between ceramic blocks. We have built model brick-and-mortar structures with ceramic contents above 95 vol% that exhibit strengths of the order of MPa (three orders of magnitude higher than the interfacial polymer) and fracture energies that are two orders of magnitude higher than those of the glass bricks. More importantly, these properties can be fully recovered after fracture without using external stimuli or delivering healing agents. This approach demonstrates a very promising route towards the design of strong, ideal self-healing materials able to self-repair repeatedly without degradation or external stimuli. PMID:27146382
Ab initio treatment of ion-induced charge transfer dynamics of isolated 2-deoxy-D-ribose.
Bacchus-Montabonel, Marie-Christine
2014-08-21
Modeling radiation-induced damage in biological systems, in particular in DNA building blocks, is of major concern in cancer therapy studies. Ion-induced charge-transfer dynamics may indeed be involved in proton therapy and hadrontherapy treatments. We have thus performed a theoretical study of the charge-transfer dynamics in collisions of C(4+) ions and protons with isolated 2-deoxy-D-ribose over a wide collision energy range by means of ab initio quantum chemistry molecular methods. The two projectile ions are compared with regard to previous theoretical and experimental results. Charge transfer appears markedly less efficient with the 2-deoxy-D-ribose target than with pyrimidine nucleobases, which would induce an enhancement of the fragmentation process, in agreement with experimental measurements. The mechanism has been analyzed with regard to inner orbital excitations, and qualitative tendencies have been pointed out for studies of DNA building block damage.
Conformal Bootstrap in Mellin Space
NASA Astrophysics Data System (ADS)
Gopakumar, Rajesh; Kaviraj, Apratim; Sen, Kallol; Sinha, Aninda
2017-02-01
We propose a new approach towards analytically solving for the dynamical content of conformal field theories (CFTs) using the bootstrap philosophy. This combines the original bootstrap idea of Polyakov with the modern technology of the Mellin representation of CFT amplitudes. We employ exchange Witten diagrams with built-in crossing symmetry as our basic building blocks rather than the conventional conformal blocks in a particular channel. Demanding consistency with the operator product expansion (OPE) implies an infinite set of constraints on operator dimensions and OPE coefficients. We illustrate the power of this method in the ɛ expansion of the Wilson-Fisher fixed point by reproducing anomalous dimensions and, strikingly, obtaining OPE coefficients to higher orders in ɛ than currently available using other analytic techniques (including Feynman diagram calculations). Our results enable us to get a somewhat better agreement between certain observables in the 3D Ising model and the precise numerical values that have been recently obtained.
Karakalos, Stavros; Zugic, Branko; Stowers, Kara J.; ...
2016-03-18
Modern methods of esterification, one of the most important reactions in organic synthesis, are reaching their limits, as far as waste and expense are concerned. Novel chemical approaches to ester formation are therefore of importance. We report a simple procedure free of caustic reagents or byproducts for the facile direct oxidative methyl esterification of aldehydes over nanoporous Au catalysts. Complementary model studies on single crystal gold surfaces establish the fundamental reactions involved. We also find that methanol more readily reacts with adsorbed active oxygen than do the aldehydes, but that once the aldehydes do react, they form strongly-bound acrylates that block reactive sites and decrease the yields of acrylic esters under steady flow conditions at 420 K. We can achieve significant improvements in yield by operating at higher temperatures, which render the site-blocking acrylates unstable.
NASA Astrophysics Data System (ADS)
Amireghbali, A.; Coker, D.
2018-01-01
Burridge and Knopoff proposed a mass-spring model to explore interface dynamics along a fault during an earthquake. The Burridge-Knopoff (BK) model is composed of a series of blocks of equal mass connected to each other by springs of the same stiffness. The blocks are also attached, via another set of springs, to a rigid driver that pulls them at a constant velocity against a rigid substrate. Burridge and Knopoff studied the interface dynamics for a special case with ten blocks and a specific set of fault properties. In our study, the effects of Coulomb and rate-and-state dependent friction laws on the dynamics of a single-block BK model are investigated. The model dynamics is formulated as a system of coupled nonlinear ordinary differential equations in state-space form, which lends itself to numerical integration methods, e.g. a Runge-Kutta procedure. The results show that the rate-and-state dependent friction law can trigger dynamic patterns that are different from those under the Coulomb law.
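A single-block BK model with Coulomb friction is a compact ODE system, which the sketch below integrates with an adaptive Runge-Kutta scheme (scipy's default). The tanh term is a standard smooth regularisation of sign(v) so that the stick-slip system remains an ordinary ODE; all parameter values are illustrative, not those of the cited study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-block Burridge-Knopoff model: a block of mass m, pulled through a
# spring of stiffness k by a driver moving at constant speed v0, sliding
# against Coulomb friction of magnitude mu_N.
m, k, v0 = 1.0, 1.0, 0.1
mu_N, v_eps = 1.0, 1e-4          # friction force magnitude, regularisation speed

def rhs(t, y):
    x, v = y
    spring = k * (v0 * t - x)            # driver drags the block via the spring
    friction = mu_N * np.tanh(v / v_eps) # smoothed Coulomb friction ~ sign(v)
    return [v, (spring - friction) / m]

sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], max_step=0.05)
slip_speed = sol.y[1]
# stick-slip signature: long near-zero "stick" intervals punctuated by fast slips
print(slip_speed.max(), np.mean(np.abs(slip_speed) < 1e-3))
```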
Phase response curves for models of earthquake fault dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franović, Igor, E-mail: franovic@ipb.ac.rs; Kostić, Srdjan; Perc, Matjaž
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical of earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
Block entropy and quantum phase transition in the anisotropic Kondo necklace model
NASA Astrophysics Data System (ADS)
Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.
2010-06-01
We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins, using the density matrix renormalization group method. The block entropy presents a maximum for each η considered. Comparing this with the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for locating the quantum phase transition in this model. The block entropy also presents a maximum at the quantum critical points obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; as Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
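For readers unfamiliar with the quantity: the von Neumann block entropy is S = -Tr(ρ_A ln ρ_A), where ρ_A is the reduced density matrix of the block. The following is a hedged sketch that computes it by exact diagonalization for a small generic spin-1/2 XY chain; this stands in for the DMRG treatment of the Kondo necklace model, and the Hamiltonian and system size are illustrative only.

```python
# Block entropy of a ground state via exact diagonalization (small chains).
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

def xy_chain(n, eta):
    """H = sum_i (Sx_i Sx_{i+1} + eta * Sy_i Sy_{i+1}), open boundaries."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += site_op(sx, i, n) @ site_op(sx, i + 1, n)
        H += eta * site_op(sy, i, n) @ site_op(sy, i + 1, n)
    return H

def block_entropy(psi, n, block):
    """von Neumann entropy of the first `block` sites of state psi."""
    m = psi.reshape(2**block, 2**(n - block))
    p = np.linalg.svd(m, compute_uv=False) ** 2  # Schmidt weights
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

n = 8
w, vecs = np.linalg.eigh(xy_chain(n, eta=0.5))
print(block_entropy(vecs[:, 0], n, n // 2))  # half-chain ground-state entropy
```

Scanning such an entropy over a model parameter and looking for its maximum is the essence of the criterion the abstract describes for locating the quantum critical point.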
Cloning and characterization of the canine receptor for advanced glycation end products.
Murua Escobar, Hugo; Soller, Jan T; Sterenczak, Katharina A; Sperveslage, Jan D; Schlueter, Claudia; Burchardt, Birgit; Eberle, Nina; Fork, Melanie; Nimzyk, Rolf; Winkler, Susanne; Nolte, Ingo; Bullerdiek, Jörn
2006-03-15
Metastasis is one of the major problems when dealing with malignant neoplasias. Accordingly, finding molecular targets that can be addressed to reduce tumour metastasising will have a significant impact on the development of new therapeutic approaches. Recently, the receptor for advanced glycation end products (RAGE)-high mobility group B1 (HMGB1) protein complex has been shown to have significant influence on the invasiveness, growth and motility of tumour cells, which are essential characteristics required for metastatic behaviour. A set of in vitro and in vivo approaches showed that blocking this complex resulted in drastic suppression of tumour cell growth. Due to the similarities between human and canine cancer, the dog has joined the common rodent animal models for therapeutic and preclinical studies. However, complete characterisation of the protein complex is a precondition for a therapeutic approach based on blocking the RAGE-HMGB1 complex in spontaneously occurring tumours in dogs. We recently characterised the canine HMGB1 gene and protein completely. Here we present the complete characterisation of the canine RAGE gene, including its 1384 bp mRNA, the 1215 bp protein coding sequence, the 2835 bp genomic structure, chromosomal localisation, gene expression pattern, and its 404 amino acid protein. Furthermore, we compared the CDS of six different canine breeds and screened them for single nucleotide polymorphisms.
Amino acid neurotransmitters and new approaches to anticonvulsant drug action.
Meldrum, B
1984-01-01
Amino acids provide the most universal and important inhibitory (gamma-aminobutyric acid (GABA), glycine) and excitatory (glutamate, aspartate, cysteic acid, cysteine sulphinic acid) neurotransmitters in the brain. An anticonvulsant action may be produced (1) by enhancing inhibitory (GABAergic) processes, and (2) by diminishing excitatory transmission. Possible pharmacological mechanisms for enhancing GABA-mediated inhibition include (1) GABA agonist action, (2) GABA prodrugs, (3) drugs facilitating GABA release from terminals, (4) inhibition of GABA-transaminase, (5) allosteric enhancement of the efficacy of GABA at the receptor complex, (6) direct action on the chloride ionophore, and (7) inhibition of GABA reuptake. Examples of these approaches include the use of irreversible GABA-transaminase inhibitors, such as gamma-vinyl GABA, and the development of anticonvulsant beta-carbolines that interact with the "benzodiazepine receptor." Pharmacological mechanisms for diminishing excitatory transmission include (1) enzyme inhibitors that decrease the maximal rate of synthesis of glutamate or aspartate, (2) drugs that decrease the synaptic release of glutamate or aspartate, and (3) drugs that block the post-synaptic action of excitatory amino acids. Compounds that selectively antagonise excitation due to dicarboxylic amino acids have recently been developed. Those that selectively block excitation produced by N-methyl-D-aspartate (and aspartate) have proved to be potent anticonvulsants in many animal models of epilepsy. This provides a novel approach to the design of anticonvulsant drugs.
Shashidharamurthy, R; Machiah, D; Bozeman, E N; Srivatsan, S; Patel, J; Cho, A; Jacob, J; Selvaraj, P
2012-09-01
Therapeutic use and function of recombinant molecules can be studied by the expression of foreign genes in mice. In this study, we have expressed human Fcγ receptor-Ig fusion molecules (FcγR-Igs) in mice by administering FcγR-Ig plasmid DNAs hydrodynamically and compared their effectiveness with purified molecules in blocking immune-complex (IC)-mediated inflammation in mice. The concentration of hydrodynamically expressed FcγR-Igs (CD16A(F)-Ig, CD32A(R)-Ig and CD32A(H)-Ig) reached a maximum of 130 μg ml⁻¹ of blood within 24 h after plasmid DNA administration. The in vivo half-life of FcγR-Igs was found to be 9-16 days, and western blot analysis showed that the FcγR-Igs were expressed as homodimers. The hydrodynamically expressed FcγR-Igs blocked 50-80% of IC-mediated inflammation for up to 3 days in a reverse passive Arthus reaction model. Comparative analysis with purified molecules showed that hydrodynamically expressed FcγR-Igs are more efficient than purified molecules in blocking IC-mediated inflammation and have a longer half-life. In summary, these results suggest that the administration of a plasmid vector with the FcγR-Ig gene can be used to study the consequences of blocking IC binding to FcγRs during the development of inflammatory diseases. This approach may have potential therapeutic value in treating IC-mediated inflammatory autoimmune diseases such as lupus, arthritis and autoimmune vasculitis.
Assessing delay and lag in sagittal trunk control using a tracking task.
Reeves, N Peter; Luis, Abraham; Chan, Elizabeth C; Sal Y Rosas, Victor G; Tanaka, Martin L
2018-05-17
Slower trunk muscle responses are linked to back pain and injury. Unfortunately, clinical assessments of spine function do not objectively evaluate this important attribute, which reflects the speed of trunk control. Speed of trunk control can be parsed into two components: (1) delay, the time it takes to initiate a movement, and (2) lag, the time it takes to execute a movement once initiated. The goal of this study is to demonstrate a new approach to assessing delay and lag in trunk control using a simple tracking task. Ten healthy subjects performed four blocks of six trials of trunk tracking in the sagittal plane. Delay and lag were estimated by modeling trunk control for predictable and unpredictable (control mode) trunk movements in flexion and extension (control direction) at movement amplitudes of 2°, 4°, and 6° (control amplitude). The main effects of control mode, direction, and amplitude of movement were compared between trial blocks to assess secondary influencers (e.g., fatigue). Only control mode was consistent across trial blocks, with predictable movements being faster than unpredictable ones for both delay and lag. Control direction and amplitude effects on delay and lag were consistent across the first two trial blocks and less consistent in later blocks. Given the heterogeneity in the presentation of back pain, clinical assessment of trunk control should include different control modes, directions, and amplitudes. To reduce testing time and the influence of fatigue, we recommend six trials to assess trunk control.
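One simple, hedged way to illustrate the delay/lag distinction is to simulate a tracking response and estimate its shift relative to the target by cross-correlation, as sketched below; the signals are simulated, and this is only an illustration, not the estimation procedure used in the study.

```python
# Simulated tracking: a response that starts 150 ms late (delay) and is
# smoothed by a 200 ms first-order element (lag). All values are assumed.
import numpy as np
from scipy.signal import correlate

fs = 100.0                                    # sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
target = 4.0 * np.sin(2 * np.pi * 0.5 * t)    # 4 deg sagittal tracking target

d = int(0.150 * fs)                           # pure delay in samples
resp = np.zeros_like(t)
for i in range(1, len(t)):
    u = target[i - d] if i >= d else 0.0
    resp[i] = resp[i - 1] + (u - resp[i - 1]) / (0.200 * fs)

# Peak of the cross-correlation: combined shift from delay plus phase lag.
xc = correlate(resp - resp.mean(), target - target.mean(), mode="full")
shift = int(np.argmax(xc)) - (len(t) - 1)
print("estimated response shift: %.0f ms" % (1000 * shift / fs))
```

Separating the two components, as the study does, requires a model fit rather than a single correlation peak, since the cross-correlation conflates initiation delay with execution lag.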
How the continents deform: The evidence from tectonic geodesy
Thatcher, Wayne R.
2009-01-01
Space geodesy now provides quantitative maps of the surface velocity field within tectonically active regions, supplying constraints on the spatial distribution of deformation, the forces that drive it, and the brittle and ductile properties of continental lithosphere. Deformation is usefully described as relative motions among elastic blocks and is block-like because major faults are weaker than adjacent intact crust. Despite similarities, continental block kinematics differs from global plate tectonics: blocks are much smaller, typically ∼100–1000 km in size; departures from block rigidity are sometimes measurable; and blocks evolve over ∼1–10 Ma timescales, particularly near their often geometrically irregular boundaries. Quantitatively relating deformation to the forces that drive it requires simplifying assumptions about the strength distribution in the lithosphere. If brittle/elastic crust is strongest, interactions among blocks control the deformation. If ductile lithosphere is the stronger, its flow properties determine the surface deformation, and a continuum approach is preferable.
Final report for “Extreme-scale Algorithms and Solver Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2017-06-30
This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
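To illustrate the reproducibility idea mentioned above: if both the chunk boundaries and the order in which chunk sums are combined are fixed globally, the result cannot depend on how the vectors happen to be distributed across processes. The sketch below is a serial illustration of that concept under assumed chunking; it is not the project's implementation.

```python
# Reproducible dot product: fixed global chunking plus a fixed-order
# pairwise (tree) combine, independent of any process decomposition.
import numpy as np

CHUNK = 1024  # global chunk size, chosen independently of the process count

def reproducible_dot(x, y):
    partials = [float(np.dot(x[i:i + CHUNK], y[i:i + CHUNK]))
                for i in range(0, len(x), CHUNK)]
    if not partials:
        return 0.0
    while len(partials) > 1:  # deterministic tree reduction
        partials = [sum(partials[i:i + 2]) for i in range(0, len(partials), 2)]
    return partials[0]

x = np.random.rand(10_000)
y = np.random.rand(10_000)
print(reproducible_dot(x, y))  # identical result regardless of which
                               # process would own which chunks
```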
Object schemas for grounding language in a responsive robot
NASA Astrophysics Data System (ADS)
Hsiao, Kai-Yuh; Tellex, Stefanie; Vosoughi, Soroush; Kubat, Rony; Roy, Deb
2008-12-01
An approach is introduced for physically grounded natural language interpretation by robots that reacts appropriately to unanticipated physical changes in the environment and dynamically assimilates new information pertinent to ongoing tasks. At the core of the approach is a model of object schemas that enables a robot to encode beliefs about physical objects in its environment using collections of coupled processes responsible for sensorimotor interaction. These interaction processes run concurrently in order to ensure responsiveness to the environment, while co-ordinating sensorimotor expectations, action planning and language use. The model has been implemented on a robot that manipulates objects on a tabletop in response to verbal input. The implementation responds to verbal requests such as 'Group the green block and the red apple', while adapting in real time to unexpected physical collisions and taking opportunistic advantage of any new information it may receive through perceptual and linguistic channels.
Block copolymer libraries: modular versatility of the macromolecular Lego system.
Lohmeijer, Bas G G; Wouters, Daan; Yin, Zhihui; Schubert, Ulrich S
2004-12-21
The synthesis and characterization of a new 4 × 4 library of block copolymers based on polystyrene and poly(ethylene oxide), connected by an asymmetrical octahedral bis(terpyridine) ruthenium complex at the block junction, are described. Initial studies on the thin-film morphology of the components of the library, performed by atomic force microscopy, are also presented, demonstrating the impact of a library approach for deriving structure-property relationships.
Manpower Planning Models. 5. Optimization Models
1975-10-01
Key words: manpower planning, modelling, optimization. Abstract (fragment): ...notation resulting from the previous maximum M. We exploit the probabilistic interpretation of the flow process whenever it eases the exposition.
Toward Generalization of Iterative Small Molecule Synthesis
Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.
2018-01-01
Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152
Mutale, Wilbroad; Bond, Virginia; Mwanamwenge, Margaret Tembo; Mlewa, Susan; Balabanova, Dina; Spicer, Neil; Ayles, Helen
2013-08-01
The primary bottleneck to achieving the MDGs in low-income countries is health systems that are too fragile to deliver the volume and quality of services to those in need. Strong and effective health systems are increasingly considered a prerequisite to reducing the disease burden and to achieving the health MDGs. Zambia is one of the countries lagging behind in achieving millennium development targets, and several barriers have been identified as hindering progress towards the health-related millennium development goals. Designing an intervention that addresses these barriers was crucial, so the Better Health Outcomes through Mentorship (BHOMA) project was designed to address the challenges in Zambia's MOH using a system-wide approach. We applied a systems thinking approach to describe the baseline status of the six WHO building blocks for health system strengthening. A qualitative study was conducted looking at the status of the six WHO building blocks in three BHOMA districts. We conducted focus group discussions with community members and in-depth interviews with key informants. Data were analyzed using NVivo version 9. The study showed that building-block-specific weaknesses had cross-cutting effects on other health system building blocks, which is an essential element of systems thinking. Challenges noted in service delivery were linked, directly or indirectly, to the human resources, medical supplies, information flow, governance and finance building blocks. Several barriers were identified as hindering access to health services by the local communities. These included supply-side barriers (shortage of qualified health workers, bad staff attitudes, poor relationships between community and health staff, long waiting times, confidentiality concerns and the gender of health workers) and demand-side barriers (long distance to the health facility, cost of transport and cultural practices). Participating communities seemed to lack the capacity to hold health workers accountable for drugs and services. These linkages emphasise the need to use system-wide approaches in assessing the performance of health system strengthening interventions.
NASA Astrophysics Data System (ADS)
Raveendran Thankamoni, Ratheesh Kumar
2017-04-01
Southern India comprises a collage of crustal blocks ranging in age from Archean to Neoproterozoic. Previous studies considered the Archean high-grade granulite terrain to the north of the Southern Granulite Terrain (SGT) of southern India as part of the Dharwar Craton and hence subdivided this craton into western, central and eastern provinces. This contribution presents my detailed examination of the least studied Central Dharwar Province, comprising the Biligiri Rangan (BR)-Male Mahadeshwara (MM) Hills domain, composed predominantly of charnockites. One of my recent studies (Ratheesh-Kumar et al., 2016) for the first time provided the evidence needed for a Neoarchean subduction-accretion-collision tectonic evolution of this domain as a separate crustal block, named the Biligiri Rangan Block (BRB), by using a multidisciplinary approach involving field investigation, petrography, mineral chemistry, thermodynamic modeling of metamorphic P-T evolution, and LA-ICPMS U-Pb and Lu-Hf analyses of zircons on representative rocks, together with a regional-scale crustal thickness model derived using an isostatic gravimetric geophysical method. The important findings of this study are: (1) the BRB preserves the vestiges of a Mesoarchean primitive continental crust, as indicated by the age (ca. 3207 Ma) and positive ɛHf value (+2.7) of a quartzofeldspathic gneiss occurring in the central part of the block; (2) the charnockites and associated mafic granulites and granites yield ages between ca. 2650 Ma and ca. 2498 Ma with large negative ɛHf values, suggestive of Neoarchean charnockitization and crustal remelting; (3) new geochemical data for charnockites and mafic granulites from the BRB are consistent with arc magmatic rocks generated through oceanic plate subduction; (4) a suture zone is delineated along the Kollegal structural lineament bounding the BRB and the Western Dharwar Craton, surmised from the occurrences of quartzite-iron formation intercalations and mafic-ultramafic lenses along this lineament, with their evolution through clockwise prograde and retrograde metamorphism in a subduction zone setting at a high pressure of 18-19 kbar and a temperature of ˜840°C; (5) crustal thickness data reveal thick crust beneath the Biligiri Rangan and Nilgiri Blocks, attributed to crust competently thickened by the subduction and collision processes. Based on these results, this study proposes a new tectonic model for the evolution of the BRB that envisages eastward subduction of the Western Dharwar oceanic crust beneath the BRB along the Kollegal suture zone, resulting in arc magmatism during the Neoarchean. The relevance of this study relies on the fact that the proposed evolutionary model revises existing debates on the tectonic framework and evolution of the Archean terranes of southern India.