Sample records for matching model complexity

  1. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images with a proper complex network model and apply that model to template matching. PMID:25243235
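
    The core idea lends itself to a short sketch. The following is a minimal, hedged reconstruction (our reading of the abstract, not the authors' code): template pixels join a BA-style network one by one, attaching preferentially to high-degree, color-similar nodes; the resulting degrees then weight a SAD cost. The parameter m and the similarity kernel are our assumptions.

      # Hedged sketch: BA-style weighting of template pixels for weighted SAD matching.
      # The attachment rule (degree x color similarity) is our reading of the abstract.
      import numpy as np

      rng = np.random.default_rng(0)

      def ba_pixel_weights(template, m=3):
          """Grow a BA-like network over template pixels; return degree-based weights."""
          h, w, _ = template.shape
          colors = template.reshape(-1, 3).astype(float)
          n = colors.shape[0]
          degree = np.zeros(n)
          degree[:m] = 1.0                        # seed nodes
          for i in range(m, n):                   # growth rule
              sim = 1.0 / (1.0 + np.linalg.norm(colors[:i] - colors[i], axis=1))
              p = (degree[:i] + 1.0) * sim        # preferential attachment x similarity
              p /= p.sum()
              targets = rng.choice(i, size=m, replace=False, p=p)
              degree[targets] += 1.0
              degree[i] += m
          return (degree / degree.max()).reshape(h, w)

      def weighted_sad(image, template, weights):
          """Slide the template; return the weighted-SAD cost map."""
          H, W, _ = image.shape
          h, w, _ = template.shape
          cost = np.full((H - h + 1, W - w + 1), np.inf)
          for y in range(H - h + 1):
              for x in range(W - w + 1):
                  diff = np.abs(image[y:y+h, x:x+w].astype(float) - template).sum(axis=2)
                  cost[y, x] = (weights * diff).sum()
          return cost

      image = rng.integers(0, 256, (64, 64, 3))
      template = image[20:32, 20:32].copy()
      cost = weighted_sad(image, template, ba_pixel_weights(template))
      print(np.unravel_index(cost.argmin(), cost.shape))   # expect (20, 20)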

  2. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images with a proper complex network model and apply that model to template matching.

  3. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models by emulating the variance in the model outputs, thereby accounting for its dependence on the model's input values. The proposed method is applied to a real, complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
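
    A minimal sketch of the variance-emulation idea, under our own assumptions (a toy one-input simulator, scikit-learn GPs standing in for the authors' emulators): fit one Gaussian process to the mean of replicate runs and a second to the log of their sample variance, so the stochastic noise level is allowed to depend on the inputs.

      # Hedged sketch of variance emulation; illustrative toy simulator, not the HIV model.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(1)

      def simulator(x, n_rep=20):
          """Toy stochastic simulator with input-dependent mean and noise."""
          return np.sin(3 * x) + rng.normal(0.0, 0.1 + 0.3 * x**2, size=n_rep)

      X = np.linspace(0, 1, 15)[:, None]
      runs = np.array([simulator(float(x[0])) for x in X])
      mean_y = runs.mean(axis=1)
      log_var_y = np.log(runs.var(axis=1, ddof=1))   # log keeps the GP unconstrained

      kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
      mean_gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, mean_y)
      var_gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, log_var_y)

      x_new = np.array([[0.33]])
      m, s = mean_gp.predict(x_new, return_std=True)
      v = np.exp(var_gp.predict(x_new))        # emulated simulator variance at x_new
      print(m[0], s[0] ** 2 + v[0])            # total variance: emulator + stochastic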

  4. Temporal Sequences Quantify the Contributions of Individual Fixations in Complex Perceptual Matching Tasks

    ERIC Educational Resources Information Center

    Busey, Thomas; Yu, Chen; Wyatte, Dean; Vanderkolk, John

    2013-01-01

    Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after…

  5. Bayesian History Matching of Complex Infectious Disease Models Using Emulation: A Tutorial and a Case Study on HIV in Uganda

    PubMed Central

    Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.

    2015-01-01

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was many orders of magnitude smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
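
    The implausibility calculation that drives each wave of history matching has a standard form in this literature; the code below is our illustrative sketch of one wave, with a placeholder emulator and made-up observation values. Inputs with I(x) <= 3 (the usual three-sigma rule) survive to the next wave, where the emulator is refit on the reduced region; with several outputs, one typically thresholds the maximum implausibility across outputs.

      # Minimal history-matching sketch; the emulator here is a stand-in.
      import numpy as np

      def implausibility(emul_mean, emul_var, z_obs, var_obs, var_disc):
          """I(x) = |z - E[f(x)]| / sqrt(Var_emul + Var_obs + Var_discrepancy)."""
          return np.abs(z_obs - emul_mean) / np.sqrt(emul_var + var_obs + var_disc)

      rng = np.random.default_rng(2)
      X = rng.uniform(0, 1, size=(10_000, 22))   # candidate points in a 22-input space
      emul_mean = X.sum(axis=1)                  # placeholder emulator mean
      emul_var = np.full(len(X), 0.05)           # placeholder emulator variance
      z_obs, var_obs, var_disc = 11.0, 0.1, 0.1  # made-up observation and variances

      I = implausibility(emul_mean, emul_var, z_obs, var_obs, var_disc)
      non_implausible = X[I <= 3.0]              # survivors seed the next wave
      print(f"kept {len(non_implausible)} of {len(X)} candidate inputs")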

  6. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
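
    As a toy illustration of the circuit idea (our simplification, not the authors' model or data): cells receive random feedforward weights from populations coding two task variables, a Hebbian outer-product update strengthens weights from co-active inputs, and mixed selectivity is read off as a nonzero interaction term across the four task conditions.

      # Toy sketch: random feedforward input + Hebbian update -> mixed selectivity.
      # Population sizes, learning rate, and the selectivity proxy are our choices.
      import numpy as np

      rng = np.random.default_rng(3)
      n_in, n_cells = 40, 200
      W = rng.normal(0, 1.0 / np.sqrt(n_in), size=(n_cells, n_in))

      # Task conditions: binary combinations of two variables, coded by input groups.
      conds = np.array([[a] * 20 + [b] * 20 for a in (0.0, 1.0) for b in (0.0, 1.0)])

      for _ in range(50):                          # simple Hebbian updates
          for s in conds:
              r = np.maximum(W @ s, 0.0)           # rectified postsynaptic response
              W += 0.001 * np.outer(r, s)          # dW ~ post * pre
              W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

      R = np.maximum(W @ conds.T, 0.0)             # responses: (n_cells, 4 conditions)
      # Crude mixed-selectivity proxy: response not additive in the two variables.
      interaction = R[:, 3] - R[:, 2] - R[:, 1] + R[:, 0]
      print((np.abs(interaction) > 0.05).mean())   # fraction of "mixed" cells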

  7. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    PubMed Central

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463

  8. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model.

    PubMed

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-square deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library of 1,491 proteins, and four of these scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design, with the aid of the improved ProdaMatch program, is promising for the creation of active sites with high catalytic efficiencies towards target reactions.
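
    The sub-angstrom RMSD check reported above is a standard computation; here is a generic sketch (not ProdaMatch code) that superposes a matched transition-state pose onto the crystal pose with the Kabsch algorithm before measuring RMSD.

      # Generic Kabsch-superposition RMSD utility; the coordinates are synthetic.
      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD between Nx3 coordinate sets after optimal rigid superposition."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(U @ Vt))       # guard against improper rotation
          R = U @ np.diag([1.0, 1.0, d]) @ Vt
          return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

      rng = np.random.default_rng(4)
      crystal = rng.normal(size=(12, 3))
      # A rotated, slightly perturbed copy stands in for a matched transition state.
      theta = 0.3
      rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                      [np.sin(theta),  np.cos(theta), 0],
                      [0, 0, 1]])
      matched = crystal @ rot.T + rng.normal(scale=0.05, size=(12, 3))
      print(kabsch_rmsd(matched, crystal))         # well under 1.0 for a good match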

  9. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model

    PubMed Central

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-square deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library of 1,491 proteins, and four of these scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design, with the aid of the improved ProdaMatch program, is promising for the creation of active sites with high catalytic efficiencies towards target reactions. PMID:27243223

  10. Action detection by double hierarchical multi-structure space-time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To handle the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view applications. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance; compared with other state-of-the-art algorithms, DMSM achieves superior results.

  11. Action detection by double hierarchical multi-structure space–time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-06-01

    To handle the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view applications. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance; compared with other state-of-the-art algorithms, DMSM achieves superior results.

  12. SDIA: A dynamic situation driven information fusion algorithm for cloud environment

    NASA Astrophysics Data System (ADS)

    Guo, Shuhang; Wang, Tong; Wang, Jian

    2017-09-01

    Information fusion is an important issue in the information integration domain. In order to create a broadly applicable information fusion technology for complex and diverse situations, a new information fusion algorithm is proposed. First, a fuzzy evaluation model of tag utility is proposed that can be used to compute tag entropy. Second, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Third, similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Then, in order to reduce the time complexity of the tree-compatible matching algorithm, a fast ordered tree matching algorithm based on node entropy is proposed, which supports information fusion by ubiquitous situation. Because the algorithm evolves from graph-theoretic disordered tree matching, it can improve the recall and precision of information fusion in the situation. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, demonstrating the novelty and applicability of the algorithm.
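
    The tag-entropy step can be made concrete with a small sketch; the utility model below (Shannon entropy of each tag's observed values, used to order nodes for matching) is our simplification of the abstract's fuzzy evaluation model.

      # Hedged sketch: rank situation tags by Shannon entropy of their usage.
      import math
      from collections import Counter

      def tag_entropy(observations):
          """Shannon entropy (bits) of the values observed for one tag."""
          counts = Counter(observations)
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      situation_tags = {
          "location": ["office", "office", "home", "cafe"],
          "device":   ["phone", "phone", "phone", "phone"],   # uninformative tag
      }
      ranked = sorted(situation_tags, key=lambda t: tag_entropy(situation_tags[t]),
                      reverse=True)
      print(ranked)   # tags with higher entropy first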

  13. Bidirectional selection between two classes in complex social networks.

    PubMed

    Zhou, Bin; He, Zhe; Jiang, Luo-Luo; Wang, Nian-Xin; Wang, Bing-Hong

    2014-12-19

    Bidirectional selection between two classes emerges widely in social life, for example in commercial trading and mate choice. Until now, discussion of bidirectional selection in structured human societies has been quite limited. We demonstrate theoretically that the rate of successful matching is greatly affected by individuals' neighborhoods in social networks, regardless of network type. Furthermore, a high average degree is found to contribute to increasing rates of successful matches. The matching performance in different types of networks has been quantitatively investigated, revealing that small-world networks reinforce the matching rate more than scale-free networks at a given average degree. In addition, our analysis is consistent with the modeling results, providing a theoretical understanding of the mechanisms underlying matching in complex networks.
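
    A toy reconstruction of the setting (our assumptions throughout: random attractiveness scores, mutual-choice matching restricted to network neighbors) shows how the matching rate can be probed as a function of average degree:

      # Toy bidirectional-selection simulation on small-world networks.
      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(5)

      def matching_rate(G, rounds=50):
          side = {n: n % 2 for n in G}              # class A (0) vs class B (1)
          score = {n: rng.random() for n in G}      # attractiveness, our stand-in
          matched = set()
          for _ in range(rounds):
              choice = {}
              for n in G:
                  if n in matched:
                      continue
                  cands = [m for m in G[n] if side[m] != side[n] and m not in matched]
                  if cands:
                      choice[n] = max(cands, key=score.get)
              for n, m in choice.items():
                  if choice.get(m) == n:            # mutual selection -> match
                      matched.update((n, m))
          return len(matched) / G.number_of_nodes()

      for k in (4, 8, 16):                          # increasing average degree
          G = nx.watts_strogatz_graph(200, k, 0.1, seed=1)
          print(k, round(matching_rate(G), 3))      # rate tends to grow with degree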

  14. Unconditional or Conditional Logistic Regression Model for Age-Matched Case-Control Data?

    PubMed

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case-control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case-control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data when the matching between cases and controls is not unique, and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is an appropriate method for such data. To address the hypothesis, we compare unconditional and conditional logistic regression models in terms of precision of estimates and hypothesis testing, using simulated matched case-control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to the matching distortion, whereby the matching process makes cases and controls similar not only on the matching variables but also on exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored, with negligible loss in testing and estimation, if the distributions of matching variables are not extremely different between cases and controls.
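
    Both models are easy to compare on simulated age-matched data; a hedged sketch follows using statsmodels (ConditionalLogit lives in statsmodels.discrete.conditional_models in recent releases; the data-generating choices are ours). The conditional model absorbs the matched sets as groups, while the unconditional model simply adjusts for the matching variable.

      # Sketch: conditional vs. unconditional logistic regression on matched pairs.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.conditional_models import ConditionalLogit

      rng = np.random.default_rng(6)
      n_sets = 300                                    # 1 case : 1 control per set
      age = np.repeat(rng.uniform(40, 80, n_sets), 2) # matching variable
      case = np.tile([1, 0], n_sets)
      # Exposure depends on age (confounding) and on case status (true log-OR 0.7).
      exposure = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.7 * case))))
      sets = np.repeat(np.arange(n_sets), 2)

      # Conditional model: matched sets enter as groups, no intercept needed.
      clogit = ConditionalLogit(case, exposure[:, None], groups=sets).fit()
      # Unconditional model: adjust for the matching variable directly.
      X = sm.add_constant(np.column_stack([exposure, age]))
      ulogit = sm.Logit(case, X).fit(disp=0)
      print(clogit.params[0], ulogit.params[1])       # both near the true log-OR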

  15. Unconditional or Conditional Logistic Regression Model for Age-Matched Case–Control Data?

    PubMed Central

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case–control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case–control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data when the matching between cases and controls is not unique, and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is an appropriate method for such data. To address the hypothesis, we compare unconditional and conditional logistic regression models in terms of precision of estimates and hypothesis testing, using simulated matched case–control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to the matching distortion, whereby the matching process makes cases and controls similar not only on the matching variables but also on exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored, with negligible loss in testing and estimation, if the distributions of matching variables are not extremely different between cases and controls. PMID:29552553

  16. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings. 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To mitigate sensor noise and remain compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into the 3D space to learn the geometric feature representation. Finally, training labels are generated automatically for deep learning from an existing RGB-D reconstruction algorithm that provides the same global key matching descriptors. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse casting geometries where initial matching fails, the 3D object can be reconstructed robustly by training the key descriptors. Our method performs robust 3D reconstruction for complex automobile castings.
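
    The TDF input described above can be sketched in a few lines; the grid size and truncation distance below follow common practice for this kind of descriptor and are not necessarily the paper's values.

      # Hedged sketch of a TDF patch: a voxel grid around a key point whose values
      # are truncated, normalized distances to the nearest surface point.
      import numpy as np
      from scipy.spatial import cKDTree

      def tdf_patch(surface_pts, keypoint, voxel=0.01, grid=15, trunc=0.05):
          """Truncated distance field in a grid^3 neighborhood of a key point."""
          half = grid // 2
          offsets = np.arange(-half, half + 1) * voxel
          zz, yy, xx = np.meshgrid(offsets, offsets, offsets, indexing="ij")
          centers = keypoint + np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)
          dist, _ = cKDTree(surface_pts).query(centers)
          tdf = 1.0 - np.minimum(dist, trunc) / trunc   # 1 at surface, 0 beyond trunc
          return tdf.reshape(grid, grid, grid).astype(np.float32)

      rng = np.random.default_rng(7)
      pts = rng.normal(size=(5000, 3)) * 0.1            # stand-in casting surface
      patch = tdf_patch(pts, keypoint=pts[0])
      print(patch.shape, patch.max())                   # (15, 15, 15) input for a 3D CNN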

  17. Complex networks untangle competitive advantage in Australian football

    NASA Astrophysics Data System (ADS)

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season; modelling the passes between players as weighted, directed edges. We show that analysis of these measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R² and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
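
    The network measures discussed here are straightforward to compute with networkx; in the sketch below the pass counts are invented, edge weights are passes, and pass evenness is proxied by the normalized entropy of weighted degree (our choice of evenness measure).

      # Sketch of passing-network measures; the pass data are invented.
      import networkx as nx
      import numpy as np

      passes = [("A", "B", 12), ("B", "C", 9), ("C", "A", 7), ("B", "D", 5),
                ("D", "A", 4), ("C", "D", 6), ("D", "C", 3)]
      G = nx.DiGraph()
      G.add_weighted_edges_from(passes)

      # Mean betweenness centrality; distance = inverse pass count, so frequent
      # links count as "short".
      inv = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
      nx.set_edge_attributes(G, inv, "inv_weight")
      bc = nx.betweenness_centrality(G, weight="inv_weight")
      print(np.mean(list(bc.values())))

      # Evenness of pass involvement via normalized entropy of weighted degree:
      # values near 1 mean passes are spread evenly across players.
      deg = np.array([G.degree(n, weight="weight") for n in G], dtype=float)
      p = deg / deg.sum()
      print(-(p * np.log(p)).sum() / np.log(len(p)))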

  18. Complex networks untangle competitive advantage in Australian football.

    PubMed

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season; modelling the passes between players as weighted, directed edges. We show that analysis of these measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R² and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.

  19. A Target Aware Texture Mapping for Sculpture Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zhang, F.; Huang, X.; Li, D.; Zhu, Y.

    2017-08-01

    In this paper, we propose a target-aware image-to-model registration method that uses silhouettes as matching cues. The target sculpture can be detected automatically in images with complex natural backgrounds with the assistance of 3D geometric data. The silhouette can then be extracted automatically and applied to image-to-model matching. Because the user does not need to draw the target area manually, the time required for precise image-to-model matching is greatly reduced. To extend this method, we also improved the silhouette matching algorithm to support conditional silhouette matching. Two experiments, using a stone lion sculpture of the Ming Dynasty and a portable relic in a museum, are presented to evaluate the proposed method. The method has since been extended and developed into mature software applied in many cultural heritage documentation projects.

  20. Investigations of Tissue-Level Mechanisms of Primary Blast Injury Through Modeling, Simulation, Neuroimaging and Neuropathological Studies

    DTIC Science & Technology

    2012-07-10

    materials used, the complexity of the human anatomy, manufacturing limitations, and analysis capability prohibits exactly matching surrogate material...upper and lower bounds for possible loading behaviour. Although it is impossible to exactly match the human anatomy according to mechanical

  1. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.

  2. Model-order reduction of lumped parameter systems via fractional calculus

    NASA Astrophysics Data System (ADS)

    Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio

    2018-04-01

    This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.

  3. Estimating material viscoelastic properties based on surface wave measurements: A comparison of techniques and modeling assumptions

    PubMed Central

    Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.

    2011-01-01

    Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067

  4. A 3D model of polarized dust emission in the Milky Way

    NASA Astrophysics Data System (ADS)

    Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques

    2018-05-01

    We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

  5. Probabilistic seismic history matching using binary images

    NASA Astrophysics Data System (ADS)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching any observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included in history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude or pressure, and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative to avoid these procedures is using binary images in SHM, as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history-matching runs were conducted, using only well data, well and 4DS data, and only 4DS data. The methodology is very flexible and successfully incorporated binary images into the seismic objective functions. Results showed good convergence of the method within a few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed on 4DS data. The use of binary images in SHM is relatively new with few examples in the literature. This work enriches this discussion by presenting a new application to match pressure in a reservoir segment with complex pressure behavior.
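
    A binary-image objective of the kind described can be as simple as a Jaccard distance between observed and simulated anomaly masks, so that only the shape of the pressurized zone matters; the thresholds and maps below are illustrative, not the Norne data.

      # Sketch of a binary-image objective for seismic history matching.
      import numpy as np

      def jaccard_distance(obs_mask, sim_mask):
          inter = np.logical_and(obs_mask, sim_mask).sum()
          union = np.logical_or(obs_mask, sim_mask).sum()
          return 1.0 - inter / union if union else 0.0

      rng = np.random.default_rng(8)
      field = rng.normal(size=(50, 50))
      obs_mask = field > 1.0                            # observed 4D softening anomaly
      sim_attr = field + rng.normal(scale=0.3, size=field.shape)
      sim_mask = sim_attr > 1.0                         # simulated pressurized zone
      print(jaccard_distance(obs_mask, sim_mask))       # objective to minimize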

  6. A heuristic for efficient data distribution management in distributed simulation

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Guha, Ratan K.

    2005-05-01

    In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, RTI establishes communication between the publisher and the subscriber and subsequently routes updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem under a connection-graph abstraction in which federates represent vertices and update/subscribe relations represent edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
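
    The region test at the heart of DDM reduces to interval overlap on each routing dimension; the sketch below (our simplification of the clique-cover formulation) pairs publishers with the subscribers whose regions overlap theirs.

      # Sketch of the DDM region test with axis-aligned extents; federate names
      # and region values are invented.
      def overlaps(r1, r2):
          """Axis-aligned extents overlap iff they overlap on every dimension."""
          return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

      update_regions = {"fed1": [(0, 5), (0, 5)], "fed2": [(8, 12), (8, 12)]}
      sub_regions = {"fed3": [(3, 9), (2, 6)], "fed4": [(10, 14), (9, 13)]}

      groups = {}                                   # publisher -> subscribers
      for pub, ur in update_regions.items():
          for sub, sr in sub_regions.items():
              if overlaps(ur, sr):                  # overlap -> shared multicast group
                  groups.setdefault(pub, []).append(sub)
      print(groups)   # {'fed1': ['fed3'], 'fed2': ['fed4']}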

  7. Insight into the Structure of Light Harvesting Complex II and its Stabilization in Detergent Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Mateus B; Smolensky, Dmitriy; Heller, William T

    2009-01-01

    The structure of spinach light-harvesting complex II (LHC II), stabilized in a solution of the detergent n-octyl-β-d-glucoside (BOG), was investigated by small-angle neutron scattering (SANS). Physicochemical characterization of the isolated complex indicated that it was pure (>95%) and also in its native trimeric state. SANS with contrast variation was used to investigate the properties of the protein-detergent complex at three different H₂O/D₂O contrast match points, enabling the scattering properties of the protein and detergent to be investigated independently. The topological shape of LHC II, determined using ab initio shape restoration methods from the SANS data at the contrast match point of BOG, was consistent with the X-ray crystallographic structure of LHC II (Liu et al. Nature 2004 428, 287-292). The interactions of the protein and detergent were investigated at the contrast match point for the protein and also in 100% D₂O. The data suggested that the BOG micelle structure was altered by its interaction with LHC II, but large aggregate structures were not formed. Indirect Fourier transform analysis of the LHC II/BOG scattering curves showed that the increase in the maximum dimension of the protein-detergent complex was consistent with the presence of a monolayer of detergent surrounding the protein. A model of the LHC II/BOG complex was generated to interpret the measurements made in 100% D₂O. This model adequately reproduced the overall size of the LHC II/BOG complex, but demonstrated that the detergent does not have a highly regular shape that surrounds the hydrophobic periphery of LHC II. In addition to demonstrating that natively structured LHC II can be produced for functional characterization and for use in artificial solar energy applications, the analysis and modeling approaches described here can be used for characterizing detergent-associated α-helical transmembrane proteins.

  8. How much complexity is warranted in a rainfall-runoff model?

    Treesearch

    A.J. Jakeman; G.M. Hornberger

    1993-01-01

    Development of mathematical models relating the precipitation incident upon a catchment to the streamflow emanating from the catchment has been a major focus of surface water hydrology for decades. Generally, values for parameters in such models must be selected so that runoff calculated from the model "matches" recorded runoff from some historical period....

  9. Data-driven planning of distributed energy resources amidst socio-technical complexities

    NASA Astrophysics Data System (ADS)

    Jain, Rishee K.; Qin, Junjie; Rajagopal, Ram

    2017-08-01

    New distributed energy resources (DER) are rapidly replacing centralized power generation due to their environmental, economic and resiliency benefits. Previous analyses of DER systems have been limited in their ability to account for socio-technical complexities, such as intermittent supply, heterogeneous demand and balance-of-system cost dynamics. Here we develop ReMatch, an interdisciplinary modelling framework, spanning engineering, consumer behaviour and data science, and apply it to 10,000 consumers in California, USA. Our results show that deploying DER would yield nearly a 50% reduction in the levelized cost of electricity (LCOE) over the status quo even after accounting for socio-technical complexities. We abstract a detailed matching of consumers to DER infrastructure from our results and discuss how this matching can facilitate the development of smart and targeted renewable energy policies, programmes and incentives. Our findings point to the large-scale economic and technical feasibility of DER and underscore the pertinent role DER can play in achieving sustainable energy goals.

  10. Impact of intraoperative factor concentrates on blood product transfusions during orthotopic liver transplantation.

    PubMed

    Colavecchia, A Carmine; Cohen, David A; Harris, Jesse E; Thomas, Jeena M; Lindberg, Scott; Leveque, Christopher; Salazar, Eric

    2017-12-01

    Major bleeding in orthotopic liver transplantation is associated with significant morbidity and mortality. Limited literature exists regarding comparative effectiveness of prothrombin complex concentrate and fibrinogen concentrate during orthotopic liver transplantation on blood product utilization. This retrospective, single-institution study evaluated the impact of prothrombin complex concentrate and fibrinogen concentrate on blood product utilization during orthotopic liver transplantation from December 2013 to April 2016. This study included patients age 18 years or older and excluded patients who received simultaneous heart or lung transplantation or did not meet documentation criteria. A propensity score matching technique was used to match patients who were exposed to prothrombin complex concentrate with unexposed patients, at a 2 to 1 ratio, to control for selection bias. During this study, 212 patients received orthotopic liver transplantation with 39 prothrombin complex concentrate exposures. The matched study population included 39 patients who were exposed to prothrombin complex concentrate and 78 unexposed patients. Overall, 84.6% of patients who were exposed to prothrombin complex concentrate also received concomitant fibrinogen concentrate, whereas only 2% of patients in the control group received fibrinogen concentrate. After propensity score matching, no other factors that were included in the model differed significantly or had a standardized mean difference of 0.11 or greater. There was no statistical difference in the utilization of red blood cells or fresh frozen plasma for the exposed group versus the unexposed group after matching (mean ± standard deviation: red blood cell units, 12.4 ± 8.0 units vs. 9.7 ± 5.6 units [p = 0.058]; fresh-frozen plasma units, 10.0 ± 6.3 vs. 12.7 ± 9.7 units [p = 0.119], respectively). The intraoperative use of prothrombin complex concentrate and fibrinogen concentrate during orthotopic liver transplantation did not reduce intraoperative blood product requirements at a single institution.
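
    The 2:1 propensity match described above can be sketched as follows (covariates, caliper-free greedy matching, and sample sizes are our illustrative choices, not the study's protocol): estimate a logistic model of exposure, then give each exposed patient its two nearest unexposed neighbors on the propensity score.

      # Sketch of greedy 2:1 propensity-score matching; the data are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)
      n = 212
      X = rng.normal(size=(n, 4))                       # baseline covariates
      exposed = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

      ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]
      treated = np.where(exposed == 1)[0]
      controls = np.where(exposed == 0)[0]

      matches, used = {}, set()
      for t in treated:
          order = controls[np.argsort(np.abs(ps[controls] - ps[t]))]
          picks = [c for c in order if c not in used][:2]   # 2 controls per case
          used.update(picks)
          matches[t] = picks
      print(len(matches), sum(len(v) for v in matches.values()))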

  11. Helping Students Select Appropriately Challenging Text: Application to a Test of Second Language Reading Ability. Research Report. ETS RR-17-33

    ERIC Educational Resources Information Center

    Sheehan, Kathleen M.

    2017-01-01

    A model-based approach for matching language learners to texts of appropriate difficulty is described. Results are communicated to test takers via a targeted reading range expressed on the reporting scale of an automated text complexity measurement tool (ATCMT). Test takers can use this feedback to select reading materials that are well matched to…

  12. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of cost aggregation in stereo matching significantly. Unlike previous methods, which tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
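
    For context, the baseline that such methods improve on looks like this (a standard SAD cost volume with box-filter aggregation; window size and disparity range are our choices). Note the repeated filtering across the disparity range, which is exactly the redundancy the histogram reformulation targets.

      # Baseline cost aggregation: SAD cost volume + box filter per disparity slice.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def aggregated_costs(left, right, max_disp=16, win=9):
          H, W = left.shape
          volume = np.full((max_disp, H, W), 255.0)
          for d in range(max_disp):
              diff = np.abs(left[:, d:] - right[:, :W - d])     # per-pixel matching cost
              volume[d, :, d:] = uniform_filter(diff, size=win) # window aggregation
          return volume.argmin(axis=0)                          # winner-take-all disparity

      rng = np.random.default_rng(10)
      right = rng.random((60, 80)) * 255
      left = np.roll(right, 5, axis=1)                  # true disparity of 5 pixels
      disp = aggregated_costs(left, right)
      print(np.median(disp[:, 10:]))                    # ~5.0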

  13. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  14. Diffusion approximation of the radiative-conductive heat transfer model with Fresnel matching conditions

    NASA Astrophysics Data System (ADS)

    Chebotarev, Alexander Yu.; Grenkin, Gleb V.; Kovtanyuk, Andrey E.; Botkin, Nikolai D.; Hoffmann, Karl-Heinz

    2018-04-01

    The paper is concerned with a problem of diffraction type. The study starts with equations of complex (radiative and conductive) heat transfer in a multicomponent domain with Fresnel matching conditions at the interfaces. Applying the diffusion, P1, approximation yields a pair of coupled nonlinear PDEs describing the radiation intensity and temperature for each component of the domain. Matching conditions for these PDEs, imposed at the interfaces between the domain components, are derived. The unique solvability of the obtained problem is proven, and numerical experiments are conducted.

  15. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term “cross-matching,” represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception. PMID:25360107
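
    The essence of cross-matching can be shown in a few lines: half-wave rectify the two eyes' signals into ON and OFF channels before correlating, and the response to anticorrelated patterns collapses rather than inverting. The 1-D binary-dot stimuli below are our simplification of an RDS.

      # Sketch: plain cross-correlation vs. "cross-matching" with half-wave rectification.
      import numpy as np

      rng = np.random.default_rng(11)
      pos = lambda s: np.maximum(s, 0.0)               # half-wave rectification

      def cross_matching(left, right):
          """Correlate ON with ON and OFF with OFF channels only."""
          return np.mean(pos(left) * pos(right) + pos(-left) * pos(-right))

      left = rng.choice([-1.0, 1.0], size=2000)        # bright/dark dots
      right_corr = left.copy()                         # correlated RDS
      right_anti = -left                               # anticorrelated RDS

      print(np.mean(left * right_corr), np.mean(left * right_anti))   # 1.0, -1.0
      print(cross_matching(left, right_corr), cross_matching(left, right_anti))  # ~1.0, 0.0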

  16. Effect of Radiotherapy Planning Complexity on Survival of Elderly Patients With Unresected Localized Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Chang H.; Bonomi, Marcelo; Cesaretti, Jamie

    2011-11-01

    Purpose: To evaluate whether complex radiotherapy (RT) planning was associated with improved outcomes in a cohort of elderly patients with unresected Stage I-II non-small-cell lung cancer (NSCLC). Methods and Materials: Using the Surveillance, Epidemiology, and End Results registry linked to Medicare claims, we identified 1998 patients aged >65 years with histologically confirmed, unresected stage I-II NSCLC. Patients were classified into an intermediate or complex RT planning group using Medicare physician codes. To address potential selection bias, we used propensity score modeling. Survival of patients who received intermediate and complex simulation was compared using Cox regression models adjusting for propensity scores and in a stratified and matched analysis according to propensity scores. Results: Overall, 25% of patients received complex RT planning. Complex RT planning was associated with better overall (hazard ratio 0.84; 95% confidence interval, 0.75-0.95) and lung cancer-specific (hazard ratio 0.81; 95% confidence interval, 0.71-0.93) survival after controlling for propensity scores. Similarly, stratified and matched analyses showed better overall and lung cancer-specific survival of patients treated with complex RT planning. Conclusions: The use of complex RT planning is associated with improved survival among elderly patients with unresected Stage I-II NSCLC. These findings should be validated in prospective randomized controlled trials.

  17. Matching consumer feeding behaviours and resource traits: a fourth-corner problem in food-web theory.

    PubMed

    Monteiro, Angelo Barbosa; Faria, Lucas Del Bianco

    2018-06-06

    For decades, food web theory has proposed phenomenological models for the underlying structure of ecological networks. Generally, these models rely on latent niche variables that match the feeding behaviour of consumers with their resource traits. In this paper, we used a comprehensive database to evaluate different hypotheses on the best dependency structure of trait-matching patterns between consumers and resource traits. We found that consumer feeding behaviours had complex interactions with resource traits; however, few dimensions (i.e. latent variables) could reproduce the trait-matching patterns. We discuss our findings in the light of three food web models designed to reproduce the multidimensionality of food web data; additionally, we discuss how using species traits clarifies food webs beyond species pairwise interactions and enables studies to infer ecological generality at larger scales, despite potential taxonomic differences, variations in ecological conditions and differences in species abundance between communities.

  18. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA replication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.

  19. Jealousy Graphs: Structure and Complexity of Decentralized Stable Matching

    DTIC Science & Technology

    2013-01-01

    Only fragments of the abstract survive in this record's form-layout text: "The stable matching ... market. Using this structure, we are able to provide a finer analysis of the complexity of a subclass of decentralized matching markets."

  20. Stereo Sound Field Controller Design Using Partial Model Matching on the Frequency Domain

    NASA Astrophysics Data System (ADS)

    Kumon, Makoto; Miike, Katsuhiro; Eguchi, Kazuki; Mizumoto, Ikuro; Iwai, Zenta

    The objective of sound field control is to make the acoustic characteristics of a listening room close to those of the desired system. Conventional methods apply feedforward controllers, such as digital filters, to achieve this objective. However, feedback controllers are also necessary in order to attenuate noise or to compensate for the uncertainty of the acoustic characteristics of the listening room. Since acoustic characteristics are well modeled in the frequency domain, it is efficient to design controllers with respect to frequency responses, but it is difficult to design a multi-input multi-output (MIMO) control system over a wide frequency range. In the present study, a partial model matching method on the frequency domain was adopted because this method requires only sampled data, rather than complex mathematical models of the plant, in order to design controllers for MIMO systems. The partial model matching method was applied to design two-degree-of-freedom controllers for acoustic equalization and noise reduction. Experiments demonstrated the effectiveness of the proposed method.

  1. Complexity matching in dyadic conversation.

    PubMed

    Abney, Drew H; Paxton, Alexandra; Dale, Rick; Kello, Christopher T

    2014-12-01

    Recent studies of dyadic interaction have examined phenomena of synchronization, entrainment, alignment, and convergence. All these forms of behavioral matching have been hypothesized to play a supportive role in establishing coordination and common ground between interlocutors. In the present study, evidence is found for a new kind of coordination termed complexity matching. Temporal dynamics in conversational speech signals were analyzed through time series of acoustic onset events. Timing in periods of acoustic energy was found to exhibit behavioral matching that reflects complementary timing in turn-taking. In addition, acoustic onset times were found to exhibit power law clustering across a range of timescales, and these power law functions were found to exhibit complexity matching that is distinct from behavioral matching. Complexity matching is discussed in terms of interactive alignment and other theoretical principles that lead to new hypotheses about information exchange in dyadic conversation and interaction in general. PsycINFO Database Record (c) 2014 APA, all rights reserved.
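
    The power-law clustering reported above is the kind of property typically quantified with detrended fluctuation analysis (DFA). The sketch below is a textbook DFA estimator for the scaling exponent of an event-interval series, offered as a generic illustration rather than the authors' actual pipeline; the scale choices are arbitrary.

        import numpy as np

        def dfa_exponent(x, n_scales=12):
            """Detrended fluctuation analysis: return the scaling exponent.

            x : 1-D series (e.g., inter-onset intervals). The exponent is
            near 0.5 for white noise and near 1.0 for 1/f fluctuations.
            """
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())              # integrated profile
            scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4),
                                           n_scales).astype(int))
            flucts = []
            for s in scales:
                n_seg = len(y) // s
                segs = y[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                # RMS deviation from a linear trend within each window
                resid = [seg - np.polyval(np.polyfit(t, seg, 1), t)
                         for seg in segs]
                flucts.append(np.sqrt(np.mean(np.square(resid))))
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        # Each interlocutor's interval series yields one exponent; complexity
        # matching is then assessed by how strongly the two exponents covary.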

  2. What’s in a game? A systems approach to enhancing performance analysis in football

    PubMed Central

    2017-01-01

    Purpose Performance analysis (PA) in football is considered to be an integral component of understanding the requirements for optimal performance. Despite vast amounts of research in this area, key gaps remain, including what comprises PA in football, and methods to minimise research-practitioner gaps. The aim of this study was to develop a model of the football match system in order to better describe and understand the components of football performance. Such a model could inform the design of new PA methods. Method Eight elite-level football subject-matter experts (SMEs) participated in two workshops to develop a systems model of the football match system. The model was developed using a first-of-its-kind application of Cognitive Work Analysis (CWA) in football. CWA has been used in many other non-sporting domains to analyse and understand complex systems. Result Using CWA, a model of the football match ‘system’ was developed. The model enabled identification of several PA measures not currently utilised, including communication between team members, adaptability of teams, playing at the appropriate tempo, as well as attacking and defending related measures. Conclusion The results indicate that football is characteristic of a complex sociotechnical system, and revealed potential new and unique PA measures regarded as important by SMEs, yet not currently measured. Importantly, these results have identified a gap between the current PA research and the information that is meaningful to football coaches and practitioners. PMID:28212392

  3. Nonnormality and Divergence in Posttreatment Alcohol Use

    PubMed Central

    Witkiewitz, Katie; van der Maas, Han L. J.; Hufford, Michael R.; Marlatt, G. Alan

    2007-01-01

    Alcohol lapses are the modal outcome following treatment for alcohol use disorders, yet many alcohol researchers have encountered limited success in the prediction and prevention of relapse. One hypothesis is that lapses are unpredictable, but another possibility is that the complexity of the relapse process is not captured by traditional statistical methods. Data from Project Matching Alcohol Treatments to Client Heterogeneity (Project MATCH), a multisite alcohol treatment study, were reanalyzed with 2 statistical methodologies: catastrophe and 2-part growth mixture modeling. Drawing on previous investigations of self-efficacy as a dynamic predictor of relapse, the current study revisits the self-efficacy matching hypothesis, which was not statistically supported in Project MATCH. Results from both the catastrophe and growth mixture analyses demonstrated a dynamic relationship between self-efficacy and drinking outcomes. The growth mixture analyses provided evidence in support of the original matching hypothesis: Individuals with lower self-efficacy who received cognitive behavior therapy drank far less frequently than did those with low self-efficacy who received motivational therapy. These results highlight the dynamical nature of the relapse process and the importance of using methodologies that accommodate this complexity when evaluating treatment outcomes. PMID:17516769

  4. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    PubMed Central

    Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario

    2015-01-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381

  5. Network structure of production

    PubMed Central

    Atalay, Enghin; Hortaçsu, Ali; Roberts, James; Syverson, Chad

    2011-01-01

    Complex social networks have received increasing attention from researchers. Recent work has focused on mechanisms that produce scale-free networks. We theoretically and empirically characterize the buyer–supplier network of the US economy and find that purely scale-free models have trouble matching key attributes of the network. We construct an alternative model that incorporates realistic features of firms’ buyer–supplier relationships and estimate the model’s parameters using microdata on firms’ self-reported customers. This alternative framework is better able to match the attributes of the actual economic network and aids in further understanding several important economic phenomena. PMID:21402924

  6. Matching multiple rigid domain decompositions of proteins

    PubMed Central

    Flynn, Emily; Streinu, Ileana

    2017-01-01

    We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented into the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examining the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein’s slow motions near the native state. PMID:28141528

  7. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is particularly crucial to realize a proper maintenance decision. Targeting the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. To match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed from parameterized function models to form the double dictionary. A novel matching pursuit method is then proposed based on this double dictionary. For rolling bearing vibration signals with different fault sizes, the signals are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to realize the fault extent evaluation. Applications of this method to experimental fault signals from bearing outer and inner races with different degrees of injury show that the proposed method can effectively realize fault extent evaluation.
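
    For reference, an LZC index of the kind used here can be computed in its classic LZ76 form by counting new substrings while scanning a binarized signal. The sketch below makes two common but assumed choices (median binarization and n/log2(n) normalization) and is not necessarily the paper's exact index.

        import numpy as np

        def lempel_ziv_complexity(signal):
            """Normalized Lempel-Ziv (LZ76) complexity of a 1-D signal.

            The signal is binarized about its median and scanned left to
            right, counting phrases not seen before; the raw count is
            normalized by n / log2(n) so that values are comparable
            across signal lengths.
            """
            med = np.median(signal)
            s = ''.join('1' if v > med else '0' for v in signal)
            n = len(s)
            i, k, c = 0, 1, 1
            while i + k <= n:
                # Extend the current phrase while it already occurs earlier
                if s[i:i + k] in s[:i + k - 1]:
                    k += 1
                else:
                    c += 1       # a new phrase starts here
                    i += k
                    k = 1
            return c * np.log2(n) / n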

  8. Multiresponse modeling of variably saturated flow and isotope tracer transport for a hillslope experiment at the Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter

    2016-10-01

    This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.

  9. The planum temporale as a computational hub.

    PubMed

    Griffiths, Timothy D; Warren, Jason D

    2002-07-01

    It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.

  10. Using Simple and Complex Growth Models to Articulate Developmental Change: Matching Theory to Method

    ERIC Educational Resources Information Center

    Ram, Nilam; Grimm, Kevin

    2007-01-01

    Growth curve modeling has become a mainstay in the study of development. In this article we review some of the flexibility provided by this technique for describing and testing hypotheses about: (1) intraindividual change across multiple occasions of measurement, and (2) interindividual differences in intraindividual change. Through empirical…

  11. Matching-centrality decomposition and the forecasting of new links in networks.

    PubMed

    Rohr, Rudolf P; Naisbit, Russell E; Mazza, Christian; Bersier, Louis-Félix

    2016-02-10

    Networks play a prominent role in the study of complex systems of interacting entities in biology, sociology, and economics. Despite this diversity, we demonstrate here that a statistical model decomposing networks into matching and centrality components provides a comprehensive and unifying quantification of their architecture. The matching term quantifies the assortative structure in which node makes links with which other node, whereas the centrality term quantifies the number of links that nodes make. We show, for a diverse set of networks, that this decomposition can provide a tight fit to observed networks. Then we provide three applications. First, we show that the model allows very accurate prediction of missing links in partially known networks. Second, when node characteristics are known, we show how the matching-centrality decomposition can be related to this external information. Consequently, it offers us a simple and versatile tool to explore how node characteristics explain network architecture. Finally, we demonstrate the efficiency and flexibility of the model to forecast the links that a novel node would create if it were to join an existing network. © 2016 The Author(s).
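
    As a schematic of such a decomposition (our notation and functional form, not necessarily the authors' exact specification), the log-odds of a link between nodes i and j can be written as

        \operatorname{logit} P(A_{ij} = 1) \;=\; \mu + c_i + c_j + \sum_{d=1}^{D} \lambda_d \, m_{id} \, m_{jd},

    where the c_i are centrality terms (a node's overall propensity to form links), the latent vectors (m_{i1}, \dots, m_{iD}) are matching traits governing who links with whom, and the parameters are fit by maximizing the likelihood of the observed adjacency matrix.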

  12. A New Model for a Carpool Matching Service.

    PubMed

    Xia, Jizhe; Curtin, Kevin M; Li, Weihong; Zhao, Yonglong

    2015-01-01

    Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
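
    The combinatorial core of carpool matching can be illustrated with the classical assignment problem: pair riders with drivers at minimum total cost. The paper's model also handles routes and team formation; the sketch below shows only a simplified one-to-one matching step, with made-up coordinates, using scipy's optimal assignment solver.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)
        drivers = rng.uniform(0, 10, size=(5, 2))   # hypothetical (x, y) locations
        riders = rng.uniform(0, 10, size=(5, 2))

        # Cost matrix: detour distance for driver i to pick up rider j.
        cost = np.linalg.norm(drivers[:, None, :] - riders[None, :, :], axis=2)

        rows, cols = linear_sum_assignment(cost)    # optimal one-to-one pairing
        for i, j in zip(rows, cols):
            print(f"driver {i} <- rider {j} (distance {cost[i, j]:.2f})")
        print("total distance:", cost[rows, cols].sum())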

  13. A New Model for a Carpool Matching Service

    PubMed Central

    Xia, Jizhe; Curtin, Kevin M.; Li, Weihong; Zhao, Yonglong

    2015-01-01

    Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined. PMID:26125552

  14. Inhibitory motor control based on complex stopping goals relies on the same brain network as simple stopping

    PubMed Central

    Wessel, Jan R.; Aron, Adam R.

    2014-01-01

    Much research has modeled action-stopping using the stop-signal task (SST), in which an impending response has to be stopped when an explicit stop-signal occurs. A limitation of the SST is that real-world action-stopping rarely involves explicit stop-signals. Instead, the stopping-system engages when environmental features match more complex stopping goals. For example, when stepping into the street, one monitors path, velocity, size, and types of objects; and only stops if there is a vehicle approaching. Here, we developed a task in which participants compared the visual features of a multidimensional go-stimulus to a complex stopping-template, and stopped their go-response if all features matched the template. We used independent component analysis of EEG data to show that the same motor inhibition brain network that explains action-stopping in the SST also implements motor inhibition in the complex-stopping task. Furthermore, we found that partial feature overlap between go-stimulus and stopping-template led to motor slowing, which also corresponded with greater stopping-network activity. This shows that the same brain system for action-stopping to explicit stop-signals is recruited to slow or stop behavior when stimuli match a complex stopping goal. The results imply a generalizability of the brain’s network for simple action-stopping to more ecologically valid scenarios. PMID:25270603

  15. Complexity Matching Effects in Bimanual and Interpersonal Syncopated Finger Tapping

    PubMed Central

    Coey, Charles A.; Washburn, Auriel; Hassebrock, Justin; Richardson, Michael J.

    2016-01-01

    The current study was designed to investigate complexity matching during syncopated behavioral coordination. Participants either tapped in (bimanual) syncopation using their two hands, or tapped in (interpersonal) syncopation with a partner, with each participant using one of their hands. The time series of inter-tap intervals (ITI) from each hand were submitted to fractal analysis, as well as to short-term and multi-timescale cross-correlation analyses. The results demonstrated that the fractal scaling of one hand’s ITI was strongly correlated to that of the other hand, and this complexity matching effect was stronger in the bimanual condition than in the interpersonal condition. Moreover, the degree of complexity matching was predicted by the strength of short-term cross-correlation and the stability of the asynchrony between the two tapping series. These results suggest that complexity matching is not specific to the inphase synchronization tasks used in past research, but is a general result of coordination between complex systems. PMID:26840612

  16. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
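
    In generic terms (our notation, a simplification of the derivations described), the penetration rate of a database of N templates is the expected fraction of the database examined before the search terminates:

        P \;=\; \frac{\mathbb{E}[K]}{N} \;=\; \frac{1}{N} \sum_{k=1}^{N} k \, \Pr(K = k),

    where K is the number of hypotheses evaluated. A prioritized hierarchical search lowers P exactly insofar as ranking bins by matching probability concentrates \Pr(K = k) on small k.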

  17. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Tiny wires pose a serious threat to safe UAV flight: they occupy only a few isolated pixels far from the background, whereas most existing stereo matching methods require a support region of a certain size to improve robustness, or assume depth dependence among neighboring pixels to meet the requirements of global or semi-global optimization. Consequently, these methods produce false alarms or even fail when images contain tiny wires. A new stereo matching algorithm based on a double-components model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments showed that the algorithm can effectively compute the depth image of the complex scenes encountered by a patrol UAV, detecting tiny wires as well as large objects. Compared with current mainstream methods, it shows clear advantages.

  18. Simulation and analysis of a model dinoflagellate predator-prey system

    NASA Astrophysics Data System (ADS)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
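
    A minimal version of the kind of kinetics-derived predator-prey model described above can be written as mass-action ODEs and integrated numerically. The sketch below is a generic Lotka-Volterra-type system with a conversion-efficiency parameter standing in for inefficient predation; it illustrates the modeling approach, not the authors' exact equations, and all rate constants are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        def predator_prey(t, y, a, b, eps, m):
            """Mass-action predator-prey kinetics.

            prey'     = a*prey - b*prey*pred         (growth minus predation)
            predator' = eps*b*prey*pred - m*pred     (conversion minus mortality)
            eps < 1 models inefficiency in converting captures to growth.
            """
            prey, pred = y
            return [a * prey - b * prey * pred,
                    eps * b * prey * pred - m * pred]

        sol = solve_ivp(predator_prey, (0, 200), [1.0, 0.1],
                        args=(0.1, 0.02, 0.3, 0.05), dense_output=True)
        print(sol.y[:, -1])   # final prey and predator densities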

  19. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency without involving eigensolutions and inversion of a large matrix.

  20. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    PubMed

    Fournier, Georges; Neukermans, Griet

    2017-06-26

    We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results match closely with results from an exact scattering code that uses complex particle geometry and our model also mimics well abrupt transitions in scattering magnitude. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  1. Photogrammetric Point Clouds Generation in Urban Areas from Integrated Image Matching and Segmentation

    NASA Astrophysics Data System (ADS)

    Ye, L.; Wu, B.

    2017-09-01

    High-resolution imagery is an attractive option for surveying and mapping applications due to the advantages of high quality imaging, short revisit time, and lower cost. Automated, reliable, and dense image matching is essential for photogrammetric 3D data derivation. Such matching, in urban areas, however, is extremely difficult, owing to the complexity of urban textures and severe occlusion problems on the images caused by tall buildings. Aiming to exploit high-resolution imagery for 3D urban modelling applications, this paper presents an integrated image matching and segmentation approach for reliable dense matching of high-resolution imagery in urban areas. The approach is based on the framework of our existing self-adaptive triangulation constrained image matching (SATM), but incorporates three novel aspects to tackle the image matching difficulties in urban areas: 1) occlusion filtering based on image segmentation, 2) segment-adaptive similarity correlation to reduce the similarity ambiguity, 3) improved dense matching propagation to provide more reliable matches in urban areas. Experimental analyses were conducted using aerial images of Vaihingen, Germany, and high-resolution satellite images of Hong Kong. The photogrammetric point clouds were generated, from which digital surface models (DSMs) were derived. They were compared with the corresponding airborne laser scanning data and the DSMs generated from the Semi-Global Matching (SGM) method. The experimental results show that the proposed approach is able to produce dense and reliable matches comparable to SGM in flat areas, while for densely built-up areas, the proposed method performs better than SGM. The proposed method offers an alternative solution for 3D surface reconstruction in urban areas.

  2. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple imagery. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
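
    A minimal example of the detect/describe/match step assessed in this paper, using OpenCV's SIFT with Lowe's ratio test, is sketched below. It is a generic feature-based matching pipeline for a single image pair, not the paper's hierarchical pyramid scheme; the file names are placeholders.

        import cv2

        img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
        img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Brute-force matching with Lowe's ratio test to reject
        # ambiguous correspondences.
        bf = cv2.BFMatcher(cv2.NORM_L2)
        matches = bf.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} putative correspondences")

        # Matched image coordinates, e.g. as input to relative orientation.
        pts1 = [kp1[m.queryIdx].pt for m in good]
        pts2 = [kp2[m.trainIdx].pt for m in good]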

  3. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    PubMed

    Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N

    2013-01-01

    The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  4. Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game

    PubMed Central

    Ribeiro, Haroldo V.; Mendes, Renio S.; Lenzi, Ervin K.; del Castillo-Mussot, Marcelo; Amaral, Luís A. N.

    2013-01-01

    The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player’s advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player’s advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments. PMID:23382876

  5. The applied importance of research on the matching law

    PubMed Central

    Pierce, W. David; Epling, W. Frank

    1995-01-01

    In this essay, we evaluate the applied implications of two articles related to the matching law and published in the Journal of the Experimental Analysis of Behavior, May 1994. Building on Mace's (1994) criteria for increasing the applied relevance of basic research, we evaluate the applied implications of basic research studies. The research by Elsmore and McBride (1994) and Savastano and Fantino (1994) involves an extension of the behavioral model of choice. Elsmore and McBride used rats as subjects, but arranged a multioperant environment that resembles some of the complex contingencies of human behavior. Savastano and Fantino used human subjects and extended the matching law to ratio and interval contingencies. These experiments contribute to a growing body of knowledge on the matching law and its relevance for human behavior. PMID:16795866
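
    For reference, the matching law at issue states that relative response rates match relative reinforcement rates; in its standard generalized form (a textbook formulation, not specific to the two studies reviewed),

        \log \frac{B_1}{B_2} \;=\; a \, \log \frac{R_1}{R_2} + \log b,

    where B_1, B_2 are response rates on two alternatives, R_1, R_2 the obtained reinforcement rates, a the sensitivity, and b the bias; a = b = 1 recovers strict matching.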

  6. Modeling and simulation for fewer-axis grinding of complex surface

    NASA Astrophysics Data System (ADS)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and coordinate system transformation, the grinding mathematical model was established to compute the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find the correct position and posture of the workpiece for grinding. Positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were then analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.

  7. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinement are discussed. Math model computer programs are defined and listed.

  8. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated as a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, for which we design a confidence updating equation based on three rules. With this confidence-based term, the disparity assignment can be heuristically selected within the disparity search range during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, a representative variant of SGM that performs excellently on aerial images.
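
    For context, the baseline SGM aggregation that the two GCP-based data terms augment sums a pixelwise matching cost C(p, d) along multiple 1-D paths using Hirschmüller's standard recursion (quoted for orientation; the paper's added terms are not shown):

        L_r(p, d) \;=\; C(p, d) + \min\!\big( L_r(p - r, d),\; L_r(p - r, d \pm 1) + P_1,\; \min_i L_r(p - r, i) + P_2 \big) - \min_k L_r(p - r, k),

    where r is a path direction and the penalties P_1 < P_2 discourage small and large disparity changes, respectively.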

  9. Colored petri net modeling of small interfering RNA-mediated messenger RNA degradation.

    PubMed

    Nickaeen, Niloofar; Moein, Shiva; Heidary, Zarifeh; Ghaisari, Jafar

    2016-01-01

    Mathematical modeling of biological systems is an attractive way of studying complex biological systems and their behaviors. Petri Nets, due to their ability to model systems with various levels of qualitative information, have been widely used to model biological systems for which enough quantitative data may not be available. These nets have been used to answer questions regarding the dynamics of different cell behaviors, including the translation process. In one stage of the translation process, the RNA sequence may be degraded: small non-coding RNA molecules known as small interfering RNA (siRNA) match the target RNA sequence, and as a result of this matching, the target RNA sequence is destroyed. In this context, the process of matching and destruction is modeled using Colored Petri Nets (CPNs), which allow tokens to carry a value or type. This makes CPNs a suitable tool for modeling string structures in which each element of the string has a different type. Using CPNs, long RNA and siRNA strings are modeled with a finite set of colors, and a CPN model of the matching between RNA and siRNA strings is constructed and simulated in the CPN Tools environment. In previous studies, a network of stoichiometric equations was modeled; in this study, we model the mechanism behind the silencing process itself. Modeling this kind of mechanism provides a tool to examine the effects of different factors, such as mutation or drugs, on the process.

  10. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability in addressing the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are the classical models, including Poisson, exponential, and power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks, and vehicular network design, are also discussed in the paper.

  11. Interhemispheric Resource Sharing: Decreasing Benefits with Increasing Processing Efficiency

    ERIC Educational Resources Information Center

    Maertens, M.; Pollmann, S.

    2005-01-01

    Visual matches are sometimes faster when stimuli are presented across visual hemifields, compared to within-field matching. Using a cued geometric figure matching task, we investigated the influence of computational complexity vs. processing efficiency on this bilateral distribution advantage (BDA). Computational complexity was manipulated by…

  12. The effects of numerical-model complexity and observation type on estimated porosity values

    USGS Publications Warehouse

    Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-01-01

    The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are very generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.

  13. A mathematical model of medial consonant identification by cochlear implant users.

    PubMed

    Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi

    2011-04-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.

  14. A mathematical model of medial consonant identification by cochlear implant users

    PubMed Central

    Svirsky, Mario A.; Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi

    2011-01-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects’ ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects’ consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech. PMID:21476674

  15. Three Dimensional Object Recognition Using a Complex Autoregressive Model

    DTIC Science & Technology

    1993-12-01

    Abstract not recovered; the record preserves only table-of-contents fragments naming the report's methods and results: a template matching algorithm, k-nearest-neighbor (KNN) techniques, a hidden Markov model (HMM), single-look and multiple-look 1-NN testing, and KNN and HMM test results.

  16. The Butterfly Model of Careers: Illustrating How Planning and Chance Can Be Integrated in the Careers of Secondary School Students

    ERIC Educational Resources Information Center

    Borg, Tony; Bright, Jim; Pryor, Robert

    2006-01-01

    Simple matching models of decision making are no longer sufficient as a basis for career counselling and education. The challenge for contemporary careers advisers is how to communicate some of the complexities of modern career development to their students; in particular, the apparently contradictory relationship between the need for planning and…

  17. Use of Hyperspectral Imagery to Assess Cryptic Color Matching in Sargassum Associated Crabs.

    PubMed

    Russell, Brandon J; Dierssen, Heidi M

    2015-01-01

    Mats of the pelagic macroalgae Sargassum represent a complex environment for the study of marine camouflage at the air-sea interface. Endemic organisms have convergently evolved similar colors and patterns, but quantitative assessments of camouflage strategies are lacking. Here, spectral camouflage of two crab species (Portunus sayi and Planes minutus) was assessed using hyperspectral imagery (HSI). Crabs matched Sargassum reflectance across blue and green wavelengths (400-550 nm) and diverged at longer wavelengths. Maximum discrepancy was observed in the far-red (i.e., 675 nm) where Chlorophyll a absorption occurred in Sargassum and not the crabs. In a quantum catch color model, both crabs showed effective color matching against blue/green sensitive dichromat fish, but were still discernible to tetrachromat bird predators that have visual sensitivity to far red wavelengths. The two species showed opposing trends in background matching with relation to body size. Variation in model parameters revealed that discrimination of crab and background was impacted by distance from the predator, and the ratio of cone cell types for bird predators. This is one of the first studies to detail background color matching in this unique, challenging ecosystem at the air-sea interface.

  18. Use of Hyperspectral Imagery to Assess Cryptic Color Matching in Sargassum Associated Crabs

    PubMed Central

    2015-01-01

    Mats of the pelagic macroalgae Sargassum represent a complex environment for the study of marine camouflage at the air-sea interface. Endemic organisms have convergently evolved similar colors and patterns, but quantitative assessments of camouflage strategies are lacking. Here, spectral camouflage of two crab species (Portunus sayi and Planes minutus) was assessed using hyperspectral imagery (HSI). Crabs matched Sargassum reflectance across blue and green wavelengths (400–550 nm) and diverged at longer wavelengths. Maximum discrepancy was observed in the far-red (i.e., 675 nm) where Chlorophyll a absorption occurred in Sargassum and not the crabs. In a quantum catch color model, both crabs showed effective color matching against blue/green sensitive dichromat fish, but were still discernible to tetrachromat bird predators that have visual sensitivity to far red wavelengths. The two species showed opposing trends in background matching with relation to body size. Variation in model parameters revealed that discrimination of crab and background was impacted by distance from the predator, and the ratio of cone cell types for bird predators. This is one of the first studies to detail background color matching in this unique, challenging ecosystem at the air-sea interface. PMID:26352667

  19. Exploration of a 'double-jeopardy' hypothesis within working memory profiles for children with specific language impairment.

    PubMed

    Briscoe, J; Rankin, P M

    2009-01-01

    Children with specific language impairment (SLI) often experience difficulties in the recall and repetition of verbal information. Archibald and Gathercole (2006) suggested that children with SLI are vulnerable across two separate components of a tripartite model of working memory (Baddeley and Hitch 1974). However, the hierarchical relationship between the 'slave' systems (temporary storage) and the central executive components poses a particular challenge for interpreting working memory profiles within a tripartite model. This study aimed to examine whether a 'double-jeopardy' assumption is compatible with a hierarchical relationship between the phonological loop and central executive components of the working memory model in children with SLI. If a strong double-jeopardy assumption is valid for children with SLI, it was predicted that raw scores on working memory tests thought to tap the phonological loop and central executive components of tripartite working memory would be lower than the scores of children matched for chronological age and those of children matched for language level, reflecting independent sources of constraint. In contrast, a hierarchical relationship would imply that a weakness in a slave component of working memory (the phonological loop) would also constrain performance on tests tapping a super-ordinate component (central executive). This locus of constraint would predict that scores of children with SLI on working memory tests that tap the central executive would be weaker relative to the scores of chronological age-matched controls only. Seven subtests of the Working Memory Test Battery for Children (Digit recall, Word recall, Non-word recall, Word matching, Listening recall, Backwards digit recall and Block recall; Pickering and Gathercole 2001) were administered to 14 children with SLI recruited via language resource bases and specialist schools, as well as to two control groups matched on chronological age and vocabulary level, respectively. Mean group differences were ascertained by directly comparing raw scores on memory tests linked to different components of the tripartite model using a series of multivariate analyses. The majority of working memory scores of the SLI group were depressed relative to chronological age-matched controls, with the exception of spatial recall (block tapping) and word (order) matching tasks. Marked deficits in serial recall of words and digits were evident, with the SLI group scoring more poorly than the language-ability matched control group on these measures. Impairments of the SLI group on phonological loop tasks were robust, even when covariance with executive working memory scores was accounted for. There was no robust effect of group on complex working memory (central executive) tasks, despite a slight association between listening recall and phonological loop measures. A predominant feature of the working memory profile of SLI was a marked deficit on phonological loop tasks. Although scores on complex working memory tasks were also depressed, there was little evidence for a strong interpretation of double-jeopardy within working memory profiles for these children; rather, these findings were consistent with an interpretation of a constraint on the phonological loop for children with SLI that operated at all levels of a hierarchical tripartite model of working memory (Baddeley and Hitch 1974).
These findings suggest that low scores on complex working memory tasks alone do not unequivocally indicate an independent deficit in central executive (domain-general) resources of working memory and should therefore be treated cautiously in a clinical context.

  20. Are Current Physical Match Performance Metrics in Elite Soccer Fit for Purpose or is the Adoption of an Integrated Approach Needed?

    PubMed

    Bradley, Paul S; Ade, Jack D

    2018-01-18

    Time-motion analysis is a valuable data-collection technique used to quantify the physical match performance of elite soccer players. For over 40 years researchers have adopted a 'traditional' approach when evaluating match demands by simply reporting the distance covered or time spent along a motion continuum of walking through to sprinting. This methodology quantifies physical metrics in isolation without integrating other factors, which ultimately leads to a one-dimensional insight into match performance. Thus, this commentary proposes a novel 'integrated' approach that focuses on a sensitive physical metric such as high-intensity running but contextualizes it in relation to key tactical activities for each position and collectively for the team. In the example presented, the 'integrated' model clearly unveils the unique high-intensity profile that exists due to distinct tactical roles, rather than the one-dimensional 'blind' distances produced by 'traditional' models. Intuitively, this innovative concept may aid the coaches' understanding of physical performance in relation to the tactical roles and instructions given to the players. Additionally, it will enable practitioners to more effectively translate match metrics into training and testing protocols. This innovative model may well aid advances in other team sports that incorporate similar intermittent movements with tactical purpose. Evidence of the merits and application of this new concept is needed before the scientific community accepts this model, as it may well add complexity to an area that conceivably needs simplicity.

  1. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interaction with model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD) based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the model direction. This model direction will be matched with the eigenvector derived from mode superposition analysis. The same technique will be implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to get the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied by these coarse grained models.
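
    The comparison described above can be illustrated on a toy problem. The sketch below is an assumption-laden stand-in (not the authors' code): it builds a small fixed-fixed spring-mass chain, simulates free vibration dominated by the lowest mode, and checks that the leading SVD/PCA direction of the displacement snapshots aligns with the eigenvector from modal analysis.

```python
import numpy as np

# Toy stand-in for the spring-mass check: compare the leading PCA/SVD
# direction of displacement snapshots with the first mode shape.
n, ks, mass = 8, 1.0, 1.0
K = 2 * ks * np.eye(n) - ks * np.eye(n, k=1) - ks * np.eye(n, k=-1)  # fixed-fixed chain
w2, modes = np.linalg.eigh(K / mass)            # modal analysis: K v = w^2 m v
freqs = np.sqrt(w2)

# Free vibration dominated by the lowest mode, plus a weak second mode and noise.
t = np.linspace(0.0, 200.0, 4000)
q = np.outer(modes[:, 0], np.cos(freqs[0] * t))
q += 0.05 * np.outer(modes[:, 1], np.cos(freqs[1] * t))
q += 0.01 * np.random.default_rng(0).standard_normal(q.shape)

# PCA via SVD of the centered snapshot matrix (rows = DOFs, columns = times).
U, s, _ = np.linalg.svd(q - q.mean(axis=1, keepdims=True))
print(f"|cos| between PCA direction and first mode: {abs(U[:, 0] @ modes[:, 0]):.4f}")
```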

  2. The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas

    2017-10-01

    Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, the Hubble Space Telescope (HST), and the James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from different observing instruments, we present detailed comparisons between the temperature-pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval is recovering a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure. When we use a higher resolution, more data points, and a parametrized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.
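
    As a rough illustration of the averaging step, the sketch below (illustrative arrays only; not BART output or the actual 3D simulation) forms the arithmetic average of a toy 3D temperature cube over latitude and longitude at each pressure level and compares it with a stand-in retrieved 1D profile.

```python
import numpy as np

# Illustrative only: average a toy 3D temperature cube T(p, lat, lon) over
# the horizontal directions to get a 1D profile comparable to a retrieval.
rng = np.random.default_rng(1)
npress, nlat, nlon = 50, 32, 64
pressure = np.logspace(-5, 2, npress)                     # bar
T3d = 1500.0 + 100.0 * np.log10(pressure)[:, None, None]  # toy vertical structure
T3d += 50.0 * rng.standard_normal((npress, nlat, nlon))   # lat/lon variability

T_avg = T3d.mean(axis=(1, 2))         # arithmetic average at each pressure level

T_retrieved = 1500.0 + 98.0 * np.log10(pressure)          # stand-in 1D retrieval
rms = np.sqrt(np.mean((T_avg - T_retrieved) ** 2))
print(f"RMS(T_avg - T_retrieved) = {rms:.1f} K")
```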

  3. 78 FR 33866 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... Amending NYSE MKT Rule 980NY, To Remove Provisions Governing How the Complex Matching Engine Handles Electronic Complex Orders That Contain a Stock Leg May 30, 2013. Pursuant to Section 19(b)(1)\\1\\ of the... governing how the Complex Matching Engine (``CME'') handles Electronic Complex Orders that contain a stock...

  4. Mapping a Nursing Terminology Subset to openEHR Archetypes. A Case Study of the International Classification for Nursing Practice.

    PubMed

    Nogueira, J R M; Cook, T W; Cavalini, L T

    2015-01-01

    Healthcare information technologies have the potential to transform nursing care. However, healthcare information systems based on conventional software architecture are not semantically interoperable and have high maintenance costs. Health informatics standards, such as controlled terminologies, have been proposed to improve healthcare information systems, but their implementation in conventional software has not been enough to overcome the current challenge. Such obstacles could be removed by adopting a multilevel model-driven approach, such as the openEHR specifications, in nursing information systems. The aim was to create an openEHR archetype model for the Functional Status concepts as published in the Nursing Outcome Indicators Catalog of the International Classification for Nursing Practice (NOIC-ICNP). Four methodological steps were followed: 1) extraction of terms from the NOIC-ICNP terminology; 2) identification of previously published openEHR archetypes; 3) assessment of the adequacy of those openEHR archetypes to represent the terms; and 4) development of new openEHR archetypes when required. The "Barthel Index" archetype was retrieved and mapped to the 68 NOIC-ICNP Functional Status terms. There were 19 exact matches between a term and the corresponding archetype node and 23 archetype nodes that matched one or more NOIC-ICNP terms. No matches were found between the archetype and 14 of the NOIC-ICNP terms, and nine archetype nodes did not match any of the NOIC-ICNP terms. The openEHR model was sufficient to represent the semantics of the Functional Status concept according to the NOIC-ICNP, but there were differences in data granularity between the terminology and the archetype, thus producing a significantly complex mapping, which could be difficult to implement in real healthcare information systems. However, despite the technological complexity, the present study demonstrated the feasibility of mapping nursing terminologies to openEHR archetypes, which emphasizes the importance of adopting the multilevel model-driven approach for the achievement of semantic interoperability between healthcare information systems.

  5. A study on axial and torsional resonant mode matching for a mechanical system with complex nonlinear geometries

    NASA Astrophysics Data System (ADS)

    Watson, Brett; Yeo, Leslie; Friend, James

    2010-06-01

    Making use of mechanical resonance has many benefits for the design of microscale devices. A key to successfully incorporating this phenomenon in the design of a device is to understand how the resonant frequencies of interest are affected by changes to the geometric parameters of the design. For simple geometric shapes, this is quite easy, but for complex nonlinear designs, it becomes significantly more complex. In this paper, two novel modeling techniques are demonstrated to extract the axial and torsional resonant frequencies of a complex nonlinear geometry. The first decomposes the complex geometry into easy to model components, while the second uses scaling techniques combined with the finite element method. Both models overcome problems associated with using current analytical methods as design tools, and enable a full investigation of how changes in the geometric parameters affect the resonant frequencies of interest. The benefit of such models is then demonstrated through their use in the design of a prototype piezoelectric ultrasonic resonant micromotor which has improved performance characteristics over previous prototypes.

  6. Lattice-Matched Epitaxial Graphene Grown on Boron Nitride.

    PubMed

    Davies, Andrew; Albar, Juan D; Summerfield, Alex; Thomas, James C; Cheng, Tin S; Korolkov, Vladimir V; Stapleton, Emily; Wrigley, James; Goodey, Nathan L; Mellor, Christopher J; Khlobystov, Andrei N; Watanabe, Kenji; Taniguchi, Takashi; Foxon, C Thomas; Eaves, Laurence; Novikov, Sergei V; Beton, Peter H

    2018-01-10

    Lattice-matched graphene on hexagonal boron nitride is expected to lead to the formation of a band gap but requires the formation of highly strained material and has not hitherto been realized. We demonstrate that aligned, lattice-matched graphene can be grown by molecular beam epitaxy using substrate temperatures in the range 1600-1710 °C and coexists with a topologically modified moiré pattern with regions of strained graphene which have giant moiré periods up to ∼80 nm. Raman spectra reveal narrow red-shifted peaks due to isotropic strain, while the giant moiré patterns result in complex splitting of Raman peaks due to strain variations across the moiré unit cell. The lattice-matched graphene has a lower conductance than both the Frenkel-Kontorova-type domain walls and also the topological defects where they terminate. We relate these results to theoretical models of band gap formation in graphene/boron nitride heterostructures.

  7. A Single Mechanism Can Account for Human Perception of Depth in Mixed Correlation Random Dot Stereograms

    PubMed Central

    Cumming, Bruce G.

    2016-01-01

    In order to extract retinal disparity from a visual scene, the brain must match corresponding points in the left and right retinae. This computationally demanding task is known as the stereo correspondence problem. The initial stage of the solution to the correspondence problem is generally thought to consist of a correlation-based computation. However, recent work by Doi et al. suggests that human observers can see depth in a class of stimuli where the mean binocular correlation is 0 (half-matched random dot stereograms). Half-matched random dot stereograms are made up of an equal number of correlated and anticorrelated dots, and the binocular energy model—a well-known model of V1 binocular complex cells—fails to signal disparity here. This has led to the proposition that a second, match-based computation must be extracting disparity in these stimuli. Here we show that a straightforward modification to the binocular energy model—adding a point output nonlinearity—is by itself sufficient to produce cells that are disparity-tuned to half-matched random dot stereograms. We then show that a simple decision model using this single mechanism can reproduce psychometric functions generated by human observers, including reduced performance for large disparities and rapidly updating dot patterns. The model makes predictions about how performance should change with dot size in half-matched stereograms and temporal alternation in correlation, which we test in human observers. We conclude that a single correlation-based computation, based directly on already-known properties of V1 neurons, can account for the literature on mixed correlation random dot stereograms. PMID:27196696
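
    A minimal numerical sketch of the mechanism, under simplifying assumptions (1D images, a single quadrature pair of Gabor receptive fields, illustrative parameters), is given below: for half-matched random-dot stimuli the trial-averaged energy response is nearly flat across preferred disparity, while squaring the output (a point output nonlinearity) restores disparity tuning.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
sigma, f = 0.5, 2.0

def gabor(phase, shift=0.0):
    """1D Gabor receptive field, optionally shifted (right-eye RF)."""
    return np.exp(-(x - shift) ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * (x - shift) + phase)

def energy(L, R, pref_disp):
    """Binocular energy: quadrature pair of simple cells, squared and summed."""
    return sum((gabor(ph) @ L + gabor(ph, pref_disp) @ R) ** 2
               for ph in (0.0, np.pi / 2))

disps = np.linspace(-1.0, 1.0, 21)
stim_disp = 0.4
plain = np.zeros_like(disps)
squared = np.zeros_like(disps)
n_trials = 1000
for _ in range(n_trials):
    dots = rng.choice([-1.0, 1.0], size=x.size)
    R = np.roll(dots, int(round(stim_disp / dx)))
    flip = rng.random(x.size) < 0.5            # half-matched: 50% anticorrelated
    R = np.where(flip, -R, R)
    for i, d in enumerate(disps):
        e = energy(dots, R, d)
        plain[i] += e / n_trials
        squared[i] += e ** 2 / n_trials        # point output nonlinearity

for name, curve in (("energy", plain), ("energy^2", squared)):
    print(f"{name:9s} tuning modulation: {(curve.max() - curve.min()) / curve.mean():.3f}")
```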

  8. Exploration of funding models to support hybridisation of Australian primary health care organisations.

    PubMed

    Reddy, Sandeep

    2017-09-01

    Primary Health Care (PHC) funding in Australia is complex and fragmented. The focus of PHC funding in Australia has been on volume rather than comprehensive primary care and continuous quality improvement. As PHC in Australia is increasingly delivered by hybrid style organisations, an appropriate funding model that matches this set-up while addressing current issues with PHC funding is required. This article discusses and proposes an appropriate funding model for hybrid PHC organisations.

  9. Deep Learning for Low-Textured Image Matching

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.

    2018-05-01

    Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most of the common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using the nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
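
    The matching stage can be pictured with the toy sketch below; the descriptor codes are random stand-ins for encoder outputs, and a nearest-neighbour search with a Lowe-style ratio test stands in for the paper's codebook search and modified voting algorithm.

```python
import numpy as np

# Toy matching stage: codes_a/codes_b are stand-ins for descriptor codes
# produced by some encoder; 300 rows of codes_b are noisy copies of codes_a.
rng = np.random.default_rng(3)
codes_a = rng.standard_normal((500, 64))                  # patches in image A
codes_b = np.vstack([codes_a[:300] + 0.1 * rng.standard_normal((300, 64)),
                     rng.standard_normal((200, 64))])     # 300 true pairs + clutter

def match(ca, cb, ratio=0.8):
    pairs = []
    for i, c in enumerate(ca):
        d = np.linalg.norm(cb - c, axis=1)
        j1, j2 = np.argsort(d)[:2]
        if d[j1] < ratio * d[j2]:          # keep only unambiguous matches
            pairs.append((i, j1))
    return pairs

pairs = match(codes_a, codes_b)
correct = sum(1 for i, j in pairs if i == j and i < 300)
print(f"{len(pairs)} matches, {correct} correct")
```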

  10. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
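
    The error-reduction loop can be sketched generically (this is not the CVBEM itself, just the adaptive node-insertion idea applied to a 1D boundary function): measure the relative error against the known boundary values and insert a node where the error is largest.

```python
import numpy as np

# Generic adaptive refinement sketch (not the CVBEM): insert nodes where the
# relative error against the known boundary condition values is largest.
g = lambda s: np.sin(2 * np.pi * s) + 0.3 * np.sin(6 * np.pi * s)   # known BC
nodes = list(np.linspace(0.0, 1.0, 5))
s = np.linspace(0.0, 1.0, 1001)
for _ in range(200):
    approx = np.interp(s, nodes, g(np.array(nodes)))     # piecewise-linear model
    err = np.abs(approx - g(s)) / np.abs(g(s)).max()     # relative error profile
    if err.max() < 0.01:
        break
    nodes = sorted(nodes + [s[np.argmax(err)]])          # refine where error peaks
print(f"{len(nodes)} nodes, max relative error {err.max():.4f}")
```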

  11. Complex organic matter in space: about the chemical composition of carriers of the Unidentified Infrared Bands (UIBs) and protoplanetary emission spectra recorded from certain astrophysical objects.

    PubMed

    Cataldo, Franco; Keheyan, Yeghis; Heymann, Dieter

    2004-02-01

    In this communication, we present the basic concept that pure PAHs (Polycyclic Aromatic Hydrocarbons) can be considered only the ideal carriers of the UIBs (Unidentified Infrared Bands), the emission spectra coming from a large variety of astronomical objects. Instead, we have proposed that the carriers of UIBs and of protoplanetary nebulae (PPNe) emission spectra are much more complex molecular mixtures that also possess complex chemical structures comparable to certain petroleum fractions obtained from petroleum refining processes. The demonstration of our proposal is based on the comparison between the emission spectra recorded from the protoplanetary nebula (PPNe) IRAS 22272+5435 and the infrared absorption spectra of certain 'heavy' petroleum fractions. It is shown that the best match with the reference spectrum is achieved by highly aromatic petroleum fractions. It is also shown that the selected petroleum fractions used in the present study are able to match the band pattern of anthracite coal. Coal has been proposed previously as a model for the PPNe and UIBs but presents some drawbacks which could be overcome by adopting the petroleum fractions as a model for PPNe and UIBs in place of coal. A brief discussion on the formation of the petroleum-like fractions in PPNe objects is included.

  12. Mutual-Choice Placement--A Humanistic Approach to Student Teaching Assignments.

    ERIC Educational Resources Information Center

    Easterly, Jean L.

    Student teaching may be defined as a complex intermingling of roles and institutions. Few, however, would dispute that the core of student teaching is that unique relationship which occurs between two persons--the student teacher and the cooperating teacher. This relationship may be explored by examining two alternative models for matching student…

  13. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
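
    The division of labour can be caricatured in a few lines: below, a cheap proxy supplies finite-difference Jacobians while the "expensive" model only evaluates residuals and trial upgrades. Both models here are toy analytic functions, so this is a sketch of the scheme rather than the PEST implementation.

```python
import numpy as np

# Sketch of proxy-assisted gradient calibration: the proxy populates the
# Jacobian, the expensive model evaluates residuals and trial upgrades.
def expensive_model(p):                      # stand-in for the slow simulator
    return np.array([p[0] ** 2 + np.sin(p[1]), p[0] * p[1], p[1] ** 3])

def proxy_model(p):                          # cheap analytical surrogate
    return np.array([p[0] ** 2 + p[1], p[0] * p[1], 3.0 * p[1] - 2.0])

obs = expensive_model(np.array([1.3, 0.7]))  # synthetic "field" observations
p = np.array([0.5, 0.5])
for _ in range(25):
    r = expensive_model(p) - obs
    # Jacobian columns from the proxy by finite differences (the cheap part)
    J = np.column_stack([(proxy_model(p + h) - proxy_model(p)) / 1e-6
                         for h in 1e-6 * np.eye(2)])
    step = np.linalg.lstsq(J, -r, rcond=None)[0]
    lam = 1.0                                # test upgrades with the real model
    while lam > 1e-4 and np.linalg.norm(expensive_model(p + lam * step) - obs) > np.linalg.norm(r):
        lam *= 0.5
    p = p + lam * step
print("calibrated parameters:", np.round(p, 3))
```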

  14. Study of simple land battles using agent-based modeling: Strategy and emergent phenomena

    NASA Astrophysics Data System (ADS)

    Westley, Alexandra; de Meglio, Nicholas; Hager, Rebecca; Mok, Jorge Wu; Shanahan, Linda; Sen, Surajit

    2017-04-01

    In this paper, we expand upon our recent studies of an agent-based model of a battle between an intelligent army and an insurgent army to explore the role of modifying strategy according to the state of the battle (adaptive strategy) on battle outcomes. This model leads to surprising complexity and rich possibilities in battle outcomes, especially in battles between two well-matched sides. We contend that the use of adaptive strategies may be effective in winning battles.

  15. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and yield accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment, as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored.

  16. Structure and dynamics of complex liquid water: Molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    S, Indrajith V.; Natesan, Baskaran

    2015-06-01

    We have carried out detailed structure and dynamical studies of complex liquid water using molecular dynamics simulations. Three different model potentials, namely TIP3P, TIP4P and SPC-E, have been used in the simulations in order to arrive at the best possible potential function that could reproduce the structure of experimental bulk water. All the simulations were performed in the NVE microcanonical ensemble using LAMMPS. The radial distribution functions gOO, gOH and gHH and the self-diffusion coefficient Ds were calculated for all three models. We conclude from our results that the structure and dynamical parameters obtained for the SPC-E model matched well with the experimental values, suggesting that among the models studied here, the SPC-E model gives the best structure and dynamics of bulk water.
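
    The self-diffusion coefficient referred to here is conventionally obtained from the slope of the mean-squared displacement, D = MSD/(6t) in three dimensions. The sketch below applies that recipe to synthetic Brownian trajectories (not LAMMPS output) just to show the bookkeeping.

```python
import numpy as np

# Estimate D from the mean-squared displacement of toy Brownian trajectories.
rng = np.random.default_rng(4)
n_atoms, n_steps, dt = 200, 1000, 2e-15                  # 2 fs timestep
D_true = 2.3e-9                                          # m^2/s, water-like
steps = rng.standard_normal((n_steps, n_atoms, 3)) * np.sqrt(2.0 * D_true * dt)
traj = np.cumsum(steps, axis=0)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(traj ** 2, axis=2), axis=1)         # average over atoms
slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]  # linear regime
print(f"D = {slope / 6.0:.2e} m^2/s (input {D_true:.1e})")      # MSD = 6 D t
```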

  17. Thermo-hydraulics of the Peruvian accretionary complex at 12°S

    USGS Publications Warehouse

    Kukowski, Nina; Pecher, Ingo

    1999-01-01

    The models were constrained by the thermal gradient obtained from the depth of bottom-simulating reflectors (BSRs) at the lower slope and some conventional measurements. We found that significant frictional heating is required to explain the observed strong landward increase of heat flux. This is consistent with results from sandbox modelling which predict strong basal friction at this margin. A significantly higher heat source is needed to match the observed thermal gradient in the southern line.

  18. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  19. The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas, E-mail: jasmina@nyu.edu

    Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, the Hubble Space Telescope (HST), and the James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from different observing instruments, we present detailed comparisons between the temperature–pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval is recovering a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure. When we use a higher resolution, more data points, and a parametrized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.

  20. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

    Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions, and as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict the image formation in the hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used sets of large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin mask models vs. the topographic OPEM solutions. We present various examples representative of the scanner image matching for patterns representative of the current generation of IC designs.

  1. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
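
    The tuning loop can be miniaturized to a two-element Windkessel with two parameters; the sketch below (toy waveform, units, and targets; nothing patient-specific) minimizes a quadratic misfit to clinical-style targets with a Nelder-Mead simplex, the same optimizer family as the maximum a posteriori step described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def inflow(t):                              # toy pulsatile inflow, ~70 bpm
    return 500.0 * np.maximum(np.sin(2.0 * np.pi * t / 0.857), 0.0)

def simulate(R, C):
    dp = lambda t, p: (inflow(t) - p / R) / C          # 2-element Windkessel
    sol = solve_ivp(dp, (0.0, 10.0), [80.0], max_step=0.005)
    p = sol.y[0][sol.t > 8.0]                          # keep settled cycles
    return np.array([p.mean(), p.max() - p.min()])

targets = np.array([93.0, 35.0])            # mean pressure, pulse pressure
def neg_log_post(theta):                    # flat priors -> quadratic misfit
    return np.sum(((simulate(*np.exp(theta)) - targets) / 2.0) ** 2)

res = minimize(neg_log_post, x0=np.log([0.9, 1.5]), method="Nelder-Mead")
print("tuned R, C:", np.round(np.exp(res.x), 3))
```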

  2. An empirical model for estimating annual consumption by freshwater fish populations

    USGS Publications Warehouse

    Liao, H.; Pierce, C.L.; Larscheid, J.G.

    2005-01-01

    Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model, similar in complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962·log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates.
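
    The reported biomass regression is directly usable as a one-line predictor; the helper below simply applies the published coefficients. Note the near-unit slope, so predicted consumption scales almost linearly with biomass, at roughly 3.5 times the mean annual biomass.

```python
import math

def annual_consumption(mean_annual_biomass):
    """Predict annual population consumption from mean annual biomass
    (same mass units for both), per the published regression."""
    return 10.0 ** (0.5442 + 0.9962 * math.log10(mean_annual_biomass))

print(round(annual_consumption(1000.0), 1))   # a population averaging 1000 kg
```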

  3. Reconstruction and simplification of urban scene models based on oblique images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of the urban scene increase the difficulty of building city models from oblique images, but many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on Self-Adaptive Patches designed for the urban scene. The basic idea of match propagation based on Self-Adaptive Patches is to build patches centred on seed points that are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the extent of the patch becomes bigger, while when the surface is very rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by Graph Cuts is a 2-manifold surface satisfying the half-edge data structure. This is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh by an edge-collapse algorithm which can preserve and accentuate the features of buildings.

  4. A multilayer perceptron solution to the match phase problem in rule-based artificial intelligence systems

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Passino, Kevin M.; Antsaklis, Panos J.

    1992-01-01

    In rule-based AI planning, expert, and learning systems, it is often the case that the left-hand-sides of the rules must be repeatedly compared to the contents of some 'working memory'. The traditional approach to solve such a 'match phase problem' for production systems is to use the Rete Match Algorithm. Here, a new technique using a multilayer perceptron, a particular artificial neural network model, is presented to solve the match phase problem for rule-based AI systems. A syntax for premise formulas (i.e., the left-hand-sides of the rules) is defined, and working memory is specified. From this, it is shown how to construct a multilayer perceptron that finds all of the rules which can be executed for the current situation in working memory. The complexity of the constructed multilayer perceptron is derived in terms of the maximum number of nodes and the required number of layers. A method for reducing the number of layers to at most three is also presented.
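
    The flavour of the construction can be shown with a single layer of hard-threshold units, one per rule premise (the paper's full construction and layer-reduction argument are more involved): weights select positive and negated literals, and the bias requires all of them to hold.

```python
import numpy as np

# One hard-threshold unit per rule premise: weights pick out positive and
# negated working-memory facts, the bias demands that all literals hold.
rules = [
    {"pos": [0, 2], "neg": [3]},    # fire if wm[0] and wm[2] and not wm[3]
    {"pos": [1], "neg": []},        # fire if wm[1]
    {"pos": [2, 3], "neg": [0]},    # fire if wm[2] and wm[3] and not wm[0]
]
n_facts = 4
W = np.zeros((len(rules), n_facts))
b = np.zeros(len(rules))
for i, r in enumerate(rules):
    W[i, r["pos"]] = 1.0
    W[i, r["neg"]] = -1.0
    b[i] = 0.5 - len(r["pos"])      # on only when every literal is satisfied

def match_phase(working_memory):
    wm = np.asarray(working_memory, dtype=float)
    return np.flatnonzero(W @ wm + b > 0.0)   # indices of executable rules

print(match_phase([1, 0, 1, 0]))   # -> [0]
print(match_phase([1, 1, 1, 1]))   # -> [1]
```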

  5. Computing quantum hashing in the model of quantum branching programs

    NASA Astrophysics Data System (ADS)

    Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander

    2018-02-01

    We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states. We require that this function is preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBP): the number of qubits and the number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that such a QBP construction is optimal. That is, we prove lower bounds that match the constructed quantum hash function computation.

  6. Hydrous partial melting in the sheeted dike complex at fast spreading ridges: experimental and natural observations

    NASA Astrophysics Data System (ADS)

    France, Lydéric; Koepke, Juergen; Ildefonse, Benoit; Cichy, Sarah B.; Deschamps, Fabien

    2010-11-01

    In ophiolites and in present-day oceanic crust formed at fast spreading ridges, oceanic plagiogranites are commonly observed at, or close to the base of the sheeted dike complex. They can be produced either by differentiation of mafic melts, or by hydrous partial melting of the hydrothermally altered sheeted dikes. In addition, the hydrothermally altered base of the sheeted dike complex, which is often infiltrated by plagiogranitic veins, is usually recrystallized into granoblastic dikes that are commonly interpreted as a result of prograde granulitic metamorphism. To test the anatectic origin of oceanic plagiogranites, we performed melting experiments on a natural hydrothermally altered dike, under conditions that match those prevailing at the base of the sheeted dike complex. All generated melts are water saturated, transitional between tholeiitic and calc-alkaline, and match the compositions of oceanic plagiogranites observed close to the base of the sheeted dike complex. Newly crystallized clinopyroxene and plagioclase have compositions that are characteristic of the same minerals in granoblastic dikes. Published silicic melt compositions obtained in classical MORB fractionation experiments also broadly match the compositions of oceanic plagiogranites; however, the compositions of the coexisting experimental minerals significantly deviate from those of the granoblastic dikes. Our results demonstrate that hydrous partial melting is a likely common process in the root zone of the sheeted dike complex, starting at temperatures exceeding 850°C. The newly formed melt can either crystallize to form oceanic plagiogranites or may be recycled within the melt lens resulting in hybridized and contaminated MORB melts. It represents the main MORB crustal contamination process. The residue after the partial melting event is represented by the granoblastic dikes. Our results support a model with a dynamic melt lens that has the potential to trigger hydrous partial melting reactions in the previously hydrothermally altered sheeted dikes. A new thermometer using the Al content of clinopyroxene is also elaborated.

  7. Diffusion Coefficients of Endogenous Cytosolic Proteins from Rabbit Skinned Muscle Fibers

    PubMed Central

    Carlson, Brian E.; Vigoreaux, Jim O.; Maughan, David W.

    2014-01-01

    Efflux time courses of endogenous cytosolic proteins were obtained from rabbit psoas muscle fibers skinned in oil and transferred to physiological salt solution. Proteins were separated by gel electrophoresis and compared to load-matched standards for quantitative analysis. A radial diffusion model incorporating the dissociation and dissipation of supramolecular complexes accounts for an initial lag and subsequent efflux of glycolytic and glycogenolytic enzymes. The model includes terms representing protein crowding, myofilament lattice hindrance, and binding to the cytomatrix. Optimization algorithms returned estimates of the apparent diffusion coefficients, D(r,t), that were very low at the onset of diffusion (∼10⁻¹⁰ cm² s⁻¹) but increased with time as cytosolic protein density, which was initially high, decreased. D(r,t) at later times ranged from 2.11 × 10⁻⁷ cm² s⁻¹ (parvalbumin) to 0.20 × 10⁻⁷ cm² s⁻¹ (phosphofructose kinase), values that are 3.6- to 12.3-fold lower than those predicted in bulk water. The low initial values are consistent with the presence of complexes in situ; the higher later values are consistent with molecular sieving and transient binding of dissociated proteins. Channeling of metabolic intermediates via enzyme complexes may enhance production of adenosine triphosphate at rates beyond that possible with randomly and/or sparsely distributed enzymes, thereby matching supply with demand. PMID:24559981

  8. Complexity markers in morphosyntactic productions in French-speaking children with specific language impairment (SLI).

    PubMed

    Prigent, Gaïd; Parisse, Christophe; Leclercq, Anne-Lise; Maillart, Christelle

    2015-01-01

    The usage-based theory considers that the morphosyntactic productions of children with SLI are particularly dependent on input frequency. When producing complex syntax, the language of these children is, therefore, predicted to have a lower variability and to contain fewer infrequent morphosyntactic markers than that of younger children matched on morphosyntactic abilities. Using a spontaneous language task, the current study compared the complexity of the morphological and structural productions of 20 children with SLI and 20 language-matched peers (matched on both morphosyntactic comprehension and mean length of utterance). As expected, results showed that although basic structures were produced in the same way in both groups, several complex forms (i.e. tenses such as Imperfect, Future or Conditional and Conjunctions) were less frequent in the productions of children with SLI. Finally, we attempted to highlight complex linguistic forms that could be good clinical markers for these children.

  9. PHOTOIONIZATION MODELS OF THE INNER GASEOUS DISK OF THE HERBIG BE STAR BD+65 1637

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, P.; Sigut, T. A. A.; Landstreet, J. D., E-mail: ppatel54@uwo.ca

    2016-01-20

    We attempt to constrain the physical properties of the inner, gaseous disk of the Herbig Be star BD+65 1637 using non-LTE, circumstellar disk codes and observed spectra (3700–10500 Å) from the ESPaDOnS instrument on the Canada–France–Hawaii Telescope. The photoionizing radiation of the central star is assumed to be the sole source of input energy for the disk. We model optical and near-infrared emission lines that are thought to form in this region using standard techniques that have been successful in modeling the spectra of classical Be stars. By comparing synthetic line profiles of hydrogen, helium, iron, and calcium with the observed line profiles, we try to constrain the geometry, density structure, and kinematics of the gaseous disk. Reasonable matches have been found for all line profiles individually; however, no disk density model based on a single power law for the equatorial density was able to simultaneously fit all of the observed emission lines. Among the emission lines, the metal lines, especially the Ca II IR triplet, seem to require higher disk densities than the other lines. Excluding the Ca II lines, a model in which the equatorial disk density falls as 10⁻¹⁰ (R*/R)³ g cm⁻³ seen at an inclination of 45° for a 50 R* disk provides reasonable matches to the overall line shapes and strengths. The Ca II lines seem to require a shallower drop-off as 10⁻¹⁰ (R*/R)² g cm⁻³ to match their strength. More complex disk density models are likely required to refine the match to the BD+65 1637 spectrum.

  10. Photoionization Models of the Inner Gaseous Disk of the Herbig Be Star BD+65 1637

    NASA Astrophysics Data System (ADS)

    Patel, P.; Sigut, T. A. A.; Landstreet, J. D.

    2016-01-01

    We attempt to constrain the physical properties of the inner, gaseous disk of the Herbig Be star BD+65 1637 using non-LTE, circumstellar disk codes and observed spectra (3700-10500 Å) from the ESPaDOnS instrument on the Canada-France-Hawaii Telescope. The photoionizing radiation of the central star is assumed to be the sole source of input energy for the disk. We model optical and near-infrared emission lines that are thought to form in this region using standard techniques that have been successful in modeling the spectra of classical Be stars. By comparing synthetic line profiles of hydrogen, helium, iron, and calcium with the observed line profiles, we try to constrain the geometry, density structure, and kinematics of the gaseous disk. Reasonable matches have been found for all line profiles individually; however, no disk density model based on a single power law for the equatorial density was able to simultaneously fit all of the observed emission lines. Among the emission lines, the metal lines, especially the Ca II IR triplet, seem to require higher disk densities than the other lines. Excluding the Ca II lines, a model in which the equatorial disk density falls as 10⁻¹⁰ (R*/R)³ g cm⁻³ seen at an inclination of 45° for a 50 R* disk provides reasonable matches to the overall line shapes and strengths. The Ca II lines seem to require a shallower drop-off as 10⁻¹⁰ (R*/R)² g cm⁻³ to match their strength. More complex disk density models are likely required to refine the match to the BD+65 1637 spectrum.

  11. Frequency domain finite-element and spectral-element acoustic wave modeling using absorbing boundaries and perfectly matched layer

    NASA Astrophysics Data System (ADS)

    Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi

    2018-04-01

    Wave propagation modeling, as a vital tool in seismology, can be done via several different numerical methods, among them the finite-difference, finite-element, and spectral-element methods (FDM, FEM and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Regarding flexibility in complex geological models and dealing with the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that the frequency-domain FEM and SEM have good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for the frequency-domain FEM and SEM. In contrast to the PML method, CE-ABC2 does not add any additional computational cost to the modeling except assembling boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and high-level absorbing performance is unnecessary.
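
    A 1D toy version of the frequency-domain approach is sketched below: linear finite elements for the Helmholtz equation with a first-order radiation condition u' = iku at each end, the one-dimensional analogue of an absorbing boundary condition, so no PML layer is required. Grid size, wavenumber, and source are illustrative.

```python
import numpy as np

# 1D Helmholtz with linear finite elements and first-order radiation
# (absorbing) conditions u' = i k u at both ends; no PML layer needed.
n, L = 400, 1.0
k = 2.0 * np.pi * 15.0                        # wavenumber
h = L / n

A = np.zeros((n + 1, n + 1), dtype=complex)   # stiffness - k^2 * mass
ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0
for e in range(n):
    idx = np.ix_([e, e + 1], [e, e + 1])
    A[idx] += ke - k ** 2 * me
A[0, 0] -= 1j * k                             # absorbing boundary terms
A[-1, -1] -= 1j * k

f = np.zeros(n + 1, dtype=complex)
f[n // 2] = 1.0                               # point source mid-domain
u = np.linalg.solve(A, f)
print(f"|u| at the two ends: {abs(u[0]):.4f}, {abs(u[-1]):.4f}")
```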

  12. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  13. Comparison of Point Matching Techniques for Road Network Matching

    NASA Astrophysics Data System (ADS)

    Hackeloeer, A.; Klasing, K.; Krisp, J. M.; Meng, L.

    2013-05-01

    Map conflation investigates the unique identification of geographical entities across different maps depicting the same geographic region. It involves a matching process which aims to find commonalities between geographic features. A specific subdomain of conflation called Road Network Matching establishes correspondences between road networks of different maps on multiple layers of abstraction, ranging from elementary point locations to high-level structures such as road segments or even subgraphs derived from the induced graph of a road network. The process of identifying points located on different maps by means of geometrical, topological and semantical information is called point matching. This paper provides an overview of various techniques for point matching, which is a fundamental requirement for subsequent matching steps focusing on complex high-level entities in geospatial networks. Common point matching approaches as well as certain combinations of these are described, classified and evaluated. Furthermore, a novel similarity metric called the Exact Angular Index is introduced, which considers both topological and geometrical aspects. The results offer a basis for further research on a bottom-up matching process for complex map features, which must rely upon findings derived from suitable point matching algorithms. In the context of Road Network Matching, reliable point matches provide an immediate starting point for finding matches between line segments describing the geometry and topology of road networks, which may in turn be used for performing a structural high-level matching on the network level.
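
    The geometric-plus-topological idea can be reduced to a few lines; the sketch below (toy coordinates and node degrees; it does not implement the paper's Exact Angular Index) accepts a candidate pair only when the points are close and their node degrees agree.

```python
import numpy as np

# Toy point matching: accept a pair only if the points are close (geometry)
# and their node degrees agree (topology). Coordinates are illustrative.
nodes_a = {1: (0.00, 0.00), 2: (1.00, 0.10), 3: (2.00, 0.00)}
deg_a = {1: 3, 2: 4, 3: 1}
nodes_b = {10: (0.02, 0.01), 11: (1.01, 0.12), 12: (2.50, 0.50)}
deg_b = {10: 3, 11: 4, 12: 3}

def match_points(na, da, nb, db, max_dist=0.1):
    pairs = []
    for i, (xa, ya) in na.items():
        for j, (xb, yb) in nb.items():
            d = np.hypot(xa - xb, ya - yb)
            if d <= max_dist and da[i] == db[j]:
                pairs.append((i, j, round(float(d), 4)))
    return sorted(pairs, key=lambda t: t[2])

print(match_points(nodes_a, deg_a, nodes_b, deg_b))   # pairs (1,10) and (2,11)
```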

  14. Propensity Scores in Pharmacoepidemiology: Beyond the Horizon.

    PubMed

    Jackson, John W; Schmid, Ian; Stuart, Elizabeth A

    2017-12-01

    Propensity score methods have become commonplace in pharmacoepidemiology over the past decade. Their adoption has confronted formidable obstacles that arise from pharmacoepidemiology's reliance on large healthcare databases of considerable heterogeneity and complexity. These include identifying clinically meaningful samples, defining treatment comparisons, and measuring covariates in ways that respect sound epidemiologic study design. Additional complexities involve correctly modeling treatment decisions in the face of variation in healthcare practice, and dealing with missing information and unmeasured confounding. In this review, we examine the application of propensity score methods in pharmacoepidemiology with particular attention to these and other issues, with an eye towards standards of practice, recent methodological advances, and opportunities for future progress. Propensity score methods have matured in ways that can advance comparative effectiveness and safety research in pharmacoepidemiology. These include natural extensions for categorical treatments, matching algorithms that can optimize sample size given design constraints, weighting estimators that asymptotically target matched and overlap samples, and the incorporation of machine learning to aid in covariate selection and model building. These recent and encouraging advances should be further evaluated through simulation and empirical studies, but nonetheless represent a bright path ahead for the observational study of treatment benefits and harms.
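
    A bare-bones propensity-score matching workflow on synthetic data is sketched below; real pharmacoepidemiologic analyses involve far more design work (cohort definition, covariate measurement, diagnostics) than these few lines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n = 2000
x = rng.standard_normal((n, 3))                         # measured covariates
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
y = 2.0 * treated + x[:, 0] + rng.standard_normal(n)    # true effect = 2.0

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# 1:1 nearest-neighbour matching (with replacement) on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
effect = y[treated].mean() - y[~treated][idx.ravel()].mean()
naive = y[treated].mean() - y[~treated].mean()
print(f"naive estimate {naive:.2f}, matched estimate {effect:.2f} (truth 2.00)")
```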

  15. Solving the Secondary Structure Matching Problem in Cryo-EM De Novo Modeling Using a Constrained K-Shortest Path Graph Algorithm.

    PubMed

    Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing

    2014-01-01

    Electron cryomicroscopy is becoming a major experimental technique in solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem into a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with a maximum of 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
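
    The exponential-in-N dynamic programming flavour can be conveyed with a toy subset DP (the real DP-TOSS adds skeleton-path constraints and K-shortest-path ranking): assign sequence helices, in order, to detected sticks so that total length mismatch is minimized.

```python
# Subset dynamic program: assign sequence helices, in order, to detected
# sticks, minimizing total length mismatch. Lengths are illustrative.
seq_helices = [12.0, 25.0, 9.0]        # helix lengths from the sequence
sticks = [11.0, 8.5, 24.0, 13.5]       # stick lengths from the density map

N = len(sticks)
best = {0: (0.0, [])}                  # used-stick bitmask -> (cost, mapping)
for h in seq_helices:
    nxt = {}
    for mask, (cost, mapping) in best.items():
        for j in range(N):
            if mask & (1 << j):
                continue               # stick j already assigned
            cand = (cost + abs(h - sticks[j]), mapping + [j])
            m = mask | (1 << j)
            if m not in nxt or cand < nxt[m]:
                nxt[m] = cand
    best = nxt

cost, mapping = min(best.values())
print("helix -> stick:", mapping, "| total mismatch:", cost)
```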

  16. Sample entropy and regularity dimension in complexity analysis of cortical surface structure in early Alzheimer's disease and aging.

    PubMed

    Chen, Ying; Pham, Tuan D

    2013-05-15

    We apply for the first time the sample entropy (SampEn) and regularity dimension model for measuring signal complexity to quantify the structural complexity of the brain on MRI. The concept of the regularity dimension is based on the theory of chaos for studying nonlinear dynamical systems, where power laws and entropy measures are adopted to develop the regularity dimension for modeling a mathematical relationship between the frequencies with which information about signal regularity changes across scales. The sample entropy and regularity dimension of MRI-based brain structural complexity are computed for older adults with early Alzheimer's disease (AD) and age- and gender-matched non-demented controls, as well as for a wide range of ages from young people to older adults. A significantly higher global cortical structure complexity is detected in AD individuals (p<0.001). Increases in SampEn and the regularity dimension are also found to accompany aging, which might indicate an age-related exacerbation of cortical structural irregularity. The provided model can potentially be used as an imaging biomarker for early prediction of AD and age-related cognitive decline.
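
    For reference, a standard sample entropy implementation for a 1D signal is sketched below (the paper's contribution is applying SampEn and the derived regularity dimension to cortical surface structure, which this sketch does not reproduce).

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn of a 1D signal: -log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def pair_count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - m)])  # templates
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)     # Chebyshev
        return np.sum(d <= r) - len(t)                          # drop self-pairs
    return -np.log(pair_count(m + 1) / pair_count(m))

rng = np.random.default_rng(7)
print("white noise:", round(sample_entropy(rng.standard_normal(500)), 3))
print("sine wave  :", round(sample_entropy(np.sin(np.linspace(0, 30 * np.pi, 500))), 3))
```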

  17. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing a feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constructed. This reduces the dimension of the feature vector and decreases the complexity of building the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because the rotation operation for the area is cancelled. The experimental results indicate that, compared with the OpenCV SIFT algorithm, the average matching speed of the new method increases by about 55.86%. The matching accuracy can be increased even under some variation of viewpoint, illumination, rotation, scale and defocus. The new method obtained satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
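
    The 6-sector, 8-orientation-bin layout yielding 48 dimensions can be sketched directly (illustrative parameters; not the authors' implementation): sector and orientation assignments are both measured relative to the dominant orientation, which is what removes the explicit patch-rotation step.

```python
import numpy as np

def sector_descriptor(patch, dominant=0.0, radius=8):
    """48-D descriptor: 6 sectors (starting at the dominant orientation)
    x 8 orientation bins of gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = (np.arctan2(gy, gx) - dominant) % (2.0 * np.pi)   # rotation-normalized
    cy, cx = (np.array(patch.shape) - 1) / 2.0
    yy, xx = np.mgrid[:patch.shape[0], :patch.shape[1]]
    ang = (np.arctan2(yy - cy, xx - cx) - dominant) % (2.0 * np.pi)
    inside = np.hypot(yy - cy, xx - cx) <= radius
    sector = np.minimum((ang / (2.0 * np.pi) * 6).astype(int), 5)
    obin = np.minimum((ori / (2.0 * np.pi) * 8).astype(int), 7)
    desc = np.zeros((6, 8))
    np.add.at(desc, (sector[inside], obin[inside]), mag[inside])
    d = desc.ravel()
    return d / (np.linalg.norm(d) + 1e-12)

patch = np.random.default_rng(8).random((17, 17))
print(sector_descriptor(patch).shape)     # -> (48,)
```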

  18. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    PubMed

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-09-01

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions.
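
    The classical Maxwell Garnett formula for spherical inclusions gives the flavour of the mixing step; the paper's model replaces the spherical polarizability with polarization coefficients for charged oblate spheroids, which the sketch below (toy permittivities and conductivities) does not include.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective complex permittivity of a dilute suspension of spheres
    with volume fraction f in a host medium."""
    a = (eps_incl - eps_host) / (eps_incl + 2.0 * eps_host)
    return eps_host * (1.0 + 2.0 * f * a) / (1.0 - f * a)

eps0 = 8.854e-12
omega = 2.0 * np.pi * np.logspace(-1, 4, 6)        # 0.1 Hz .. 10 kHz
eps_host = 80.0 * eps0 + 1.0 / (1j * omega)        # brine, sigma ~ 1 S/m
eps_incl = 5.0 * eps0 + 1e-3 / (1j * omega)        # low-conductivity grains
print(maxwell_garnett(eps_host, eps_incl, f=0.02))
```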

  19. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple yet flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the same problem.
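
    The role of the Gaussian mixture proposal can be illustrated with a one-dimensional sampling-importance-resampling pass. In the sketch below an analytic bimodal density stands in for the emulated posterior (in the paper, posterior evaluations go through a Gaussian-process surrogate of the reservoir simulator, and the mixture is refit adaptively); all distributions and parameter values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_posterior(x):
            # Stand-in bimodal posterior: two parameter combinations
            # explain the measurements equally well.
            return np.logaddexp(-0.5 * ((x - 1.0) / 0.2) ** 2,
                                -0.5 * ((x + 1.0) / 0.2) ** 2)

        # Two-component Gaussian mixture proposal covering both modes
        means, sigmas, weights = (np.array([-1.0, 1.0]),
                                  np.array([0.5, 0.5]),
                                  np.array([0.5, 0.5]))

        def sample_proposal(n):
            comp = rng.choice(2, size=n, p=weights)
            return rng.normal(means[comp], sigmas[comp])

        def log_proposal(x):
            comps = -0.5 * ((x[:, None] - means) / sigmas) ** 2 - np.log(sigmas)
            return np.logaddexp.reduce(np.log(weights) + comps, axis=1)

        x = sample_proposal(5000)                        # draw from the proposal
        logw = log_posterior(x) - log_proposal(x)        # importance weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        posterior = rng.choice(x, size=2000, p=w)        # resample
        print(posterior.mean(), (posterior > 0).mean())  # both modes retained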

  20. ION COMPOSITION ELUCIDATION (ICE): A HIGH RESOLUTION MASS SPECTROMETRIC TECHNIQUE FOR IDENTIFYING COMPOUNDS IN COMPLEX MIXTURES

    EPA Science Inventory

    When tentatively identifying compounds in complex mixtures using mass spectral libraries, multiple matches or no plausible matches due to a high level of chemical noise or interferences can occur. Worse yet, most analytes are not in the libraries. In each case, Ion Composition El...

  1. A Model of Compound Heterozygous, Loss-of-Function Alleles Is Broadly Consistent with Observations from Complex-Disease GWAS Datasets

    PubMed Central

    Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.

    2017-01-01

    The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin- and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232

  2. Refractive index and solubility control of para-cymene solutions for index-matched fluid-structure interaction studies

    NASA Astrophysics Data System (ADS)

    Fort, Charles; Fu, Christopher D.; Weichselbaum, Noah A.; Bardet, Philippe M.

    2015-12-01

    To deploy optical diagnostics such as particle image velocimetry or planar laser-induced fluorescence (PLIF) in complex geometries, it is beneficial to use index-matched facilities. A binary mixture of para-cymene and cinnamaldehyde provides a viable option for matching the refractive index of acrylic, a common material for scaled models and test sections. This fluid is particularly appropriate for large-scale facilities and when a low-density and low-viscosity fluid is sought, such as in fluid-structure interaction studies. This binary solution has relatively low kinematic viscosity and density; its use enables the experimentalist to select operating temperature and to increase fluorescence signal in PLIF experiments. Measurements of spectral and temperature dependence of refractive index, density, and kinematic viscosity are reported. The effect of the binary mixture on solubility control of Rhodamine 6G is also characterized.

  3. An image understanding system using attributed symbolic representation and inexact graph-matching

    NASA Astrophysics Data System (ADS)

    Eshera, M. A.; Fu, K.-S.

    1986-09-01

    A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar image.

  4. Escape Distance in Ground-Nesting Birds Differs with Individual Level of Camouflage.

    PubMed

    Wilson-Aggarwal, Jared K; Troscianko, Jolyon T; Stevens, Martin; Spottiswoode, Claire N

    2016-08-01

    Camouflage is one of the most widespread antipredator strategies in the animal kingdom, yet no animal can match its background perfectly in a complex environment. Therefore, selection should favor individuals that use information on how effective their camouflage is in their immediate habitat when responding to an approaching threat. In a field study of African ground-nesting birds (plovers, coursers, and nightjars), we tested the hypothesis that individuals adaptively modulate their escape behavior in relation to their degree of background matching. We used digital imaging and models of predator vision to quantify differences in color, luminance, and pattern between eggs and their background, as well as the plumage of incubating adult nightjars. We found that plovers and coursers showed greater escape distances when their eggs were a poorer pattern match to the background. Nightjars sit on their eggs until a potential threat is nearby, and, correspondingly, they showed greater escape distances when the pattern and color of the incubating adult's plumage, rather than its eggs, were a poorer match to the background. Finally, escape distances were shorter in the middle of the day, suggesting that escape behavior is mediated by both camouflage and thermoregulation.

  5. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimate of the model parameters is obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predictive accuracy but also offers favorable computational efficiency. PMID:29194393
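
    The implausibility screening step follows the standard history-matching form: an input x is discarded when the emulator mean is too far from the observation relative to all recognised variances. A minimal sketch, with hypothetical emulator outputs standing in for the GAM predictions and the conventional cutoff of 3:

        import numpy as np

        def implausibility(z_obs, mu_em, var_em, var_obs, var_disc=0.0):
            """I(x) = |z - E[f(x)]| / sqrt(emulator + observation +
            model-discrepancy variance); inputs with I(x) > 3 are deemed
            implausible and removed from the parameter space."""
            return np.abs(z_obs - mu_em) / np.sqrt(var_em + var_obs + var_disc)

        mu = np.array([4.2, 6.9, 5.1])      # hypothetical emulator means
        var = np.array([0.30, 0.05, 0.50])  # hypothetical emulator variances
        I = implausibility(z_obs=5.0, mu_em=mu, var_em=var, var_obs=0.1)
        print(I.round(2), I <= 3.0)         # keep only non-implausible inputs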

  6. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimate of the model parameters is obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predictive accuracy but also offers favorable computational efficiency.

  7. Stability and structural properties of gene regulation networks with coregulation rules.

    PubMed

    Warrell, Jonathan; Mhlanga, Musa

    2017-05-07

    Coregulation of the expression of groups of genes has been extensively demonstrated empirically in bacterial and eukaryotic systems. Such coregulation can arise through the use of shared regulatory motifs, which allow the coordinated expression of modules (and module groups) of functionally related genes across the genome. Coregulation can also arise through the physical association of multi-gene complexes through chromosomal looping, which are then transcribed together. We present a general formalism for modeling coregulation rules in the framework of Random Boolean Networks (RBN), and develop specific models for transcription factor networks with modular structure (including module groups, and multi-input modules (MIM) with autoregulation) and multi-gene complexes (including hierarchical differentiation between multi-gene complex members). We develop a mean-field approach to analyse the dynamical stability of large networks incorporating coregulation, and show that autoregulated MIM and hierarchical gene-complex models can achieve greater stability than networks without coregulation whose rules have matching activation frequency. We provide further analysis of the stability of small networks of both kinds through simulations. We also characterize several general properties of the transients and attractors in the hierarchical coregulation model, and show using simulations that the steady-state distribution factorizes hierarchically as a Bayesian network in a Markov Jump Process analogue of the RBN model. Copyright © 2017. Published by Elsevier Ltd.

  8. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Transition Features from Simplicity-Universality to Complexity-Diversification Under UHNTF

    NASA Astrophysics Data System (ADS)

    Fang, Jin-Qing; Li, Yong

    2010-02-01

    A large unified hybrid network model with variable speed growth (LUHNM-VSG) is proposed as the third model of the unified hybrid network theoretical framework (UHNTF). A hybrid growth ratio vg of deterministic to random linking numbers and a variable speed growth index α are introduced. The main effects of vg and α on the topological transition features of the LUHNM-VSG are revealed. For comparison with the other models, we construct a type of network complexity pyramid with seven levels, in which, from the bottom level-1 to the top level-7 of the pyramid, simplicity-universality increases while complexity-diversity decreases. The transition relations between them depend on the matching of four hybrid ratios (dr, fd, gr, vg). Thus most network models can be investigated in a unified way via the four hybrid ratios (dr, fd, gr, vg). The LUHNM-VSG, as level-1 of the pyramid, gives a much better and closer description of real-world networks and has potential applications.

  9. Cluster of Serogroup W135 Meningococci, Southeastern Florida, 2008–2009

    PubMed Central

    Mejia-Echeverry, Alvaro; Fiorella, Paul; Leguen, Fermin; Livengood, John; Kay, Robyn; Hopkins, Richard

    2010-01-01

    Recently, 14 persons in southeastern Florida were identified with Neisseria meningitidis serogroup W135 invasive infections. All isolates tested had matching or near-matching pulsed-field gel electrophoresis patterns and belonged to the multilocus sequence type 11 clonal complex. The epidemiologic investigation suggested recent endemic transmission of this clonal complex in southeastern Florida. PMID:20031054

  10. Optimized design and analysis of preclinical intervention studies in vivo

    PubMed Central

    Laajala, Teemu D.; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero

    2016-01-01

    Recent reports have called into question the reproducibility, validity and translatability of preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby saving both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable researchers to conduct adequately powered and fully blinded preclinical intervention studies, with the aim of accelerating the discovery of new therapeutic interventions. PMID:27480578
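
    The flavour of matching-based allocation can be conveyed with a small sketch: animals are paired by baseline similarity and treatment is randomized within each pair, so the arms are balanced at baseline. This is a deliberately simplified two-arm version (the published approach and its software handle multi-group allocation and power calculations, and do not pre-split the cohort in half as done here); the baseline covariates are simulated.

        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(1)
        baseline = rng.normal(size=(20, 2))   # e.g. tumour volume, body weight

        # Pair each animal in the first half with a distinct animal in the
        # second half, minimising the total baseline distance.
        d = cdist(baseline[:10], baseline[10:])
        rows, cols = linear_sum_assignment(d)

        treatment, control = [], []
        for i, j in zip(rows, cols + 10):
            a, b = (i, j) if rng.random() < 0.5 else (j, i)  # randomize in pair
            treatment.append(a); control.append(b)

        print(baseline[treatment].mean(axis=0))  # group baseline means
        print(baseline[control].mean(axis=0))    # are closely balanced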

  11. Optimized design and analysis of preclinical intervention studies in vivo.

    PubMed

    Laajala, Teemu D; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero

    2016-08-02

    Recent reports have called into question the reproducibility, validity and translatability of preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby saving both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable researchers to conduct adequately powered and fully blinded preclinical intervention studies, with the aim of accelerating the discovery of new therapeutic interventions.

  12. Geological and geothermal investigations for HCMM-derived data. [hydrothermally altered areas in Yerington, Nevada

    NASA Technical Reports Server (NTRS)

    Lyon, R. J. P.; Prelat, A. E.; Kirk, R. (Principal Investigator)

    1981-01-01

    An attempt was made to match HCMM- and U2HCMR-derived temperature data over two test sites of very local size to similar data collected in the field at nearly the same times. Results indicate that HCMM investigations using resolution cells of 500 m or so are best conducted with areally-extensive sites, rather than point observations. The excellent quality day-VIS imagery is particularly useful for lineament studies, as is the DELTA-T imagery. Attempts to register the ground-observed temperatures (even for 0.5 sq mile targets) were unsuccessful due to excessive pixel-to-pixel noise in the HCMM data. Several computer models were explored and related to thermal parameter value changes with observed data. Unless quite complex models are used, with many parameters which can be observed (perhaps not even measured) only under remote sensing conditions (e.g., roughness, wind shear, etc.), the model outputs do not match the observed data. Empirical relationships may be most readily studied.

  13. In vitro psoriasis models with focus on reconstructed skin models as promising tools in psoriasis research

    PubMed Central

    Desmet, Eline; Ramadhas, Anesh; Lambert, Jo

    2017-01-01

    Psoriasis is a complex chronic immune-mediated inflammatory cutaneous disease associated with the development of inflammatory plaques on the skin. Studies have shown that the disease results from a deregulated interplay between skin keratinocytes, immune cells and the environment, leading to a persisting inflammatory process modulated by pro-inflammatory cytokines and activation of T cells. However, a major hindrance to studying the pathogenesis of psoriasis in more depth, and to the subsequent development of novel therapies, is the lack of suitable pre-clinical models mimicking the complex phenotype of this skin disorder. Recent advances in and optimization of three-dimensional skin equivalent models have made them attractive and promising alternatives to simplistic monolayer cultures, immunologically different in vivo models and scarce ex vivo skin explants. Moreover, human skin equivalents are increasing in complexity to match human biology as closely as possible. Here, we critically review the different types of three-dimensional skin models of psoriasis with relevance to their application potential and advantages over other models. This will guide researchers in choosing the most suitable psoriasis skin model for therapeutic drug testing (including gene therapy via siRNA molecules), or for examining biological features contributing to the pathology of psoriasis. However, the addition of T cells (as recently applied to a de-epidermized dermis-based psoriatic skin model) or other immune cells would make them even more attractive models and broaden their application potential. Eventually, the ultimate goal would be to substitute animal models with three-dimensional psoriatic skin models in the pre-clinical phases of anti-psoriasis candidate drugs. Impact statement: The continuous development of novel in vitro models mimicking the psoriasis phenotype is important in the field of psoriasis research, as currently no model exists that completely matches the in vivo psoriasis skin or the disease pathology. This work provides a complete overview of the different available in vitro psoriasis models and suggests improvements for future models. Moreover, a focus was given to psoriatic skin equivalent models, as they offer several advantages over the other models, including commercial availability and validity. The potential and reported applicability of these models in psoriasis pre-clinical research is extensively discussed. As such, this work offers a guide to researchers in their choice of pre-clinical psoriasis model depending on their type of research question. PMID:28585891

  14. In vitro psoriasis models with focus on reconstructed skin models as promising tools in psoriasis research.

    PubMed

    Desmet, Eline; Ramadhas, Anesh; Lambert, Jo; Van Gele, Mireille

    2017-06-01

    Psoriasis is a complex chronic immune-mediated inflammatory cutaneous disease associated with the development of inflammatory plaques on the skin. Studies have shown that the disease results from a deregulated interplay between skin keratinocytes, immune cells and the environment, leading to a persisting inflammatory process modulated by pro-inflammatory cytokines and activation of T cells. However, a major hindrance to studying the pathogenesis of psoriasis in more depth, and to the subsequent development of novel therapies, is the lack of suitable pre-clinical models mimicking the complex phenotype of this skin disorder. Recent advances in and optimization of three-dimensional skin equivalent models have made them attractive and promising alternatives to simplistic monolayer cultures, immunologically different in vivo models and scarce ex vivo skin explants. Moreover, human skin equivalents are increasing in complexity to match human biology as closely as possible. Here, we critically review the different types of three-dimensional skin models of psoriasis with relevance to their application potential and advantages over other models. This will guide researchers in choosing the most suitable psoriasis skin model for therapeutic drug testing (including gene therapy via siRNA molecules), or for examining biological features contributing to the pathology of psoriasis. However, the addition of T cells (as recently applied to a de-epidermized dermis-based psoriatic skin model) or other immune cells would make them even more attractive models and broaden their application potential. Eventually, the ultimate goal would be to substitute animal models with three-dimensional psoriatic skin models in the pre-clinical phases of anti-psoriasis candidate drugs. Impact statement: The continuous development of novel in vitro models mimicking the psoriasis phenotype is important in the field of psoriasis research, as currently no model exists that completely matches the in vivo psoriasis skin or the disease pathology. This work provides a complete overview of the different available in vitro psoriasis models and suggests improvements for future models. Moreover, a focus was given to psoriatic skin equivalent models, as they offer several advantages over the other models, including commercial availability and validity. The potential and reported applicability of these models in psoriasis pre-clinical research is extensively discussed. As such, this work offers a guide to researchers in their choice of pre-clinical psoriasis model depending on their type of research question.

  15. Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Han, Bo

    2017-09-01

    Within the traditional framework of EM modeling algorithms, the Dirichlet boundary is usually used, which assumes that the field values are zero at the boundaries. This crude condition requires the boundaries to be sufficiently far away from the area of interest. Although cell sizes can grow toward the boundaries because the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using the staggered finite-difference discretization. This modeling algorithm using the CFS-PML is highly accurate and offers savings in computational time and memory over the Dirichlet boundary. For 3D problems, these savings should be even more significant.
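
    The coordinate-stretching factor that defines the CFS-PML has a standard closed form, s(d) = kappa(d) + sigma(d)/(alpha(d) + i*omega), with sigma and kappa graded polynomially across the layer. The sketch below shows this generic form only; the grading order, the alpha profile and all numerical values are implementation choices for illustration, not the parameters used in the paper.

        import numpy as np

        def cfs_pml_stretch(depth, thickness, omega, sigma_max,
                            kappa_max=5.0, alpha_max=0.05, m=3):
            """CFS-PML stretching factor s(d) = kappa + sigma/(alpha + i*omega)
            at distance `depth` into an absorbing layer of given thickness."""
            x = np.clip(depth / thickness, 0.0, 1.0)  # 0 at interface, 1 at edge
            sigma = sigma_max * x ** m                # polynomial grading
            kappa = 1.0 + (kappa_max - 1.0) * x ** m
            alpha = alpha_max * (1.0 - x)             # CFS term, max at interface
            return kappa + sigma / (alpha + 1j * omega)

        omega = 2 * np.pi * 1.0                       # 1 Hz, typical marine CSEM
        d = np.linspace(0.0, 1000.0, 5)               # positions in a 1 km layer
        print(cfs_pml_stretch(d, 1000.0, omega, sigma_max=1e-2))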

  16. Met or matched expectations: what accounts for a successful back pain consultation in primary care?

    PubMed Central

    Georgy, Ehab E.; Carr, Eloise C.J.; Breen, Alan C.

    2011-01-01

    Background: Patients' as well as doctors' expectations might be key elements for improving the quality of health care; however, previous conceptual and theoretical frameworks related to expectations often overlook the complex and complementary relationship between patients' and doctors' expectations. The concept of 'matched patient-doctor expectations' has not been properly investigated, and there is a lack of literature exploring this aspect of the consultation. Aim: The paper presents a preliminary conceptual model for the relationship between patients' and doctors' expectations with specific reference to back pain management in primary care. Methods: The methods employed in this study are an integrative literature review, examination of previous theoretical frameworks, identification of conceptual issues in the existing literature, and synthesis and development of a preliminary pragmatic conceptual framework. Outcome: A simple preliminary model explaining the formation of expectations in relation to specific antecedents and consequences was developed; the model incorporates several stages and filters (influencing factors, underlying reactions, judgement, formed reactions, outcome and significance) to explain the development and anticipated influence of expectations on the consultation outcome. Conclusion: The newly developed model takes into account several important dynamics that might be key elements for a more successful back pain consultation in primary care, mainly the importance of matching patients' and doctors' expectations as well as the importance of addressing unmet expectations. PMID:21679288

  17. Constrained Surface Complexation Modeling: Rutile in RbCl, NaCl, and NaCF3SO3 Media to 250 °C

    DOE PAGES

    Machesky, Michael L.; Předota, Milan; Ridley, Moira K.; ...

    2015-06-01

    In this paper, a comprehensive set of molecular-level results, primarily from classical molecular dynamics (CMD) simulations, is used to constrain CD-MUSIC surface complexation model (SCM) parameters describing rutile powder titrations conducted in RbCl, NaCl, and NaTr (Tr = triflate, CF3SO3-) electrolyte media from 25 to 250 °C. Rb+ primarily occupies the innermost tetradentate binding site on the rutile (110) surface at all temperatures (25, 150, 250 °C) and negative charge conditions (-0.1 and -0.2 C/m2) probed via CMD simulations, reflecting the small hydration energy of this large, monovalent cation. Consequently, variable SCM parameters (Stern-layer capacitance values and intrinsic Rb+ binding constants) were adjusted relatively easily to satisfactorily match the CMD and titration data. The larger hydration energy of Na+ results in a more complex inner-sphere distribution, which shifts from bidentate to tetradentate binding with increasing negative charge and temperature; this distribution was not matched well for both negative charge conditions, which may reflect limitations in the CMD and/or SCM approaches. Finally, the CMD axial density profiles for Rb+ and Na+ reveal that peak binding distances shift toward the surface with increasing negative charge, suggesting that the CD-MUSIC framework may be improved by incorporating CD or Stern-layer capacitance values that vary with charge.

  18. Constrained Surface Complexation Modeling: Rutile in RbCl, NaCl, and NaCF3SO3 Media to 250 °C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machesky, Michael L.; Předota, Milan; Ridley, Moira K.

    In this paper, a comprehensive set of molecular-level results, primarily from classical molecular dynamics (CMD) simulations, is used to constrain CD-MUSIC surface complexation model (SCM) parameters describing rutile powder titrations conducted in RbCl, NaCl, and NaTr (Tr = triflate, CF3SO3-) electrolyte media from 25 to 250 °C. Rb+ primarily occupies the innermost tetradentate binding site on the rutile (110) surface at all temperatures (25, 150, 250 °C) and negative charge conditions (-0.1 and -0.2 C/m2) probed via CMD simulations, reflecting the small hydration energy of this large, monovalent cation. Consequently, variable SCM parameters (Stern-layer capacitance values and intrinsic Rb+ binding constants) were adjusted relatively easily to satisfactorily match the CMD and titration data. The larger hydration energy of Na+ results in a more complex inner-sphere distribution, which shifts from bidentate to tetradentate binding with increasing negative charge and temperature; this distribution was not matched well for both negative charge conditions, which may reflect limitations in the CMD and/or SCM approaches. Finally, the CMD axial density profiles for Rb+ and Na+ reveal that peak binding distances shift toward the surface with increasing negative charge, suggesting that the CD-MUSIC framework may be improved by incorporating CD or Stern-layer capacitance values that vary with charge.

  19. Geometric state space uncertainty as a new type of uncertainty addressing disparity in 'emergent properties' between real and modeled systems

    NASA Astrophysics Data System (ADS)

    Montero, J. T.; Lintz, H. E.; Sharp, D.

    2013-12-01

    Do emergent properties that result from models of complex systems match emergent properties from real systems? This question targets a type of uncertainty that we argue requires more attention in system modeling and validation efforts. We define an 'emergent property' to be an attribute or behavior of a modeled or real system that can be surprising or unpredictable and results from complex interactions among the components of a system. For example, thresholds are common across diverse systems and scales and can represent emergent system behavior that is difficult to predict. Thresholds or other types of emergent system behavior can be characterized by their geometry in state space (where state space is the space containing the set of all states of a dynamic system). One way to expedite our growing mechanistic understanding of how emergent properties emerge from complex systems is to compare the geometry of surfaces in state space between real and modeled systems. Here, we present an index (threshold strength) that can quantify a geometric attribute of a surface in state space. We operationally define threshold strength as how strongly a surface in state space resembles a step or an abrupt transition between two system states. First, we validated the index for application in more than three dimensions of state space using simulated data. Then, we demonstrated application of the index in measuring geometric state space uncertainty between a real system and a deterministic, modeled system. In particular, we looked at geometric state space uncertainty between climate behavior in the 20th century and modeled climate behavior simulated by global climate models (GCMs) in the Coupled Model Intercomparison Project phase 5 (CMIP5). Surfaces from the climate models came from running the models over the same domain as the real data. We also created response surfaces from real climate data based on an empirical model that produces a geometric surface of predicted values in state space. We used a kernel regression method designed to capture the geometry of the real data pattern without imposing shape assumptions a priori on the data; this kernel regression method is known as Non-parametric Multiplicative Regression (NPMR). We found that quantifying and comparing a geometric attribute in more than three dimensions of state space can discern whether the emergent nature of complex interactions in modeled systems matches that of real systems. Further, this method has potentially wider application in contexts where searching for abrupt change or 'action' in any hyperspace is desired.

  20. Definition and Measurement of Complexity in the Context of Safety Assurance

    DTIC Science & Technology

    2016-11-01

    We tested it on a second design for each system and on a larger design from a NASA report ("Application of ARP4754A to Flight Critical Systems," NASA, 2015, http://ntrs.nasa.gov/search.jsp?R=20160001634). The complexity measurement must be matched to available review time to determine...

  1. The gap-prepulse inhibition deficit of the cortical N1-P2 complex in patients with tinnitus: The effect of gap duration.

    PubMed

    Ku, Yunseo; Ahn, Joong Woo; Kwon, Chiheon; Kim, Do Youn; Suh, Myung-Whan; Park, Moo Kyun; Lee, Jun Ho; Oh, Seung Ha; Kim, Hee Chan

    2017-05-01

    The present study aimed to investigate whether gap-prepulse inhibition (GPI) deficit in patients with tinnitus occurred in the N1-P2 complex of the cortical auditory evoked potential. Auditory late responses to the intense sound of the GPI paradigm were obtained from 16 patients with tinnitus and 18 age- and hearing loss-matched controls without tinnitus. The inhibition degrees of the N1-P2 complex were assessed at 100-, 50-, and 20-ms gap durations with tinnitus-pitch-matched and non-matched frequency background noises. At the 20-ms gap condition with the tinnitus-pitch-matched frequency background noise, only the tinnitus group showed an inhibition deficit of the N1-P2 complex. The inhibition deficits were absent in both groups with longer gap durations. These findings suggested that the effect of tinnitus emerged depending on the cue onset timing and duration of the gap-prepulse. Since inhibition deficits were observed in both groups at the same 20-ms gap condition, but with the tinnitus-pitch-non-matched frequency background noise, the present study did not offer proof of concept for tinnitus filling in the gap. Additional studies on the intrinsic effects of different background frequencies on the gap processing are required in the future. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Social network supported process recommender system.

    PubMed

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as an aid to process modeling. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Three measures of process matching degree are then presented, and the system implementation is discussed. Finally, experimental evaluations and future work are introduced.

  3. 3-D frequency-domain seismic wave modelling in heterogeneous, anisotropic media using a Gaussian quadrature grid approach

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-01-01

    We present an extension of the 3-D spectral element method (SEM), called the Gaussian quadrature grid (GQG) approach, to simulate in the frequency domain seismic waves in 3-D heterogeneous anisotropic media involving a complex free-surface topography and/or sub-surface geometry. It differs from the conventional SEM in two ways. The first is the replacement of the hexahedral element mesh with 3-D Gaussian quadrature abscissae to directly sample the physical properties or model parameters. This gives a point-gridded model which more exactly and easily matches the free-surface topography and/or any sub-surface interfaces. It does not require that the topography be highly smooth, a condition required in the curved finite-difference method and the spectral method. The second is the derivation of a complex-valued elastic tensor expression for the perfectly matched layer (PML) model parameters for a general anisotropic medium, whose imaginary parts are determined by the PML formulation rather than by choosing a specific class of viscoelastic material. Furthermore, the new formulation is much simpler than the time-domain-oriented PML implementation. The specified imaginary parts of the density and elastic moduli are valid for arbitrary anisotropic media. We give two numerical solutions in full-space homogeneous isotropic and anisotropic media, respectively, compare them with the analytical solutions, and show the excellent effectiveness of the PML model parameters. In addition, we perform numerical simulations of 3-D seismic waves in a heterogeneous, anisotropic model incorporating free-surface ridge topography, validate the results against the 2.5-D modelling solution, and demonstrate the capability of the approach to handle realistic situations.

  4. Analysis and improvement of the quantum image matching

    NASA Astrophysics Data System (ADS)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

    We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.

  5. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for proliferating electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees in electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. The robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  6. Discovering new PI3Kα inhibitors with a strategy of combining ligand-based and structure-based virtual screening

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Gu, Qiong; Xu, Jun

    2018-02-01

    PI3Kα is a promising drug target for cancer chemotherapy. In this paper, we report a strategy of combining ligand-based and structure-based virtual screening to identify new PI3Kα inhibitors. First, naïve Bayesian (NB) learning models and a 3D-QSAR pharmacophore model were built based upon known PI3Kα inhibitors. Then, the SPECS library was screened by the best NB model. This resulted in virtual hits, which were validated by matching the structures against the pharmacophore model. The pharmacophore-matched hits were then docked into PI3Kα crystal structures to form ligand-receptor complexes, which were further validated by the Glide-XP program to yield structurally validated hits. The structurally validated hits were examined by a PI3Kα inhibitory assay. With this screening protocol, ten PI3Kα inhibitors with new scaffolds were discovered, with IC50 values ranging from 0.44 to 31.25 μM. The binding affinities for the most active compounds 33 and 74 were estimated through molecular dynamics simulations and MM-PBSA analyses.

  7. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as demonstrated in this paper.
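
    The subspace step at the heart of the method can be conveyed by plain narrowband MUSIC: the SVD of the observation matrix separates signal and noise subspaces, and source directions appear as peaks of the pseudospectrum. The sketch below uses a plane-wave steering model on a uniform line array rather than the matched-field replica vectors, and omits the compressive sensing stage, so it is a baseline illustration only; all array and noise parameters are invented.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(2)
        n_sensors, n_snapshots, n_sources = 12, 50, 2
        angles_true = np.deg2rad([-20.0, 35.0])

        def steering(theta):
            k = np.arange(n_sensors)[:, None]
            return np.exp(2j * np.pi * 0.5 * k * np.sin(theta))  # d = lambda/2

        S = (rng.normal(size=(n_sources, n_snapshots))
             + 1j * rng.normal(size=(n_sources, n_snapshots)))
        X = steering(angles_true) @ S                 # observation matrix
        X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

        U, s, Vh = np.linalg.svd(X)                   # SVD of the observations
        En = U[:, n_sources:]                         # noise subspace

        grid = np.deg2rad(np.linspace(-90, 90, 721))
        P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
        peaks, _ = find_peaks(P)
        best = peaks[np.argsort(P[peaks])[-n_sources:]]
        print(np.sort(np.rad2deg(grid[best])))        # ~ [-20, 35]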

  8. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  9. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in active sonar detection systems. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
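
    The passive baseline that the method builds on is compact enough to show directly: the time-averaged complex acoustic intensity between pressure and the two orthogonal velocity channels gives the azimuth as atan2(Re Iy, Re Ix). The synthetic signals below are invented for illustration, and the sketch omits the paper's contribution, matched filtering of the known sonar waveform before the intensity average.

        import numpy as np

        rng = np.random.default_rng(3)
        fs, f0, theta = 8000.0, 500.0, np.deg2rad(40.0)  # true azimuth: 40 deg
        t = np.arange(0, 0.5, 1 / fs)

        p = np.exp(2j * np.pi * f0 * t)       # analytic pressure signal
        vx, vy = np.cos(theta) * p, np.sin(theta) * p
        noise = lambda: 0.3 * (rng.normal(size=t.size)
                               + 1j * rng.normal(size=t.size))
        p, vx, vy = p + noise(), vx + noise(), vy + noise()

        # Complex acoustic intensity; azimuth from the active (real) parts
        Ix = np.mean(p * np.conj(vx))
        Iy = np.mean(p * np.conj(vy))
        print(np.rad2deg(np.arctan2(Iy.real, Ix.real)))  # ~ 40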

  10. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying the representations of action sequences constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from the perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864

  11. Mapping surrogate gasoline compositions into RON/MON space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Neal; Kraft, Markus; Smallbone, Andrew

    2010-06-15

    In this paper, new experimentally determined octane numbers (RON and MON) of blends of a tri-component surrogate consisting of toluene, n-heptane and i-octane (called toluene reference fuel, TRF) arranged in an augmented simplex design are used to derive a simple response surface model for the octane number of any arbitrary TRF mixture. The model is second-order in its complexity and is shown to be more accurate than the standard 'linear-by-volume' (LbV) model, which is often used when no other information is available. Such observations are due to the existence of both synergistic and antagonistic blending of the octane numbers between the three components. In particular, antagonistic blending of toluene and iso-octane leads to a maximum in sensitivity that lies on the toluene/iso-octane line. The model equations are inverted so as to map from RON/MON space back into composition space, enabling one to use two simple formulae to determine, for a given fuel with known RON and MON, the volume fractions of toluene, n-heptane and iso-octane to be blended in order to emulate that fuel. An HCCI engine running gasoline with a RON of 98.5 and a MON of 88 was simulated using a TRF fuel, blended according to the derived equations to match the RON and MON. The simulations matched the experimentally obtained pressure profiles well, especially when compared to simulations using only PRF fuels, which matched the RON or MON alone. This suggests that the mapping is accurate and that, to emulate a refinery gasoline, it is necessary to match not only the RON but also the MON of the fuel. (author)
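
    A second-order mixture response surface of this kind is the quadratic Scheffé model, ON = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j, fit by least squares to the blend measurements. The sketch below shows the fitting and prediction steps only; apart from the defining pure-component values (n-heptane RON 0, iso-octane RON 100), the blend numbers are hypothetical, so the fitted coefficients are not those of the paper.

        import numpy as np

        # Volume fractions (toluene, n-heptane, i-octane) and hypothetical RONs
        X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
                      [1/3, 1/3, 1/3]])
        ron = np.array([120.0, 0.0, 100.0, 65.0, 106.0, 50.0, 75.0])

        def scheffe_design(X):
            """Second-order Scheffé terms: x1, x2, x3, x1x2, x1x3, x2x3."""
            x1, x2, x3 = X.T
            return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])

        coef, *_ = np.linalg.lstsq(scheffe_design(X), ron, rcond=None)
        print(coef.round(1))                    # binary terms capture synergy
                                                # and antagonism in blending
        blend = np.array([[0.25, 0.25, 0.50]])  # an arbitrary TRF mixture
        print(scheffe_design(blend) @ coef)     # predicted octane number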

  12. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusive propagation of the electromagnetic field. Compared with the conventional Dirichlet boundary, the PML boundary is preferred because the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary, and offers advantages in computational time and memory over it. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.

  13. Matching of energetic, mechanic and control characteristics of positioning actuator

    NASA Astrophysics Data System (ADS)

    Nosova, N. Y.; Misyurin, S. Yu.; Kreinin, G. V.

    2017-12-01

    The problem of the preliminary choice of parameters for the power channel of an automated drive is discussed. The drive of a mechatronic complex is divided into two main units: power and control. The first determines the energy capabilities and, as a rule, the overall dimensions of the complex. Sufficient capacity of the power unit is a necessary condition for solving control tasks successfully without excessive complication of the control system structure. Preliminary selection of parameters is carried out based on the condition of providing the necessary drive power. The proposed approach is based on: studying a sufficiently detailed but not excessive dynamic model of the power unit with the help of a conditional test control system; transitioning to a normalized model with the formation of similarity criteria; and constructing the synthesis procedure.

  14. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
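
    The indexing idea can be mimicked with an off-the-shelf spatial index: build a tree over the descriptor database once, then answer k-NN queries in sub-linear time and turn the k-th neighbour distance into a k-NN density estimate. The sketch below uses scipy's cKDTree on random vectors (kd-trees lose their O(log N) behaviour in high dimensions, and the paper uses a specialised approximate index over its 3D scale-invariant features); the density is reported up to an additive constant, omitting the unit-ball volume term.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        db = rng.normal(size=(100_000, 64))    # stand-in feature descriptors
        tree = cKDTree(db)                     # built once, queried many times

        query = rng.normal(size=(5, 64))
        dist, idx = tree.query(query, k=10)    # 10 nearest neighbours each

        # k-NN density estimate: k / (N * volume of the k-NN ball)
        k, N, d = 10, db.shape[0], db.shape[1]
        log_density = np.log(k / N) - d * np.log(dist[:, -1])  # + constant
        print(idx[:, 0], log_density.round(1))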

  15. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.

  16. Unconditionally stable WLP-FDTD method for the modeling of electromagnetic wave propagation in gyrotropic materials.

    PubMed

    Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan

    2015-12-14

    The unconditional stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.

  17. A knowledge based approach to matching human neurodegenerative disease and animal models

    PubMed Central

    Maynard, Sarah M.; Mungall, Christopher J.; Lewis, Suzanna E.; Imam, Fahim T.; Martone, Maryann E.

    2013-01-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology (NDPO) and an associated Phenotype Knowledge Base (PKB) using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework (NIF) and qualities are drawn from the Phenotype and Trait Ontology (PATO). We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal ontologies provides considerable benefit for comparing phenotypes across scales and species. PMID:23717278

  18. Transfer of the nationwide Czech soil survey data to a foreign soil classification - generating input parameters for a process-based soil erosion modelling approach

    NASA Astrophysics Data System (ADS)

    Beitlerová, Hana; Hieke, Falk; Žížala, Daniel; Kapička, Jiří; Keiser, Andreas; Schmidt, Jürgen; Schindewolf, Marcus

    2017-04-01

    Process-based erosion modelling is a developing and adequate tool to assess, simulate and understand the complex mechanisms of soil loss due to surface runoff. While the current state of available models includes powerful approaches, a major drawback is given by complex parametrization. A major input parameter for the physically based soil loss and deposition model EROSION 3D is represented by soil texture. However, as the model has been developed in Germany it is dependent on the German soil classification. To exploit data generated during a massive nationwide soil survey campaign taking place in the 1960s across the entire Czech Republic, a transfer from the Czech to the German or at least an international (e.g. WRB) system is mandatory. During the survey the internal differentiation of grain sizes was realized in a two-fraction approach, separating texture solely into above and below 0.01 mm rather than into clayey, silty and sandy textures. Consequently, the Czech system applies a classification of seven different textures based on the respective percentages of large and small particles, while in Germany 31 groups are essential. The followed approach of matching Czech soil survey data to the German system focusses on semi-logarithmic interpolation of the cumulative soil texture curve and, additionally, on a regression equation based on a recent database of 128 soil pits. Furthermore, for each of the seven Czech texture classes a group of typically suitable classes of the German system was derived. A GIS-based spatial analysis to test the approaches of interpolating the soil texture was carried out. First results show promising matches and pave the way to a Czech model application of EROSION 3D.
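
    A minimal sketch of the semi-logarithmic interpolation step, assuming German KA5-style class boundaries (clay < 0.002 mm, silt 0.002-0.063 mm, sand 0.063-2 mm) and invented sieve data; the authors' regression equation and class-group mapping are not reproduced here.

      import numpy as np

      # Measured cumulative curve: mass fraction finer than each diameter (mm).
      diam_mm = np.array([0.001, 0.01, 0.05, 0.25, 2.0])   # hypothetical points
      cum_frac = np.array([0.08, 0.34, 0.61, 0.85, 1.00])  # hypothetical fractions

      # Interpolate linearly in log10(grain size) at the German-style boundaries.
      bounds_mm = np.array([0.002, 0.063])
      cum_at_bounds = np.interp(np.log10(bounds_mm), np.log10(diam_mm), cum_frac)

      clay = cum_at_bounds[0]
      silt = cum_at_bounds[1] - cum_at_bounds[0]
      sand = 1.0 - cum_at_bounds[1]
      print(f"clay {clay:.2f}, silt {silt:.2f}, sand {sand:.2f}")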

  19. Effect of Chunk Strength on the Performance of Children with Developmental Dyslexia on Artificial Grammar Learning Task May Be Related to Complexity

    ERIC Educational Resources Information Center

    Schiff, Rachel; Katan, Pesia; Sasson, Ayelet; Kahta, Shani

    2017-01-01

    There is a long-held view that chunks play a crucial role in artificial grammar learning performance. We compared chunk strength influences on performance, in high and low topological entropy (a measure of complexity) grammar systems, with dyslexic children, age-matched and reading-level-matched control participants. Findings show that age-matched…

  20. Modeling complex aquifer systems: a case study in Baton Rouge, Louisiana (USA)

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2017-05-01

    This study targets two challenges in groundwater model development: grid generation and model calibration for aquifer systems that are fluvial in origin. Realistic hydrostratigraphy can be developed using a large quantity of well log data to capture the complexity of an aquifer system. However, generating valid groundwater model grids to be consistent with the complex hydrostratigraphy is non-trivial. Model calibration can also become intractable for groundwater models that intend to match the complex hydrostratigraphy. This study uses the Baton Rouge aquifer system, Louisiana (USA), to illustrate a technical need to cope with grid generation and model calibration issues. A grid generation technique is introduced based on indicator kriging to interpolate 583 wireline well logs in the Baton Rouge area to derive a hydrostratigraphic architecture with fine vertical discretization. Then, an upscaling procedure is developed to determine a groundwater model structure with 162 layers that captures facies geometry in the hydrostratigraphic architecture. To handle model calibration for such a large model, this study utilizes a derivative-free optimization method in parallel computing to complete parameter estimation in a few months. The constructed hydrostratigraphy indicates the Baton Rouge aquifer system is fluvial in origin. The calibration result indicates hydraulic conductivity for Miocene sands is higher than that for Pliocene to Holocene sands and indicates the Baton Rouge fault and the Denham Springs-Scotlandville fault to be low-permeability leaky aquifers. The modeling result shows significantly low groundwater level in the "2,000-foot" sand due to heavy pumping, indicating potential groundwater upward flow from the "2,400-foot" sand.

  1. The galaxy-dark matter halo connection: which galaxy properties are correlated with the host halo mass?

    NASA Astrophysics Data System (ADS)

    Contreras, S.; Baugh, C. M.; Norberg, P.; Padilla, N.

    2015-09-01

    We demonstrate how the properties of a galaxy depend on the mass of its host dark matter subhalo, using two independent models of galaxy formation. For the cases of stellar mass and black hole mass, the median property value displays a monotonic dependence on subhalo mass. The slope of the relation changes for subhalo masses for which heating by active galactic nuclei becomes important. The median property values are predicted to be remarkably similar for central and satellite galaxies. The two models predict considerable scatter around the median property value, though the size of the scatter is model dependent. There is only modest evolution with redshift in the median galaxy property at a fixed subhalo mass. Properties such as cold gas mass and star formation rate, however, are predicted to have a complex dependence on subhalo mass. In these cases, subhalo mass is not a good indicator of the value of the galaxy property. We illustrate how the predictions in the galaxy property-subhalo mass plane differ from the assumptions made in some empirical models of galaxy clustering by reconstructing the model output using a basic subhalo abundance matching scheme. In its simplest form, abundance matching generally does not reproduce the clustering predicted by the models, typically resulting in an overprediction of the clustering signal. Using the predictions of the galaxy formation model for the correlations between pairs of galaxy properties, the basic abundance matching scheme can be extended to reproduce the model predictions more faithfully for a wider range of galaxy properties. Our results have implications for the analysis of galaxy clustering, particularly for low abundance samples.
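
    The "basic subhalo abundance matching scheme" referred to above can be sketched in a few lines: rank subhaloes by mass and galaxies by the chosen property, then pair them rank by rank (the usual no-scatter simplification). The masses below are synthetic placeholders, not model output.

      import numpy as np

      rng = np.random.default_rng(1)
      subhalo_mass = rng.lognormal(mean=12.0, sigma=0.5, size=10_000)
      stellar_mass = rng.lognormal(mean=10.5, sigma=0.6, size=10_000)

      # Most massive subhalo gets the most massive galaxy, and so on down.
      halo_rank = np.argsort(subhalo_mass)[::-1]
      galaxy_sorted = np.sort(stellar_mass)[::-1]

      assigned = np.empty_like(stellar_mass)
      assigned[halo_rank] = galaxy_sorted   # property assigned to each subhalo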

  2. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

    Previously, Badano et al. introduced a non-prewhitening matched filter (NPWMF) incorporating a model for the contrast sensitivity of the human visual system, for modeling human performance in detection tasks with different viewing angles and white-noise backgrounds. But NPWMF observers do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels, incorporating the model of human contrast sensitivity, was developed in a similar manner. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model. A scalar multiplier was applied to the Barten model and varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. Then we compared the model to the human observers at five different signal-intensity values in order to see if the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity closely predicts human performance as a function of signal intensity.
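
    The sketch below shows the generic CHO machinery with radial DOG channels; the contrast-sensitivity weighting that makes it a CS-CHO is omitted, and all channel parameters, the Gaussian signal, and the white-noise stand-in for lumpy backgrounds are assumptions for illustration.

      import numpy as np

      def dog_channels(n=64, n_ch=3, sigma0=0.015, alpha=1.67, q=1.4):
          """Radial difference-of-Gaussians channels built in frequency space."""
          f = np.fft.fftfreq(n)
          rho = np.hypot(*np.meshgrid(f, f))
          chans = []
          for j in range(n_ch):
              s = sigma0 * alpha ** j
              prof = np.exp(-0.5 * (rho / (q * s)) ** 2) - np.exp(-0.5 * (rho / s) ** 2)
              chans.append(np.real(np.fft.ifft2(prof)).ravel())
          return np.array(chans).T            # shape (n*n, n_ch)

      def cho_snr(imgs_absent, imgs_present, U):
          va, vp = imgs_absent @ U, imgs_present @ U   # channel outputs
          dv = vp.mean(axis=0) - va.mean(axis=0)
          K = 0.5 * (np.cov(va.T) + np.cov(vp.T))      # channel covariance
          return float(np.sqrt(dv @ np.linalg.solve(K, dv)))

      rng = np.random.default_rng(8)
      n = 64
      U = dog_channels(n)
      yy, xx = np.mgrid[:n, :n]
      signal = 0.3 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2 * 3.0 ** 2)).ravel()
      backgrounds = rng.normal(size=(200, n * n))      # toy random backgrounds
      print(cho_snr(backgrounds, backgrounds + signal, U))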

  3. Chain-Wise Generalization of Road Networks Using Model Selection

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker; then, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data set with many traffic roundabouts indicate the benefits of the proposed procedure.

  4. Reconstruction of Twist Torque in Main Parachute Risers

    NASA Technical Reports Server (NTRS)

    Day, Joshua D.

    2015-01-01

    The reconstruction of twist torque in the Main Parachute Risers of the Capsule Parachute Assembly System (CPAS) has been successfully used to validate the conservative twist torque equations of the CPAS Model Memo. Reconstruction of basic, one-degree-of-freedom drop tests was used to create a functional process for the evaluation of more complex, rigid-body simulation. The roll, pitch, and yaw of the body, the fly-out angles of the parachutes, and the relative location of the parachutes to the body are inputs to the torque simulation. The data collected by the Inertial Measurement Unit (IMU) were used to calculate the true torque. The simulation then used photogrammetric and IMU data as inputs into the Model Memo equations. The results were then compared to the true torque results to validate the Model Memo equations. The Model Memo parameters were based on steel risers, and the parameters will need to be re-evaluated for different materials. Photogrammetric data were found to be more accurate than the inertial data in accounting for the relative rotation between payload and cluster. The Model Memo equations were generally a good match and, when not matching, were generally conservative.

  5. Multiscale modelling for tokamak pedestals

    NASA Astrophysics Data System (ADS)

    Abel, I. G.

    2018-04-01

    Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics, or an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius and the low plasma β (the ratio of thermal to magnetic pressures) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.

  6. Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)

    PubMed Central

    2010-01-01

    Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rat and mouse lines have been generated in which a mutation similar to that occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these rodents do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and to attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887

  7. Drop formation, pinch-off dynamics and liquid transfer of simple and complex fluids

    NASA Astrophysics Data System (ADS)

    Dinic, Jelena; Sharma, Vivek

    Liquid transfer and drop formation processes underlying jetting, spraying, coating, and printing - inkjet, screen, roller-coating, gravure, nanoimprint hot embossing, 3D - often involve the formation of unstable columnar necks. Capillary-driven thinning of such necks and their pinch-off dynamics are determined by a complex interplay of inertial, viscous and capillary stresses for simple, Newtonian fluids. Micro-structural changes in response to the extensional flow field that arises within the thinning neck give rise to additional viscoelastic stresses in complex, non-Newtonian fluids. Using FLOW-3D, we simulate flows realized in prototypical geometries (dripping, and a liquid bridge stretched between two parallel plates) used for studying pinch-off dynamics and the influence of microstructure and viscoelasticity. In contrast with often-used 1D or 2D models, FLOW-3D allows a robust evaluation of the magnitude of the underlying stresses and of the extensional flow field (both its uniformity and magnitude). We find that the simulated radius evolution profiles match the pinch-off dynamics that are experimentally observed and theoretically predicted for model Newtonian fluids and complex fluids.

  8. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067
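
    A toy sketch of the one named ingredient, a Hebbian weight update with divisive normalization applied to random initial weights; the learning rate, sizes, and normalization choice are illustrative assumptions, not the model's actual rule.

      import numpy as np

      rng = np.random.default_rng(2)
      w = rng.random(100)                 # afferent weights of one model unit

      def hebbian_step(w, x, eta=0.01):
          y = float(w @ x)                # unit activation
          w = w + eta * y * x             # Hebb: strengthen co-active inputs
          return w / w.sum()              # normalize so weights stay bounded

      for _ in range(500):
          w = hebbian_step(w, rng.random(100))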

  9. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and on the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block-scale through their flow alignment. Thus, the block-scale transverse dispersivities remain in the similar magnitude as local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
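
    A short sketch of the temporal-moment (TM) ingredient: raw and central moments of block-wise particle arrival times, from which a mean arrival time and an effective spread can be read off. The arrival times are synthetic stand-ins for particle-tracking output, not data from the study.

      import numpy as np

      rng = np.random.default_rng(3)
      arrival_times = rng.lognormal(mean=2.0, sigma=0.7, size=5000)

      m0 = arrival_times.size          # zeroth moment: recovered particle count
      m1 = arrival_times.mean()        # first moment: mean arrival time
      m2c = arrival_times.var()        # second central moment: spread (tailing)
      print(f"mean arrival {m1:.2f}, variance {m2c:.2f}")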

  10. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device.

  11. Exploring resting-state EEG complexity before migraine attacks.

    PubMed

    Cao, Zehong; Lai, Kuan-Lin; Lin, Chin-Teng; Chuang, Chun-Hsiang; Chou, Chien-Chen; Wang, Shuu-Jiun

    2018-06-01

    Objective: Entropy-based approaches to understanding the temporal dynamics of complexity have revealed novel insights into various brain activities. Herein, electroencephalogram complexity before migraine attacks was examined using an inherent fuzzy entropy approach, allowing the development of an electroencephalogram-based classification model to recognize the difference between interictal and preictal phases. Methods: Forty patients with migraine without aura and 40 age-matched normal control subjects were recruited, and the resting-state electroencephalogram signals of their prefrontal and occipital areas were prospectively collected. The migraine phases were defined based on the headache diary, and the preictal phase was defined as within 72 hours before a migraine attack. Results: The electroencephalogram complexity of patients in the preictal phase, which resembled that of normal control subjects, was significantly higher than that of patients in the interictal phase in the prefrontal area (FDR-adjusted p < 0.05) but not in the occipital area. The measurement of test-retest reliability (n = 8) using the intra-class correlation coefficient was good, with r1 = 0.73 (p = 0.01). Furthermore, the classification model, a support vector machine, showed the highest accuracy (76 ± 4%) for classifying interictal and preictal phases using the prefrontal electroencephalogram complexity. Conclusion: Entropy-based analytical methods identified enhancement or "normalization" of frontal electroencephalogram complexity during the preictal phase compared with the interictal phase. This classification model, using this complexity feature, may have the potential to provide a preictal alert to patients who have migraine without aura.
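
    The pipeline can be sketched as follows, with a generic fuzzy-entropy measure standing in for the paper's "inherent fuzzy entropy" (whose exact definition is not given here) and simulated epochs in place of EEG recordings; m, r, n, and the toy class difference are all assumptions.

      import numpy as np
      from sklearn.svm import SVC

      def fuzzy_entropy(x, m=2, r=0.2, n=2):
          x = (x - x.mean()) / x.std()         # tolerance r is relative to std
          def phi(m):
              tpl = np.array([x[i:i + m] for i in range(len(x) - m)])
              tpl = tpl - tpl.mean(axis=1, keepdims=True)   # remove baselines
              d = np.abs(tpl[:, None, :] - tpl[None, :, :]).max(-1)
              mu = np.exp(-(d ** n) / r)       # fuzzy (exponential) membership
              np.fill_diagonal(mu, 0.0)
              return mu.sum() / (len(tpl) * (len(tpl) - 1))
          return np.log(phi(m) / phi(m + 1))

      rng = np.random.default_rng(4)
      t = np.arange(512)
      epochs = rng.normal(size=(80, 512))              # 80 toy EEG epochs
      epochs[40:] += 2.0 * np.sin(2 * np.pi * t / 32)  # more regular "class 1"
      labels = np.r_[np.zeros(40), np.ones(40)]

      X = np.array([[fuzzy_entropy(e)] for e in epochs])
      clf = SVC(kernel="rbf").fit(X, labels)
      print(clf.score(X, labels))                      # toy training accuracy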

  12. Ring modulators with enhanced efficiency based on standing-wave operation on a field-matched, interdigitated p-n junction.

    PubMed

    Pavanello, Fabio; Zeng, Xiaoge; Wade, Mark T; Popović, Miloš A

    2016-11-28

    We propose ring modulators based on interdigitated p-n junctions that exploit standing rather than traveling-wave resonant modes to improve modulation efficiency, insertion loss and speed. Matching the longitudinal nodes and antinodes of a standing-wave mode with high (contacts) and low (depletion regions) carrier-density regions, respectively, simultaneously lowers loss and significantly increases sensitivity. This approach further permits relaxing optical constraints on contact placement and can lead to lower device capacitance. Such structures are well-matched to fabrication in advanced microelectronics CMOS processes. Device architectures that exploit this concept are presented along with their benefits and drawbacks. A temporal coupled mode theory model is used to investigate the static and dynamic response. We show that modulation efficiencies or loss Q factors up to 2 times higher than in previous traveling-wave geometries can be achieved, leading to much larger extinction ratios. Finally, we discuss more complex doping geometries that can improve carrier dynamics for higher modulation speeds in this context.

  13. Laparoscopic approach is feasible in Crohn's complex enterovisceral fistulas: a case-match review.

    PubMed

    Beyer-Berjot, Laura; Mancini, Julien; Bege, Thierry; Moutardier, Vincent; Brunet, Christian; Grimaud, Jean-Charles; Berdah, Stéphane

    2013-02-01

    Complex enterovisceral fistulas are internal fistulas joining a "diseased" organ to any intra-abdominal "victim" organ, with the exception of ileoileal fistulas. Few publications have addressed laparoscopic surgery for complex fistulas in Crohn's disease. The aim of this study was to evaluate the feasibility of such an approach. This study is a retrospective, case-match review. This study was conducted at a tertiary academic hospital. : All patients who underwent a laparoscopic ileocecal resection for complex enterovisceral fistulas between January 2004 and August 2011 were included. They were matched to a control group undergoing operation for nonfistulizing Crohn's disease according to age, sex, nutritional state, preoperative use of steroids, and type of resection performed. Matching was performed blind to the peri- and postoperative results of each patient. The 2 groups were compared in terms of operative time, conversion to open surgery, morbidity and mortality rates, and length of stay. Eleven patients presenting with 13 complex fistulas were included and matched with 22 controls. Group 1 contained 5 ileosigmoid fistulas (38%), 3 ileotransverse fistulas (23%), 3 ileovesical fistulas (23%), 1 colocolic fistula (8%), and 1 ileosalpingeal fistula (8%). There were no significant differences between the groups in terms of operative time (120 (range, 75-270) vs 120 (range, 50-160) minutes, p = 0.65), conversion to open surgery (9% vs 0%, p = 0.33), stoma creation (9% vs 14%, p = 1), global postoperative morbidity (18% vs 32%, p = 0.68), and major complications (Dindo III: 0% vs 9%, p = 0.54; Dindo IV: 0% vs 0%, p = 1), as well as in terms of length of stay (8 (range, 7-32) vs 9 (range, 5-17) days, p = 0.72). No patients died. This is a retrospective review with a small sample size. A laparoscopic approach for complex fistulas is feasible in Crohn's disease, with outcomes similar to those reported for nonfistulizing forms.

  14. Simulating the Composite Propellant Manufacturing Process

    NASA Technical Reports Server (NTRS)

    Williamson, Suzanne; Love, Gregory

    2000-01-01

    There is a strategic interest in understanding how the propellant manufacturing process contributes to military capabilities outside the United States. The paper will discuss how system dynamics (SD) has been applied to rapidly assess the capabilities and vulnerabilities of a specific composite propellant production complex. These facilities produce a commonly used solid propellant with military applications. The authors will explain how an SD model can be configured to match a specific production facility followed by a series of scenarios designed to analyze operational vulnerabilities. By using the simulation model to rapidly analyze operational risks, the analyst gains a better understanding of production complexities. There are several benefits of developing SD models to simulate chemical production. SD is an effective tool for characterizing complex problems, especially the production process where the cascading effect of outages quickly taxes common understanding. By programming expert knowledge into an SD application, these tools are transformed into a knowledge management resource that facilitates rapid learning without requiring years of experience in production operations. It also permits the analyst to rapidly respond to crisis situations and other time-sensitive missions. Most importantly, the quantitative understanding gained from applying the SD model lends itself to strategic analysis and planning.

  15. Comparison of four modeling tools for the prediction of potential distribution for non-indigenous weeds in the United States

    USGS Publications Warehouse

    Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony

    2018-01-01

    This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The modeling tool itself had low statistical significance, while weed species alone accounted for 69.1 and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting potential distribution for a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and testing both new and experienced users under blind test conditions that approximate operational conditions.

  16. Beat Keeping in a Sea Lion As Coupled Oscillation: Implications for Comparative Understanding of Human Rhythm.

    PubMed

    Rouse, Andrew A; Cook, Peter F; Large, Edward W; Reichmuth, Colleen

    2016-01-01

    Human capacity for entraining movement to external rhythms - i.e., beat keeping - is ubiquitous, but its evolutionary history and neural underpinnings remain a mystery. Recent findings of entrainment to simple and complex rhythms in non-human animals pave the way for a novel comparative approach to assess the origins and mechanisms of rhythmic behavior. The most reliable non-human beat keeper to date is a California sea lion, Ronan, who was trained to match head movements to isochronous repeating stimuli and showed spontaneous generalization of this ability to novel tempos and to the complex rhythms of music. Does Ronan's performance rely on the same neural mechanisms as human rhythmic behavior? In the current study, we presented Ronan with simple rhythmic stimuli at novel tempos. On some trials, we introduced "perturbations," altering either tempo or phase in the middle of a presentation. Ronan quickly adjusted her behavior following all perturbations, recovering her consistent phase and tempo relationships to the stimulus within a few beats. Ronan's performance was consistent with predictions of mathematical models describing coupled oscillation: a model relying solely on phase coupling strongly matched her behavior, and the model was further improved with the addition of period coupling. These findings are the clearest evidence yet for parity in human and non-human beat keeping and support the view that the human ability to perceive and move in time to rhythm may be rooted in broadly conserved neural mechanisms.
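
    A minimal sketch of the model class tested: a single phase-coupled oscillator that entrains to a metronome through a sine coupling term and re-locks after a mid-trial phase perturbation. Parameter values are illustrative, not the study's fitted values, and the period-coupling extension is omitted.

      import numpy as np

      dt, steps = 0.01, 4000
      f_stim, f_move = 2.0, 1.7          # stimulus tempo vs intrinsic tempo (Hz)
      K = 2.5                            # phase-coupling strength
      phase, stim = 0.0, 0.0
      err = np.empty(steps)

      for t in range(steps):
          if t == steps // 2:
              stim += np.pi / 2          # mid-trial phase perturbation
          stim += 2 * np.pi * f_stim * dt
          phase += (2 * np.pi * f_move + K * np.sin(stim - phase)) * dt
          err[t] = np.angle(np.exp(1j * (stim - phase)))   # wrapped phase error

      # err returns to its pre-perturbation locked value within a few beats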

  17. Deuterium Labeling Strategies for Creating Contrast in Structure-Function Studies of Model Bacterial Outer Membranes Using Neutron Reflectometry.

    PubMed

    Le Brun, Anton P; Clifton, Luke A; Holt, Stephen A; Holden, Peter J; Lakey, Jeremy H

    2016-01-01

    Studying the outer membrane of Gram-negative bacteria is challenging due to the complex nature of its structure. Therefore, simplified models are required to undertake structure-function studies of processes that occur at the outer membrane/fluid interface. Model membranes can be created by immobilizing bilayers to solid supports such as gold or silicon surfaces, or as monolayers on a liquid support where the surface pressure and fluidity of the lipids can be controlled. Both model systems are amenable to having their structure probed by neutron reflectometry, a technique that provides a one-dimensional depth profile through a membrane detailing its thickness and composition. One of the strengths of neutron scattering is the ability to use contrast matching, allowing molecules containing hydrogen and those enriched with deuterium to be highlighted or matched out against the bulk isotopic composition of the solvent. Lipopolysaccharides, a major component of the outer membrane, can be isolated for incorporation into model membranes. Here, we describe the deuteration of lipopolysaccharides from rough strains of Escherichia coli for incorporation into model outer membranes, and how the use of deuterated materials enhances structural analysis of model membranes by neutron reflectometry. © 2016 Elsevier Inc. All rights reserved.

  18. Social Network Supported Process Recommender System

    PubMed Central

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as aids to process modeling. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of social network research technologies for process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed subsequently. Finally, experimental evaluations and future work are presented. PMID:24672309

  19. Tackling the challenges of matching biomedical ontologies.

    PubMed

    Faria, Daniel; Pesquita, Catia; Mott, Isabela; Martins, Catarina; Couto, Francisco M; Cruz, Isabel F

    2018-01-15

    Biomedical ontologies pose several challenges to ontology matching due both to the complexity of the biomedical domain and to the characteristics of the ontologies themselves. The biomedical tracks in the Ontology Matching Evaluation Initiative (OAEI) have spurred the development of matching systems able to tackle these challenges, and benchmarked their general performance. In this study, we dissect the strategies employed by matching systems to tackle the challenges of matching biomedical ontologies and gauge the impact of the challenges themselves on matching performance, using the AgreementMakerLight (AML) system as the platform for this study. We demonstrate that the linear complexity of the hash-based searching strategy implemented by most state-of-the-art ontology matching systems is essential for matching large biomedical ontologies efficiently. We show that accounting for all lexical annotations (e.g., labels and synonyms) in biomedical ontologies leads to a substantial improvement in F-measure over using only the primary name, and that accounting for the reliability of different types of annotations generally also leads to a marked improvement. Finally, we show that cross-references are a reliable source of information and that, when using biomedical ontologies as background knowledge, it is generally more reliable to use them as mediators than to perform lexical expansion. We anticipate that translating traditional matching algorithms to the hash-based searching paradigm will be a critical direction for the future development of the field. Improving the evaluation carried out in the biomedical tracks of the OAEI will also be important, as without proper reference alignments there is only so much that can be ascertained about matching systems or strategies. Nevertheless, it is clear that, to tackle the various challenges posed by biomedical ontologies, ontology matching systems must be able to efficiently combine multiple strategies into a mature matching approach.
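
    The hash-based searching strategy discussed above can be illustrated in a few lines: normalized labels and synonyms of both ontologies go into hash maps, so candidate mappings come from key intersection in overall linear time rather than O(n*m) pairwise comparison. The toy ontologies below are placeholders, not OAEI data or AML code.

      def lexical_index(ontology):
          index = {}
          for class_id, names in ontology.items():
              for name in names:                    # labels and synonyms alike
                  index.setdefault(name.strip().lower(), set()).add(class_id)
          return index

      onto_a = {"A:01": ["Heart", "Cor"], "A:02": ["Lung"]}
      onto_b = {"B:10": ["heart"], "B:20": ["pulmonary organ", "lung"]}

      ia, ib = lexical_index(onto_a), lexical_index(onto_b)
      mappings = [(ca, cb) for key in ia.keys() & ib.keys()
                  for ca in ia[key] for cb in ib[key]]
      print(mappings)   # e.g. [('A:01', 'B:10'), ('A:02', 'B:20')]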

  20. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize high-precision azimuth angle estimation using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and reduces computational complexity. PMID:28230763
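
    A hedged sketch of the core idea: matched-filter each AVS channel with the known transmit replica, then take the azimuth from the averaged pressure-velocity intensity components. The signal model, noise level, and 37-degree bearing are made-up test values, and this is not the authors' exact estimator.

      import numpy as np

      rng = np.random.default_rng(5)
      fs, f0, dur = 50_000, 5_000, 0.02
      t = np.arange(int(fs * dur)) / fs
      replica = np.sin(2 * np.pi * f0 * t)          # known transmitted pulse

      theta_true = np.deg2rad(37.0)
      sig = np.tile(replica, 3)
      p = sig + 0.5 * rng.normal(size=sig.size)                        # pressure
      vx = np.cos(theta_true) * sig + 0.5 * rng.normal(size=sig.size)  # velocity x
      vy = np.sin(theta_true) * sig + 0.5 * rng.normal(size=sig.size)  # velocity y

      def mf(x):                                    # matched filter per channel
          return np.convolve(x, replica[::-1], mode="same")

      ix = np.mean(mf(p) * mf(vx))                  # avg intensity components
      iy = np.mean(mf(p) * mf(vy))
      print(np.rad2deg(np.arctan2(iy, ix)))         # close to 37 degrees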

  1. Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Neil P.; Sheffler, William; Sawaya, Michael R.

    2015-09-17

    We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.

  2. Translating concepts of complexity to the field of ergonomics.

    PubMed

    Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P; Rafferty, Laura

    2010-10-01

    Since 1958 more than 80 journal papers from the mainstream ergonomics literature have used either the words 'complex' or 'complexity' in their titles. Of those, more than 90% have been published in only the past 20 years. This observation communicates something interesting about the way in which contemporary ergonomics problems are being understood. The study of complexity itself derives from non-linear mathematics but many of its core concepts have found analogies in numerous non-mathematical domains. Set against this cross-disciplinary background, the current paper aims to provide a similar initial mapping to the field of ergonomics. In it, the ergonomics problem space, complexity metrics and powerful concepts such as emergence raise complexity to the status of an important contingency factor in achieving a match between ergonomics problems and ergonomics methods. The concept of relative predictive efficiency is used to illustrate how this match could be achieved in practice. What is clear overall is that a major source of, and solution to, complexity are the humans in systems. Understanding complexity on its own terms offers the potential to leverage disproportionate effects from ergonomics interventions and to tighten up the often loose usage of the term in the titles of ergonomics papers. STATEMENT OF RELEVANCE: This paper reviews and discusses concepts from the study of complexity and maps them to ergonomics problems and methods. It concludes that humans are a major source of and solution to complexity in systems and that complexity is a powerful contingency factor, which should be considered to ensure that ergonomics approaches match the true nature of ergonomics problems.

  3. Inter-image matching

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Juday, R. D.

    1982-01-01

    Interimage matching is the process of determining the geometric transformation required to spatially conform one image to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary either to attempt an a priori transformation reducing the complexity of the residual transformation, or to subdivide the image into match zones (control points or patches) small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
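
    A small sketch of the per-patch translation search described above, using normalized cross-correlation (one of the "measures of sameness" mentioned) over a window of candidate offsets; the synthetic images and the 5-pixel search radius are assumptions for the example.

      import numpy as np

      def ncc(a, b):
          a = a - a.mean(); b = b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

      rng = np.random.default_rng(6)
      img1 = rng.random((200, 200))
      img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))   # known shift to recover

      y0, x0, h, w, s = 80, 80, 32, 32, 5                # patch and search radius
      patch = img1[y0:y0 + h, x0:x0 + w]
      best = max(((dy, dx) for dy in range(-s, s + 1) for dx in range(-s, s + 1)),
                 key=lambda d: ncc(patch, img2[y0 + d[0]:y0 + d[0] + h,
                                               x0 + d[1]:x0 + d[1] + w]))
      print(best)   # recovers (3, -2) as the local translation offset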

  4. A network flow model for load balancing in circuit-switched multicomputers

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1990-01-01

    In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
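
    The formulation can be sketched with networkx's min-cost flow solver: sources carry negative demand (excess load), sinks positive demand (deficit), and unit link capacities encode the no-contention constraint. The 4-node topology below is a toy assumption; the paper's models encode the actual mesh or hypercube routing.

      import networkx as nx

      G = nx.DiGraph()
      # Negative demand = source (excess load), positive = sink (deficit).
      G.add_node("s1", demand=-2)
      G.add_node("s2", demand=-1)
      G.add_node("t1", demand=2)
      G.add_node("t2", demand=1)
      # capacity=1 per link models contention: at most one message per link.
      links = [("s1", "t1"), ("s1", "s2"), ("s2", "t1"), ("s2", "t2"), ("s1", "t2")]
      for u, v in links:
          G.add_edge(u, v, capacity=1, weight=1)

      flow = nx.min_cost_flow(G)   # feasible iff imbalance removable w/o contention
      print(flow)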

  5. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking under complex environments. This study presents a novel method with the appearance model described by double templates based on timed motion history image with HSV color histogram features (tMHI-HSV). The main components include offline template and online template initialization, tMHI-HSV-based candidate patch feature histogram calculation, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate these candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even in illumination-change and occlusion visual environments. PMID:24592185
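
    A minimal sketch of the HSV appearance-model ingredient, comparing template and candidate histograms with the Bhattacharyya distance; the tMHI motion segmentation and the double-template update logic are omitted, and the image file names are placeholders.

      import cv2

      def hsv_hist(bgr_patch):
          hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
          return cv2.normalize(hist, hist).flatten()

      template = hsv_hist(cv2.imread("target.png"))        # offline template
      candidate = hsv_hist(cv2.imread("candidate.png"))    # candidate patch

      # 0 = identical distributions; smaller is a better match.
      d = cv2.compareHist(template, candidate, cv2.HISTCMP_BHATTACHARYYA)
      print(d)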

  6. Robust model-based 3D/3D fusion using sparse matching for minimally invasive surgery.

    PubMed

    Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan

    2013-01-01

    Classical surgery is being disrupted by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm CT and C-arm fluoroscopy are routinely used for intra-operative guidance. However, intra-operative modalities have limited image quality of the soft tissue and a reliable assessment of the cardiac anatomy can only be made by injecting contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a novel sparse matching approach for fusing high quality pre-operative CT and non-contrasted, non-gated intra-operative C-arm CT by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the pre-operative CT and mapped to the intra-operative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments demonstrate that our model-based fusion approach has an average execution time of 2.9 s, while the accuracy lies within expert user confidence intervals.

  7. Matching–centrality decomposition and the forecasting of new links in networks

    PubMed Central

    Rohr, Rudolf P.; Naisbit, Russell E.; Mazza, Christian; Bersier, Louis-Félix

    2016-01-01

    Networks play a prominent role in the study of complex systems of interacting entities in biology, sociology, and economics. Despite this diversity, we demonstrate here that a statistical model decomposing networks into matching and centrality components provides a comprehensive and unifying quantification of their architecture. The matching term quantifies the assortative structure in which node makes links with which other node, whereas the centrality term quantifies the number of links that nodes make. We show, for a diverse set of networks, that this decomposition can provide a tight fit to observed networks. Then we provide three applications. First, we show that the model allows very accurate prediction of missing links in partially known networks. Second, when node characteristics are known, we show how the matching–centrality decomposition can be related to this external information. Consequently, it offers us a simple and versatile tool to explore how node characteristics explain network architecture. Finally, we demonstrate the efficiency and flexibility of the model to forecast the links that a novel node would create if it were to join an existing network. PMID:26842568

  8. Discovering functional interdependence relationship in PPI networks for protein complex identification.

    PubMed

    Lam, Winnie W M; Chan, Keith C C

    2012-04-01

    Protein molecules interact with each other in protein complexes to perform many vital functions, and different computational techniques have been developed to identify protein complexes in protein-protein interaction (PPI) networks. These techniques are developed to search for subgraphs of high connectivity in PPI networks under the assumption that the proteins in a protein complex are highly interconnected. While these techniques have been shown to be quite effective, the matching rate between the protein complexes they discover and those previously determined experimentally can be relatively low, and the "false-alarm" rate can be relatively high. This is especially the case when the assumption that proteins in protein complexes are more highly interconnected is relatively invalid. To increase the matching rate and reduce the false-alarm rate, we have developed a technique that can work effectively without having to make this assumption. The technique, called protein complex identification by discovering functional interdependence (PCIFI), searches for protein complexes in PPI networks by taking into consideration both the functional interdependence relationship between protein molecules and the topology of the network. The PCIFI works in several steps. The first step is to construct a multiple-function protein network graph by labeling each vertex with one or more of the molecular functions it performs. The second step is to filter out protein interactions between protein pairs that are not functionally interdependent of each other in the statistical sense. The third step is to make use of an information-theoretic measure to determine the strength of the functional interdependence between all remaining interacting protein pairs. Finally, the last step is to try to form protein complexes based on the measure of the strength of functional interdependence and the connectivity between proteins. For performance evaluation, PCIFI was used to identify protein complexes in real PPI network data and the protein complexes it found were matched against those previously known in MIPS. The results show that PCIFI can be an effective technique for the identification of protein complexes. The protein complexes it found can match more known protein complexes with a smaller false-alarm rate and can provide useful insights into the understanding of the functional interdependence relationships between proteins in protein complexes.

  9. A fast complex domain-matching pursuit algorithm and its application to deep-water gas reservoir detection

    NASA Astrophysics Data System (ADS)

    Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei

    2017-12-01

    The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness and large burial depth. Faced with these complex geological features, hydrocarbon detection technology is a direct indication of changes in hydrocarbon reservoirs and a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. Therefore, we study the complex domain-matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter. Its introduction not only decomposes the seismic signal with high accuracy and high efficiency but also reduces the number of iterations. We also integrate jumping search with ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment has demonstrated the validity of our method. Based on the low-frequency-domain reflection coefficient in fluid-saturated porous media, we finally obtain an approximation formula for the mobility attributes of reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict the deep-water gas-bearing sands of the M oil field in the South China Sea. The results are consistent with the actual well test results, and our method can help inform the future exploration of deep-water gas reservoirs.
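
    A generic matching-pursuit sketch over a real-valued Gabor dictionary, included to illustrate the greedy decomposition that CDMP accelerates; the explicit scale parameter mirrors the modification described above, but this is not the authors' complex-domain implementation, and all dictionary parameters are assumptions.

      import numpy as np

      n = 256
      t = np.arange(n)
      atoms, params = [], []
      for scale in (8, 16, 32, 64):                    # the added scale parameter
          for freq in np.linspace(0.02, 0.4, 20):
              for center in range(0, n, 8):
                  g = np.exp(-0.5 * ((t - center) / scale) ** 2) \
                      * np.cos(2 * np.pi * freq * t)
                  atoms.append(g / np.linalg.norm(g))
                  params.append((scale, freq, center))
      D = np.array(atoms)                              # dictionary, unit-norm rows

      signal = np.exp(-0.5 * ((t - 128) / 16.0) ** 2) * np.cos(2 * np.pi * 0.1 * t)
      residual, decomposition = signal.copy(), []
      for _ in range(5):                               # 5 greedy iterations
          proj = D @ residual
          k = int(np.argmax(np.abs(proj)))             # best-matching atom
          decomposition.append((params[k], proj[k]))
          residual -= proj[k] * D[k]
      print(np.linalg.norm(residual) / np.linalg.norm(signal))   # shrinks fast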

  10. Structure-Based Characterization of Multiprotein Complexes

    PubMed Central

    Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J.

    2014-01-01

    Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. PMID:24954616

  11. A high powered radar interference mitigation technique for communications signal recovery with fpga implementation

    DTIC Science & Technology

    2017-03-01

    Subject terms: parameter estimation; matched-filter detection; QPSK; radar; interference; LSE; cyber; electronic warfare. … The signal is routed through a maximum-likelihood detector (MLD), which is a bank of four filters matched to the four symbols of the QPSK constellation. … The bank of filters matched to each of the QPSK symbols is used to demodulate the signal after cancellation. The matched filters are defined as the complex …
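
    From the fragments above, the detector is a bank of four filters matched to the QPSK symbols followed by a maximum-likelihood decision. A minimal sketch under assumed conditions (rectangular pulse shape, perfect symbol timing, additive Gaussian noise); the report's actual pulse shaping and interference-cancellation stage are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 8                                              # samples per symbol (assumed)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # QPSK constellation

# Transmit a random symbol stream with a rectangular pulse (assumed shape).
tx_idx = rng.integers(0, 4, size=100)
tx = np.repeat(symbols[tx_idx], sps)
rx = tx + 0.3 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Bank of four matched filters: each is the conjugated, time-reversed
# pulse shape weighted by the corresponding symbol.
pulse = np.ones(sps)
outputs = np.stack([
    np.convolve(rx, np.conj(s * pulse)[::-1]).real for s in symbols
])

# ML decision: sample each filter output at the symbol boundaries and
# pick the branch with the largest output (valid for equal-energy symbols).
samples = outputs[:, sps - 1::sps][:, :len(tx_idx)]
decisions = samples.argmax(axis=0)
print("symbol error rate:", np.mean(decisions != tx_idx))
```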

  12. Compact continuum brain model for human electroencephalogram

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Shin, H.-B.; Robinson, P. A.

    2007-12-01

    A low-dimensional, compact brain model has recently been developed based on a physiologically based mean-field continuum formulation of the electrical activity of the brain. The essential feature of the new compact model is a second-order time-delayed differential equation with physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings agree well with the previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.

  13. [Construction and validation of a three-dimensional finite element model of cranio-maxillary complex with sutures in unilateral cleft lip and palate patient].

    PubMed

    Wu, Zhi-fang; Lei, Yong-hua; Li, Wen-jie; Liao, Sheng-hui; Zhao, Zi-jin

    2013-02-01

    To explore an effective method to construct and validate a finite element model of the unilateral cleft lip and palate (UCLP) craniomaxillary complex with sutures, which could be applied in further three-dimensional finite element analysis (FEA). One male patient aged 9 with complete left cleft lip and palate was selected, and a CT scan of the skull was taken at 0.75 mm intervals. The CT data were saved in DICOM format and imported into Mimics 10.0 to generate a three-dimensional anatomic model. Geomagic Studio 12.0 was then used to match, smooth and transfer the anatomic model into a CAD model with NURBS patches. Next, 12 circum-maxillary sutures were integrated into the CAD model in SolidWorks (2011 version). Finally, meshing was performed with E-feature Biomedical Modeler, yielding a three-dimensional finite element model with sutures. A maxillary protraction force (500 g per side, 20° downward and forward from the occlusal plane) was applied. Displacement and stress distribution of some important craniofacial structures were measured and compared with the results of related research in the literature. A three-dimensional finite element model of the UCLP craniomaxillary complex with 12 sutures was established from the CT scan data. This simulation model consisted of 206,753 individual elements with 260,662 nodes, a more precise simulation and a better representation of the human craniomaxillary complex than the formerly available FEA models. By comparison, this model was proved to be valid. Establishing the three-dimensional finite element model of the UCLP craniomaxillary complex with sutures from CT images is effective with the help of the following software packages: Mimics 10.0, Geomagic Studio 12.0, SolidWorks and E-feature Biomedical Modeler.

  14. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    ERIC Educational Resources Information Center

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  15. Classification of ligand molecules in PDB with fast heuristic graph match algorithm COMPLIG.

    PubMed

    Saito, Mihoko; Takemura, Naomi; Shirai, Tsuyoshi

    2012-12-14

    A fast heuristic graph-matching algorithm, COMPLIG, was devised to classify the small-molecule ligands in the Protein Data Bank (PDB), which are currently not properly classified on a structural basis. By concurrently classifying proteins and ligands, we determined the most appropriate parameter for categorizing ligands to be more than 60% identity of atoms and bonds between molecules, and we classified 11,585 types of ligands into 1,946 clusters. Although the large clusters were composed of nucleotides or amino acids, a significant presence of drug compounds was also observed. Application of the system to classify the natural-ligand status of human proteins in the current database suggested that, at most, 37% of the experimental structures of human proteins were in complex with natural ligands. However, protein homology- and/or ligand similarity-based modeling was implied to provide models of natural interactions for an additional 28% of the total, which might be used to increase the knowledge of intrinsic protein-metabolite interactions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. The effects of particulate air pollution on daily deaths: a multi-city case crossover analysis

    PubMed Central

    Schwartz, J

    2004-01-01

    Background: Numerous studies have reported that day-to-day changes in particulate air pollution are associated with day-to-day changes in deaths. Recently, several reports have indicated that the software used to control for season and weather in some of these studies had deficiencies. Aims: To investigate the use of the case-crossover design as an alternative. Methods: This approach compares the exposure of each case to their exposure on a nearby day, when they did not die. Hence it controls for seasonal patterns and for all slowly varying covariates (age, smoking, etc.) by matching rather than by complex modelling. A key feature is that temperature can also be controlled by matching. This approach was applied to a study of 14 US cities. Weather and day of the week were controlled for in the regression. Results: A 10 µg/m3 increase in PM10 was associated with a 0.36% increase in daily deaths from internal causes (95% CI 0.22% to 0.50%). Results were little changed if, instead of symmetrical sampling of control days, the time-stratified method was applied, when control days were matched on temperature, or when more lags of wintertime temperatures were used. Similar results were found using a Poisson regression, but the case-crossover method has the advantages of simplicity in modelling and of combining matched strata across multiple locations in a single-stage analysis. Conclusions: Despite the considerable differences in analytical design, the previously reported associations of particles with mortality persisted in this study. The association appeared quite linear. Case-crossover designs represent an attractive method to control for season and weather by matching. PMID:15550600
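
    A minimal sketch of the time-stratified referent selection mentioned in the results; the paper's exact referent scheme may differ in detail, and the subsequent exposure contrast would typically be estimated with conditional logistic regression over the matched strata.

```python
from datetime import date, timedelta

def time_stratified_controls(case_day: date):
    """Control days for a case-crossover analysis: all other days in the
    same month and year that fall on the same day of the week as the case
    day. Matching on month and weekday controls for season, long-term
    trend and weekly patterns by design rather than by modelling."""
    controls = []
    d = date(case_day.year, case_day.month, 1)
    while d.month == case_day.month:
        if d.weekday() == case_day.weekday() and d != case_day:
            controls.append(d)
        d += timedelta(days=1)
    return controls

# Each case contributes one stratum: PM10 exposure on the death day is
# contrasted with exposure on its control days.
print(time_stratified_controls(date(1995, 7, 14)))
```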

  17. Cytogenetic landscape of paired neurospheres and traditional monolayer cultures in pediatric malignant brain tumors.

    PubMed

    Zhao, Xiumei; Zhao, Yi-Jue; Lin, Qi; Yu, Litian; Liu, Zhigang; Lindsay, Holly; Kogiso, Mari; Rao, Pulivarthi; Li, Xiao-Nan; Lu, Xinyan

    2015-07-01

    New therapeutic targets are needed to eliminate cancer stem cells (CSCs). We hypothesize that direct comparison of paired CSCs and nonstem tumor cells (NSTCs) will facilitate identification of primary "driver" chromosomal aberrations that can serve as diagnostic markers and/or therapeutic targets. We applied spectral karyotyping and G-banding to matched pairs of neurospheres (CSC-enriched cultures) and fetal bovine serum-based monolayer cultures (enriched with NSTCs) from 16 patient-derived orthotopic xenograft mouse models, including 9 medulloblastomas (MBs) and 7 high-grade gliomas (HGGs), followed by direct comparison of their numerical and structural abnormalities. Chromosomal aberrations were detected in neurospheres of all 16 models, and 82.0% of numerical and 82.4% of structural abnormalities were maintained in their matching monolayer cultures. Among the shared abnormalities, recurrent clonal changes were identified, including gain of chromosomes 18 and 7 and loss of chromosome 10/10q (5/16 models), isochromosome 17q in 2 MBs, and a new breakpoint of 13q14 in 3 HGGs. Chromothripsis-like evidence was also observed in 3 HGG pairs. Additionally, we noted 20 numerical and 15 structural aberrations that were lost from the neurospheres and found 26 numerical and 23 structural aberrations that were only present in the NSTCs. Compared with MBs, the neurosphere karyotypes of HGGs were more complex, with fewer chromosomal aberrations preserved in their matching NSTCs. Self-renewing CSCs in MBs and pediatric HGGs harbor recurrent numerical and structural aberrations that were maintained in the matching monolayer cultures. These primary chromosomal changes may represent new markers for anti-CSC therapies. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Design and Characterization of an Acoustically and Structurally Matched 3-D-Printed Model for Transcranial Ultrasound Imaging.

    PubMed

    Bai, Chen; Ji, Meiling; Bouakaz, Ayache; Zong, Yujin; Wan, Mingxi

    2018-05-01

    For investigating human transcranial ultrasound imaging (TUI) through the temporal bone, an intact human skull is needed. Because an intact skull is difficult and expensive to obtain, experiments must be performed without excision or abrasion of the skull. Besides, to mimic blood circulation for the vessel target, the cellulose tubes generally used for vessel simulation offer only straight, linear features. These issues, which limit experimental studies, can be overcome by designing a 3-D-printed skull model with acoustic and dimensional properties that match a real skull, together with a vessel model containing curves and a bifurcation. First, the optimal printing material matching a real skull in terms of acoustic attenuation coefficient and sound propagation velocity was identified at 2-MHz frequency: 7.06 dB/mm and 2168.71 m/s for the skull versus 6.98 dB/mm and 2114.72 m/s for the printed material, respectively. After modeling, the average thickness of the temporal bone in the printed skull was about 1.8 mm, compared with 1.7 mm in the real skull. Then, a vascular phantom was designed with 3-D-printed vessels of low acoustic attenuation (0.6 dB/mm). It was covered with porcine brain tissue contained within a transparent polyacrylamide gel. After characterizing the acoustic consistency, based on the designed skull model and vascular phantom, vessels with inner diameters of 1 and 0.7 mm were distinguished by resolution-enhanced imaging at low frequency. Measurement and imaging results proved that the model and phantom are authentic and viable alternatives, and they will be of interest for TUI, high-intensity focused ultrasound, and other therapy studies.

  19. Radiative decay engineering 5: metal-enhanced fluorescence and plasmon emission

    PubMed Central

    Lakowicz, Joseph R.

    2009-01-01

    Metallic particles and surfaces display diverse and complex optical properties. Examples include the intense colors of noble metal colloids, surface plasmon resonance absorption by thin metal films, and quenching of excited fluorophores near the metal surfaces. Recently, the interactions of fluorophores with metallic particles and surfaces (metals) have been used to obtain increased fluorescence intensities, to develop assays based on fluorescence quenching by gold colloids, and to obtain directional radiation from fluorophores near thin metal films. For metal-enhanced fluorescence it is difficult to predict whether a particular metal structure, such as a colloid, fractal, or continuous surface, will quench or enhance fluorescence. In the present report, we suggest how the effects of metals on fluorescence can be explained using a simple concept, based on radiating plasmons (RPs). The underlying physics may be complex but the concept is simple to understand. According to the RP model, the emission or quenching of a fluorophore near the metal can be predicted from the optical properties of the metal structures as calculated from electrodynamics, Mie theory, and/or Maxwell’s equations. For example, according to Mie theory and the size and shape of the particle, the extinction of metal colloids can be due to either absorption or scattering. Incident energy is dissipated by absorption. Far-field radiation is created by scattering. Based on our model, small colloids are expected to quench fluorescence because absorption is dominant over scattering. Larger colloids are expected to enhance fluorescence because the scattering component is dominant over absorption. The ability of a metal’s surface to absorb or reflect light is due to wavenumber matching requirements at the metal–sample interface. Wavenumber matching considerations can also be used to predict whether fluorophores at a given distance from a continuous planar surface will be emitted or quenched. These considerations suggest that the so-called “lossy surface waves” which quench fluorescence are due to induced electron oscillations which cannot radiate to the far-field because wavevector matching is not possible. We suggest that the energy from the fluorophores thought to be lost by lossy surface waves can be recovered as emission by adjustment of the sample to allow wavevector matching. The RP model provides a rational approach for designing fluorophore–metal configurations with the desired emissive properties and a basis for nanophotonic fluorophore technology. PMID:15691498
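
    The RP model's size argument can be made concrete in the small-particle (Rayleigh) limit, where the absorption efficiency grows linearly with the size parameter while the scattering efficiency grows with its fourth power. A sketch, assuming a rough relative refractive index for silver near 500 nm; quantitative predictions require full Mie theory.

```python
import numpy as np

def rayleigh_efficiencies(radius_nm, wavelength_nm, m):
    """Small-particle (Rayleigh) absorption and scattering efficiencies:
    Q_abs = 4 x Im[(m^2-1)/(m^2+2)], Q_sca = (8/3) x^4 |(m^2-1)/(m^2+2)|^2.
    Only illustrates the size dependence the RP model relies on."""
    x = 2 * np.pi * radius_nm / wavelength_nm        # size parameter (vacuum)
    ll = (m**2 - 1) / (m**2 + 2)                     # Lorenz-Lorentz factor
    return 4 * x * ll.imag, (8 / 3) * x**4 * abs(ll) ** 2

m_silver = 0.05 + 3.0j   # rough relative index of silver near 500 nm (assumed)
for r in (5, 20, 60):
    qa, qs = rayleigh_efficiencies(r, 500, m_silver)
    print(f"r = {r:3d} nm: Q_abs = {qa:.5f}, Q_sca = {qs:.5f}")
```

    Running the sketch shows absorption dominating for the smallest radius and scattering overtaking it as the radius grows, matching the prediction that small colloids quench while large colloids enhance fluorescence.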

  20. Detecting drawdowns masked by environmental stresses with water-level models

    USGS Publications Warehouse

    Garcia, C.A.; Halford, K.J.; Fenelon, J.M.

    2013-01-01

    Detecting and quantifying small drawdown at observation wells distant from the pumping well greatly expands the characterized aquifer volume. However, this detection is often obscured by water level fluctuations such as barometric and tidal effects. A reliable analytical approach for distinguishing drawdown from nonpumping water-level fluctuations is presented and tested here. Drawdown is distinguished by analytically simulating all pumping and nonpumping water-level stresses simultaneously during the period of record. Pumping signals are generated with Theis models, where the pumping schedule is translated into water-level change with the Theis solution. This approach closely matched drawdowns simulated with a complex three-dimensional, hypothetical model and reasonably estimated drawdowns from an aquifer test conducted in a complex hydrogeologic system. Pumping-induced changes generated with a numerical model and analytical Theis model agreed (RMS as low as 0.007 m) in cases where pumping signals traveled more than 1 km across confining units and fault structures. Maximum drawdowns of about 0.05 m were analytically estimated from field investigations where environmental fluctuations approached 0.2 m during the analysis period.
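
    A minimal sketch of the Theis-model building block described above: a pumping schedule is translated into drawdown by superposing the Theis solution for each rate change, with the well function W(u) evaluated as the exponential integral E1(u). The aquifer parameters and schedule are illustrative.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(t_hr, r_m, T_m2d, S, schedule):
    """Drawdown (m) at radius r from a well, superposing step changes in
    pumping rate. `schedule` is a list of (start_hr, rate_m3d) steps;
    the Theis well function W(u) equals the exponential integral E1(u),
    with u = r^2 S / (4 T (t - t0))."""
    t = np.atleast_1d(t_hr) / 24.0                  # convert hours to days
    s = np.zeros_like(t, dtype=float)
    prev_q = 0.0
    for start_hr, q in schedule:
        t0, dq = start_hr / 24.0, q - prev_q
        active = t > t0
        u = r_m**2 * S / (4 * T_m2d * (t[active] - t0))
        s[active] += dq / (4 * np.pi * T_m2d) * exp1(u)
        prev_q = q
    return s

# Pump at 500 m^3/d for 48 h, then shut off; observe 1 km away.
times = np.array([1, 12, 48, 96, 240])              # hours since pump start
sched = [(0, 500.0), (48, 0.0)]
print(theis_drawdown(times, 1000.0, 100.0, 1e-4, sched))
```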

  1. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
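
    A minimal sketch of such a generalized integrate-and-fire neuron: linear subthreshold dynamics for the voltage, an adaptive threshold and one firing-induced current, plus update rules applied at threshold crossing. The integration here is simple forward Euler rather than the exact between-spike solution the model admits, and all parameter values are illustrative rather than those of the paper.

```python
import numpy as np

def glif(I_ext, dt=1e-4, C=1e-9, g=5e-8, E_L=-0.07,
         a=0.005, b=10.0, theta_inf=-0.05, k=200.0, A1=1e-11):
    """Toy generalized leaky integrate-and-fire neuron (SI units)."""
    V, Theta, I1 = E_L, theta_inf, 0.0
    spikes, trace = [], []
    for i, I in enumerate(I_ext):
        dV = (-g * (V - E_L) + I + I1) / C                 # linear membrane eq.
        dTheta = a * (V - E_L) - b * (Theta - theta_inf)   # adaptive threshold
        V += dt * dV
        Theta += dt * dTheta
        I1 -= dt * k * I1                                  # induced current decays
        if V >= Theta:                # threshold crossing: apply update rules
            spikes.append(i * dt)
            V = E_L                   # voltage reset
            Theta += 0.005            # threshold jumps up, then decays back
            I1 += A1                  # firing-induced current increments
        trace.append(V)
    return np.array(trace), spikes

trace, spikes = glif(np.full(20000, 2e-9))   # 2 s of a 2 nA current step
print(f"{len(spikes)} spikes, first at {spikes[0]*1e3:.1f} ms")
```

    With these illustrative values the neuron fires tonically with spike-frequency adaptation; varying the update rules (rather than the linear equations) is what moves the model between the spiking and bursting regimes the abstract describes.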

  2. Delayed matching to two-picture samples by individuals with and without disabilities: an analysis of the role of naming.

    PubMed

    Gutowski, Stanley J; Stromer, Robert

    2003-01-01

    Delayed matching to complex, two-picture samples (e.g., cat-dog) may be improved when the samples occasion differential verbal behavior. In Experiment 1, individuals with mental retardation matched picture comparisons to identical single-picture samples or to two-picture samples, one of which was identical to a comparison. Accuracy scores were typically high on single-picture trials under both simultaneous and delayed matching conditions. Scores on two-picture trials were also high during the simultaneous condition but were lower during the delay condition. However, scores improved on delayed two-picture trials when each of the sample pictures was named aloud before comparison responding. Experiment 2 replicated these results with preschoolers with typical development and a youth with mental retardation. Sample naming also improved the preschoolers' matching when the samples were pairs of spoken names and the correct comparison picture matched one of the names. Collectively, the participants could produce the verbal behavior that might have improved performance, but typically did not do so unless the procedure required it. The success of the naming intervention recommends it for improving the observing and remembering of multiple elements of complex instructional stimuli.

  3. Outcomes Following Three-Factor Inactive Prothrombin Complex Concentrate Versus Recombinant Activated Factor VII Administration During Cardiac Surgery.

    PubMed

    Harper, Patrick C; Smith, Mark M; Brinkman, Nathan J; Passe, Melissa A; Schroeder, Darrell R; Said, Sameh M; Nuttall, Gregory A; Oliver, William C; Barbara, David W

    2018-02-01

    To compare outcomes following inactive prothrombin complex concentrate (PCC) or recombinant activated factor VII (rFVIIa) administration during cardiac surgery. Retrospective propensity-matched analysis. Academic tertiary-care center. Patients undergoing cardiac surgery requiring cardiopulmonary bypass who received either rFVIIa or the inactive 3-factor PCC. Outcomes following intraoperative administration of rFVIIa (n = 263) or factor IX complex (n = 72) as rescue therapy to treat bleeding. In the 24 hours after surgery, propensity-matched patients receiving PCC versus rFVIIa had significantly lower chest tube output (median difference -464 mL, 95% confidence interval [CI] -819 mL to -110 mL), fresh frozen plasma transfusion rates (17% v 38%, p = 0.028), and platelet transfusion rates (26% v 49%, p = 0.027). There were no significant differences between propensity-matched groups in postoperative stroke, deep venous thrombosis, pulmonary embolism, myocardial infarction, or intracardiac thrombus. Postoperative dialysis was significantly less likely in patients administered PCC versus rFVIIa following propensity matching (odds ratio = 0.3, 95% CI 0.1-0.7). No significant difference in 30-day mortality in patients receiving PCC versus rFVIIa was present following propensity matching. Use of rFVIIa versus inactive PCCs was significantly associated with renal failure requiring dialysis and increased postoperative bleeding and transfusions. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Fast history matching of time-lapse seismic and production data for high resolution models

    NASA Astrophysics Data System (ADS)

    Jimenez Arismendi, Eduardo Antonio

    Integrated reservoir modeling has become an important part of day-to-day decision analysis in oil and gas management practices. A very attractive and promising technology is the use of time-lapse or 4D seismic as an essential component in subsurface modeling. Today, 4D seismic is enabling oil companies to optimize production and increase recovery through monitoring fluid movements throughout the reservoir. 4D seismic advances are also being driven by an increased need by the petroleum engineering community to become more quantitative and accurate in its ability to monitor reservoir processes. Qualitative interpretations of time-lapse anomalies are being replaced by quantitative inversions of 4D seismic data to produce accurate maps of fluid saturations, pore pressure, and temperature, among others. Of all the steps involved in this subsurface modeling process, the most demanding is integrating the geologic model with dynamic field data, including 4D seismic when available. The validation of the geologic model with observed dynamic data is accomplished through a "history matching" (HM) process typically carried out with well-based measurements. Due to the low resolution of production data, the validation process is severely limited in its reservoir areal coverage, compromising the quality of the model and any subsequent predictive exercise. This research aims to provide a novel history matching approach that can use information from high-resolution seismic data to supplement the areally sparse production data. The proposed approach utilizes streamline-derived sensitivities as a means of relating the forward model performance with the prior geologic model. The essential ideas underlying this approach are similar to those used for high-frequency approximations in seismic wave propagation. In both cases, this leads to solutions that are defined along "streamlines" (fluid flow) or "rays" (seismic wave propagation). Synthetic and field data examples are used extensively to demonstrate the value and contribution of this work. Our results show that the problem of non-uniqueness in this complex history matching problem is greatly reduced when constraints in the form of saturation maps from spatially closely sampled seismic data are included. Furthermore, our methodology can be used to quickly identify discrepancies between static and dynamic modeling. Reducing this gap will ensure robust and reliable models, leading to accurate predictions and ultimately optimum hydrocarbon extraction.

  5. Structure-based characterization of multiprotein complexes.

    PubMed

    Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J

    2014-07-08

    Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Effect of chunk strength on the performance of children with developmental dyslexia on artificial grammar learning task may be related to complexity.

    PubMed

    Schiff, Rachel; Katan, Pesia; Sasson, Ayelet; Kahta, Shani

    2017-07-01

    There is a long-held view that chunks play a crucial role in artificial grammar learning performance. We compared the influence of chunk strength on performance in grammar systems of high and low topological entropy (a measure of complexity) with dyslexic children, age-matched and reading-level-matched control participants. Findings show that age-matched control participants' performance reflected an equivalent influence of chunk strength in the two topological entropy conditions, as typically found in artificial grammar learning experiments. By contrast, the performance of dyslexic children and reading-level-matched controls reflected knowledge of chunk strength only under the low topological entropy condition. In the high topological entropy grammar system, they appeared completely unable to utilize chunk strength to make appropriate test item selections. In line with previous research, this study suggests that for typically developing children, it is the chunks that are attended to during artificial grammar learning and create a foundation on which implicit associative learning mechanisms operate, and these chunks are unitized to different strengths. However, for children with dyslexia, it is complexity that may influence the subsequent memorability of chunks, independently of their strength.

  7. ReMatch: a web-based tool to construct, store and share stoichiometric metabolic models with carbon maps for metabolic flux analysis.

    PubMed

    Pitkänen, Esa; Akerlund, Arto; Rantanen, Ari; Jouhten, Paula; Ukkonen, Esko

    2008-08-25

    ReMatch is a web-based, user-friendly tool that constructs stoichiometric network models for metabolic flux analysis, integrating user-developed models into a database collected from several comprehensive metabolic data resources, including KEGG, MetaCyc and ChEBI. In particular, ReMatch augments the metabolic reactions of the model with carbon mappings to facilitate (13)C metabolic flux analysis. The construction of a network model consisting of biochemical reactions is the first step in most metabolic modelling tasks. This model construction can be a tedious task, as the required information is usually scattered across many separate databases whose interoperability is suboptimal, due to the heterogeneous naming conventions of metabolites in different databases. Another, particularly severe data integration problem is faced in (13)C metabolic flux analysis, where the mappings of carbon atoms from substrates into products in the model are required. ReMatch has been developed to solve the above data integration problems. First, ReMatch matches the imported user-developed model against the internal ReMatch database while considering a comprehensive metabolite name thesaurus. This, together with wild card support, allows the user to specify the model quickly without having to look the names up manually. Second, ReMatch is able to augment reactions of the model with carbon mappings, obtained either from the internal database or given by the user with an easy-to-use tool. The constructed models can be exported into 13C-FLUX and SBML file formats. Further, a stoichiometric matrix and visualizations of the network model can be generated. The constructed models of metabolic networks can optionally be made available to the other users of ReMatch. Thus, ReMatch provides a common repository of metabolic network models with carbon mappings for the needs of the metabolic flux analysis community. ReMatch is freely available for academic use at http://www.cs.helsinki.fi/group/sysfys/software/rematch/.

  8. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    NASA Astrophysics Data System (ADS)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene but may perform variably when different matching cost algorithms are used. In this paper, two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable, with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
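
    A minimal sketch of the Census-based matching cost: each pixel is encoded by sign comparisons with its neighbourhood, and the cost of a candidate disparity is the Hamming distance between the left and right codes. The window size, disparity range and synthetic image pair are illustrative; SGM would subsequently aggregate this cost volume along scanline paths.

```python
import numpy as np

def census(img, w=2):
    """Census transform: encode each pixel by comparing it with its
    (2w+1)x(2w+1) neighbourhood; returns one bit plane per neighbour."""
    h, wdt = img.shape
    pad = np.pad(img, w, mode="edge")
    bits = []
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[w + dy:w + dy + h, w + dx:w + dx + wdt]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)           # (H, W, 24) for a 5x5 window

def census_cost(left, right, max_disp):
    """Hamming distance between census codes for each candidate disparity;
    this is the cost volume SGM would aggregate."""
    cl, cr = census(left), census(right)
    h, w, nbits = cl.shape
    cost = np.full((h, w, max_disp), nbits, dtype=np.int32)
    for d in range(max_disp):
        cost[:, d:, d] = (cl[:, d:] != cr[:, :w - d]).sum(-1)
    return cost

rng = np.random.default_rng(1)
right = rng.integers(0, 255, (32, 64)).astype(np.uint8)
left = np.roll(right, 3, axis=1)             # synthetic 3-pixel shift
disp = census_cost(left, right, 8).argmin(-1)
print("median disparity:", np.median(disp[:, 8:]))   # expected: 3
```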

  9. Evaluating Flight Crew Operator Manual Documentation

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Feary, Michael

    1998-01-01

    Aviation and cognitive science researchers have identified situations in which the pilot's expectations for the behavior of the avionics are not matched by the actual behavior of the avionics. Researchers have attributed these "automation surprises" to the complexity of the avionics mode logic, the absence of complete training, limitations in cockpit displays, and ad-hoc conceptual models of the avionics. Complete canonical rule-based descriptions of the behavior of the autopilot provide the basis for understanding the perceived complexity of the autopilots, the differences between the pilot's and autopilot's conceptual models, and the limitations in training materials and cockpit displays. This paper compares the behavior of the autopilot Vertical Speed/Flight Path Angle (VS-FPA) mode as described in the Flight Crew Operators Manual (FCOM) and the actual behavior of the VS-FPA mode defined in the autopilot software. This example demonstrates the use of the Operational Procedure Model (OPM) as a method for using the requirements specification for the design of the software logic as information requirements for training.

  10. Field-scale prediction of enhanced DNAPL dissolution based on partitioning tracers.

    PubMed

    Wang, Fang; Annable, Michael D; Jawitz, James W

    2013-09-01

    The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a tetrachloroethylene (PCE)-contaminated dry cleaner site, located in Jacksonville, Florida. The EST model is an analytical solution with field-measurable input parameters. Measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ ethanol flood. In addition, a simulated partitioning tracer test from a calibrated, three-dimensional, spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The EST ethanol prediction based on both the field partitioning tracer test and the simulation closely matched the total recovery well field ethanol data with Nash-Sutcliffe efficiency E=0.96 and 0.90, respectively. The EST PCE predictions showed a peak shift to earlier arrival times for models based on either field-measured or simulated partitioning tracer tests, resulting in poorer matches to the field PCE data in both cases. The peak shifts were concluded to be caused by well screen interval differences between the field tracer test and ethanol flood. Both the EST model and UTCHEM were also used to predict PCE aqueous dissolution under natural gradient conditions, which has a much less complex flow pattern than the forced-gradient double five spot used for the ethanol flood. The natural gradient EST predictions based on parameters determined from tracer tests conducted with a complex flow pattern underestimated the UTCHEM-simulated natural gradient total mass removal by 12% after 170 pore volumes of water flushing, indicating that some mass was not detected by the tracers, likely due to stagnation zones in the flow field. These findings highlight the important influence of well configuration and the associated flow patterns on dissolution. © 2013.

  11. Field-scale prediction of enhanced DNAPL dissolution based on partitioning tracers

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Annable, Michael D.; Jawitz, James W.

    2013-09-01

    The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a tetrachloroethylene (PCE)-contaminated dry cleaner site, located in Jacksonville, Florida. The EST model is an analytical solution with field-measurable input parameters. Measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ ethanol flood. In addition, a simulated partitioning tracer test from a calibrated, three-dimensional, spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The EST ethanol prediction based on both the field partitioning tracer test and the simulation closely matched the total recovery well field ethanol data with Nash-Sutcliffe efficiency E = 0.96 and 0.90, respectively. The EST PCE predictions showed a peak shift to earlier arrival times for models based on either field-measured or simulated partitioning tracer tests, resulting in poorer matches to the field PCE data in both cases. The peak shifts were concluded to be caused by well screen interval differences between the field tracer test and ethanol flood. Both the EST model and UTCHEM were also used to predict PCE aqueous dissolution under natural gradient conditions, which has a much less complex flow pattern than the forced-gradient double five spot used for the ethanol flood. The natural gradient EST predictions based on parameters determined from tracer tests conducted with a complex flow pattern underestimated the UTCHEM-simulated natural gradient total mass removal by 12% after 170 pore volumes of water flushing, indicating that some mass was not detected by the tracers, likely due to stagnation zones in the flow field. These findings highlight the important influence of well configuration and the associated flow patterns on dissolution.

  12. Genetic Signatures of Exceptional Longevity in Humans

    PubMed Central

    Sebastiani, Paola; Solovieff, Nadia; DeWan, Andrew T.; Walsh, Kyle M.; Puca, Annibale; Hartley, Stephen W.; Melista, Efthymia; Andersen, Stacy; Dworkis, Daniel A.; Wilk, Jemma B.; Myers, Richard H.; Steinberg, Martin H.; Montano, Monty; Baldwin, Clinton T.; Hoh, Josephine; Perls, Thomas T.

    2012-01-01

    Like most complex phenotypes, exceptional longevity is thought to reflect a combined influence of environmental (e.g., lifestyle choices, where we live) and genetic factors. To explore the genetic contribution, we undertook a genome-wide association study of exceptional longevity in 801 centenarians (median age at death 104 years) and 914 genetically matched healthy controls. Using these data, we built a genetic model that includes 281 single nucleotide polymorphisms (SNPs) and discriminated between cases and controls of the discovery set with 89% sensitivity and specificity, and with 58% specificity and 60% sensitivity in an independent cohort of 341 controls and 253 genetically matched nonagenarians and centenarians (median age 100 years). Consistent with the hypothesis that the genetic contribution is largest at the oldest ages, the sensitivity of the model increased in the independent cohort with older and older ages (71% to classify subjects with an age at death > 102 and 85% to classify subjects with an age at death > 105). For further validation, we applied the model to an additional, unmatched 60 centenarians (median age 107 years), resulting in 78% sensitivity, and 2863 unmatched controls, with 61% specificity. The 281 SNPs include the SNP rs2075650 in TOMM40/APOE, which reached irrefutable genome-wide significance (posterior probability of association = 1) and replicated in the independent cohort. Removal of this SNP from the model reduced the accuracy by only 1%. Further in silico analysis suggests that 90% of centenarians can be grouped into clusters characterized by different “genetic signatures” of varying predictive values for exceptional longevity. The correlation between 3 signatures and 3 different life spans was replicated in the combined replication sets. The different signatures may help dissect this complex phenotype into sub-phenotypes of exceptional longevity. PMID:22279548

  13. Dynamical System Modeling to Simulate Donor T Cell Response to Whole Exome Sequencing-Derived Recipient Peptides Demonstrates Different Alloreactivity Potential in HLA-Matched and -Mismatched Donor-Recipient Pairs.

    PubMed

    Abdul Razzaq, Badar; Scalora, Allison; Koparde, Vishal N; Meier, Jeremy; Mahmood, Musa; Salman, Salman; Jameson-Lee, Max; Serrano, Myrna G; Sheth, Nihar; Voelkner, Mark; Kobulnicky, David J; Roberts, Catherine H; Ferreira-Gonzalez, Andrea; Manjili, Masoud H; Buck, Gregory A; Neale, Michael C; Toor, Amir A

    2016-05-01

    Immune reconstitution kinetics and subsequent clinical outcomes in HLA-matched recipients of allogeneic stem cell transplantation (SCT) are variable and difficult to predict. Considering SCT as a dynamical system may allow sequence differences across the exomes of the transplant donors and recipients to be used to simulate an alloreactive T cell response, which may enable better prediction of clinical outcomes. To accomplish this, whole exome sequencing was performed on 34 HLA-matched SCT donor-recipient pairs (DRPs) and the nucleotide sequence differences translated to peptides. The binding affinity of the peptides to the relevant HLA in each DRP was determined. The resulting array of peptide-HLA binding affinity values in each patient was considered as an operator modifying a hypothetical T cell repertoire vector, in which each T cell clone proliferates in accordance with the logistic equation of growth. Using an iterating system of matrices, each simulated T cell clone's growth was calculated, with the steady-state population being proportional to the magnitude of the binding affinity of the driving HLA-peptide complex. Incorporating competition between T cell clones responding to different HLA-peptide complexes reproduces, in the simulated repertoire, a number of features of clinically observed T cell clonal repertoires, including sigmoidal growth kinetics of individual T cell clones and the overall repertoire, a Power Law clonal frequency distribution, increasing repertoire complexity over time with increasing clonal diversity, and alteration of clonal dominance when a different antigen array is encountered, such as in SCT. The simulated alloreactive T cell repertoire was markedly different across HLA-matched DRPs. The patterns were differentiated by the rate of growth and steady-state magnitude of the simulated T cell repertoire and demonstrate a possible correlation with survival. In conclusion, exome-wide sequence differences in DRPs may allow simulation of the donor alloreactive T cell response to recipient antigens and may provide a quantitative basis for refining donor selection and titration of immunosuppression after SCT. Copyright © 2016 American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
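
    A minimal sketch of the repertoire-simulation idea, assuming logistic growth per clone with a carrying capacity proportional to the HLA-peptide binding affinity and a uniform inter-clonal competition term. The affinity distribution, rates and competition form are illustrative, not those fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clones = 200
# Each simulated clone responds to one HLA-peptide complex; its carrying
# capacity is set proportional to the predicted binding affinity (assumed
# log-normally distributed across the exome-derived peptides).
affinity = rng.lognormal(mean=0.0, sigma=1.0, size=n_clones)
K = 1e4 * affinity / affinity.max()

r = 0.5                                  # per-step growth rate (assumed)
alpha = 0.05                             # strength of inter-clonal competition
N = np.ones(n_clones)                    # each clone starts from a single cell
for _ in range(200):                     # iterate the coupled logistic map
    crowding = (N + alpha * (N.sum() - N)) / K
    N = np.clip(N + r * N * (1 - crowding), 0.0, None)

ranked = np.sort(N)[::-1]
print("largest clone:", round(ranked[0]),
      "| clones above 1% of the largest:", int((ranked > 0.01 * ranked[0]).sum()))
```

    Even this toy version reproduces the qualitative behavior described above: high-affinity clones grow sigmoidally to dominance while competition suppresses the rest, yielding a strongly skewed clonal frequency distribution.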

  14. A novel CFS-PML boundary condition for transient electromagnetic simulation using a fictitious wave domain method

    NASA Astrophysics Data System (ADS)

    Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou

    2017-01-01

    In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex-frequency-shifted perfectly matched layer (CFS-PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent, we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can produce spurious reflections and large errors in the FWD solutions, especially at low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained with our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in the FWD for TEM diffusion problems works well even in complex high-contrast conductivity models.
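
    The abstract does not spell out its parameterization, but a CFS-PML is conventionally specified through a complex coordinate-stretching factor of the following standard form (the paper's exact variant may differ):

```latex
% Standard CFS-PML complex coordinate-stretching factor:
%   kappa_x >= 1 scales the real stretch, sigma_x >= 0 sets the absorption,
%   and the frequency shift alpha_x >= 0 moves the pole away from omega = 0.
s_x(\omega) \;=\; \kappa_x \;+\; \frac{\sigma_x}{\alpha_x + \mathrm{i}\,\omega\,\varepsilon_0},
\qquad
\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{s_x}\,\frac{\partial}{\partial x}
\quad \text{inside the layer.}
```

    Setting kappa_x = 1 and alpha_x = 0 recovers the classical PML; a nonzero alpha_x is what improves the absorption of low-frequency, grazing-incidence energy, consistent with the late-time improvements reported above.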

  15. Characterizing Adsorption Performance of Granular Activated Carbon with Permittivity.

    PubMed

    Yang, Yang; Shi, Chao; Zhang, Yi; Ye, Jinghua; Zhu, Huacheng; Huang, Kama

    2017-03-07

    A number of studies have reached the consensus that microwave thermal technology can regenerate granular activated carbon (GAC) more efficiently and with lower energy consumption than other technologies. In the microwave heating industry in particular, permittivity is a crucial parameter. This paper develops two equivalent models to establish the relationship between the effective complex permittivity and the pore volume of GAC, based on Maxwell-Garnett approximation (MGA) theory. With two different assumptions in the model, two quantitative expressions were derived. Permittivity measurements and Brunauer-Emmett-Teller (BET) testing were used in the experiments. The results confirmed the two expressions, which were extremely similar, and the theoretical and experimental curves matched. This paper thus establishes a bridge linking the effective complex permittivity and the pore volume of GAC. Furthermore, it provides a potential and convenient method for rapidly characterizing the adsorption performance of GAC.
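
    For reference, a minimal sketch of the Maxwell-Garnett mixing rule that both models start from, treating air-filled pores as spherical inclusions in a carbon host; the host permittivity and porosity values are illustrative, and the paper's pore-volume expressions follow from its two additional assumptions.

```python
def maxwell_garnett(eps_matrix, eps_inclusion, f):
    """Maxwell-Garnett effective permittivity for spherical inclusions of
    volume fraction f embedded in a host matrix:
        eps_eff = eps_m * (1 + 2 f b) / (1 - f b),
        b = (eps_i - eps_m) / (eps_i + 2 eps_m)."""
    b = (eps_inclusion - eps_matrix) / (eps_inclusion + 2 * eps_matrix)
    return eps_matrix * (1 + 2 * f * b) / (1 - f * b)

# Illustrative host: carbon skeleton with a small dielectric loss;
# inclusions: air-filled pores (eps = 1).
eps_carbon = 12.0 - 2.0j
for porosity in (0.2, 0.4, 0.6):
    print(porosity, maxwell_garnett(eps_carbon, 1.0 + 0j, porosity))
```

    As the pore volume fraction grows, both the real part and the loss of the effective permittivity fall, which is the qualitative trend the two models quantify.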

  16. Characterizing Adsorption Performance of Granular Activated Carbon with Permittivity

    PubMed Central

    Yang, Yang; Shi, Chao; Zhang, Yi; Ye, Jinghua; Zhu, Huacheng; Huang, Kama

    2017-01-01

    A number of studies have reached the consensus that microwave thermal technology can regenerate granular activated carbon (GAC) more efficiently and with lower energy consumption than other technologies. In the microwave heating industry in particular, permittivity is a crucial parameter. This paper develops two equivalent models to establish the relationship between the effective complex permittivity and the pore volume of GAC, based on Maxwell-Garnett approximation (MGA) theory. With two different assumptions in the model, two quantitative expressions were derived. Permittivity measurements and Brunauer–Emmett–Teller (BET) testing were used in the experiments. The results confirmed the two expressions, which were extremely similar, and the theoretical and experimental curves matched. This paper thus establishes a bridge linking the effective complex permittivity and the pore volume of GAC. Furthermore, it provides a potential and convenient method for rapidly characterizing the adsorption performance of GAC. PMID:28772628

  17. Robust position estimation of a mobile vehicle

    NASA Astrophysics Data System (ADS)

    Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia

    1994-11-01

    The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial, running in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.

  18. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic descriptions of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  19. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  20. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity via the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima and result in poor matching accuracy. We therefore propose a new motion estimation algorithm using the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
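
    A minimal sketch of diamond search with a movable origin: the key idea of FADS is to start the large-diamond/small-diamond descent from a prediction derived from the neighboring blocks' motion vectors instead of from (0, 0). The frame pair below is smoothed synthetic data so that the UESA roughly holds; block size and search patterns follow the standard DS definition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

LDSP = [(0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, y, x, dy, dx, B=16):
    """Sum of absolute differences between the current block at (y, x)
    and the reference block displaced by (dy, dx); inf if out of frame."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + B > ref.shape[0] or xx + B > ref.shape[1]:
        return np.inf
    return np.abs(cur[y:y+B, x:x+B] - ref[yy:yy+B, xx:xx+B]).sum()

def diamond_search(cur, ref, y, x, origin=(0, 0)):
    """Large-diamond descent followed by one small-diamond refinement.
    FADS would set `origin` from the neighbours' motion vectors, which is
    what rescues large-motion cases."""
    best, best_cost = origin, sad(cur, ref, y, x, *origin)
    while True:                        # repeat LDSP until the centre wins
        centre = best
        for dy, dx in LDSP:
            mv = (centre[0] + dy, centre[1] + dx)
            c = sad(cur, ref, y, x, *mv)
            if c < best_cost:
                best, best_cost = mv, c
        if best == centre:
            break
    for dy, dx in SDSP:                # final SDSP refinement
        mv = (best[0] + dy, best[1] + dx)
        c = sad(cur, ref, y, x, *mv)
        if c < best_cost:
            best, best_cost = mv, c
    return best

rng = np.random.default_rng(3)
ref = gaussian_filter(rng.standard_normal((64, 64)), sigma=4)  # smooth frame,
cur = np.roll(ref, (4, -3), axis=(0, 1))  # so the error surface is ~unimodal
print(diamond_search(cur, ref, 24, 24))   # expected motion vector: (-4, 3)
```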

  1. Th-1 polarization is regulated by dendritic-cell comparison of MHC class I and class II antigens

    PubMed Central

    Xing, Dongxia; Li, Sufang; Robinson, Simon N.; Yang, Hong; Steiner, David; Komanduri, Krishna V.; Shpall, Elizabeth J.

    2009-01-01

    In the control of T-helper type I (Th-1) polarization, dendritic cells (DCs) must interpret a complex array of stimuli, many of which are poorly understood. Here we demonstrate that Th-1 polarization is heavily influenced by DC-autonomous phenomena triggered by the loading of DCs with antigenically matched major histocompatibility complex (MHC) class I and class II determinants, that is, class I and II peptide epitopes exhibiting significant amino acid sequence overlap (such as would be physiologically present during infectious processes requiring Th-1 immunity for clearance). Data were derived from 13 independent antigenic models including whole-cell systems, single-protein systems, and 3 different pairs of overlapping class I and II binding epitopes. Once loaded with matched class I and II antigens, these “Th-1 DCs” exhibited differential cytokine secretion and surface marker expression, a distinct transcriptional signature, and acquired the ability to enhance generation of CD8+ T lymphocytes. Mechanistically, tRNA-synthetases were implicated as components of a putative sensor complex involved in the comparison of class I and II epitopes. These data provide rigorous conceptual explanations for the process of Th-1 polarization and the antigenic specificity of cognate T-cell help, enhance the understanding of Th-1 responses, and should contribute to the formulation of more effective vaccination strategies. PMID:19171878

  2. Tunable impedance matching network fundamental limits and practical considerations

    NASA Astrophysics Data System (ADS)

    Allen, Wesley N.

    As wireless devices continue to increase in utility while decreasing in dimension, design of the RF front-end becomes more complex. It is common for a single handheld device to operate on a plethora of frequency bands, utilize multiple antennae, and be subjected to a variety of environments. One complexity in particular which arises from these factors is that of impedance mismatch. Recently, tunable impedance matching networks have begun to be implemented to address this problem. This dissertation presents the first in-depth study on the frequency tuning range of tunable impedance matching networks. Both the fundamental limitations of ideal networks as well as practical considerations for design and implementation are addressed. Specifically, distributed matching networks with a single tuning element are investigated for use with parallel resistor-capacitor and series resistor-inductor loads. Analytical formulas are developed to directly calculate the frequency tuning range TR of ideal topologies. The theoretical limit of TR for these topologies is presented and discussed. Additional formulas are developed which address limitations in transmission line characteristic impedance and varactor range. Equations to predict loss due to varactor quality factor are demonstrated and the ability of parasitics to both increase and decrease TR are shown. Measured results exemplify i) the potential to develop matching networks with a small impact from parasitics, ii) the need for accurate knowledge of parasitics when designing near transition points in optimal parameters, iii) the importance of using a transmission line with the right characteristic impedance, and iv) the ability to achieve extremely low loss at the design frequency with a lossy varactor under the right conditions (measured loss of -0.07 dB). In the area of application, tunable matching networks are designed and measured for mobile handset antennas, demonstrating up to a 3 dB improvement in power delivered to a planar inverted-F antenna and up to 4--5.6 dB improvement in power delivered to the iPhone(TM) antenna. Additionally, a single-varactor matching network is measured to achieve greater tuning range than a two-varactor matching network (> 824--960 MHz versus 850--915 MHz) and yield higher power handling. Addressing miniaturization, an accurate model of metal loss in planar integrated inductors for low-loss substrates is developed and demonstrated. Finally, immediate future research directions are suggested: i) expanding the topologies, tuning elements, and loads analyzed; ii) performing a deep study into parasitics; and iii) investigating power handling with various varactor technologies.
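
    One way to see why tuning-element range bounds the frequency tuning range TR: in the textbook low-pass L-section below, the capacitance required for a match scales as 1/f for fixed resistances, so a varactor's [c_min, c_max] swing maps directly onto a frequency interval. The topology and values are illustrative and are not the dissertation's networks.

```python
import numpy as np

def l_match(rs, rl, f_hz):
    """Classic low-pass L-section matching a larger source resistance rs
    to a smaller load rl (shunt C on the source side, series L to the
    load): Q = sqrt(rs/rl - 1), Xp = rs/Q, Xs = Q*rl. Returns (L, C)."""
    q = np.sqrt(rs / rl - 1.0)
    w = 2 * np.pi * f_hz
    c_shunt = q / (rs * w)          # shunt C from Xp = rs / q
    l_series = q * rl / w           # series L from Xs = q * rl
    return l_series, c_shunt

# With a fixed series inductor, the required capacitance falls as 1/f,
# so a varactor swing [c_min, c_max] translates into a bounded TR.
for f in (824e6, 900e6, 960e6):
    L, C = l_match(50.0, 10.0, f)
    print(f"{f/1e6:.0f} MHz: L = {L*1e9:.2f} nH, C = {C*1e12:.2f} pF")
```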

  3. ODMedit: uniform semantic annotation for data integration in medicine based on a public metadata repository.

    PubMed

    Dugas, Martin; Meidt, Alexandra; Neuhaus, Philipp; Storck, Michael; Varghese, Julian

    2016-06-01

    The volume and complexity of patient data - especially in personalised medicine - are steadily increasing, both regarding clinical data and genomic profiles: typically more than 1,000 items (e.g., laboratory values, vital signs, diagnostic tests, etc.) are collected per patient in clinical trials. In oncology, hundreds of mutations can potentially be detected for each patient by genomic profiling. Therefore, data integration from multiple sources constitutes a key challenge for medical research and healthcare. Semantic annotation of data elements can help to identify matching data elements in different sources and thereby supports data integration. Millions of different annotations are required due to the semantic richness of patient data. These annotations should be uniform, i.e., two matching data elements shall contain the same annotations. However, large terminologies like SNOMED CT or UMLS do not provide uniform coding. It is proposed to develop semantic annotations of medical data elements based on a large-scale public metadata repository. To achieve uniform codes, semantic annotations shall be re-used if a matching data element is available in the metadata repository. A web-based tool called ODMedit (https://odmeditor.uni-muenster.de/) was developed to create data models with uniform semantic annotations. It contains ~800,000 terms with semantic annotations derived from ~5,800 models from the portal of medical data models (MDM). The tool was successfully applied to manually annotate 22 forms with 292 data items from CDISC and to update 1,495 data models of the MDM portal. Uniform manual semantic annotation of data models is feasible in principle but requires a large-scale collaborative effort due to the semantic richness of patient data. A web-based tool for these annotations is available and is linked to a public metadata repository.

  4. Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments

    PubMed Central

    Ambroise, Matthieu; Levi, Timothée; Joucla, Sébastien; Yvert, Blaise; Saïghi, Sylvain

    2013-01-01

    This investigation of the leech heartbeat neural network system led to the development of low-resource, real-time, biomimetic digital hardware for use in hybrid experiments. The leech heartbeat neural network is one of the simplest central pattern generators (CPGs). In biology, CPGs provide the rhythmic bursts of spikes that form the basis for muscle contraction orders (heartbeat) and locomotion (walking, running, etc.). The leech neural network system was previously investigated and this CPG formalized in the Hodgkin–Huxley neural model (HH), the most complex neuron model devised to date. However, the resources required to implement a neural model are proportional to its complexity. In response to this issue, this article describes a biomimetic implementation of a network of 240 CPGs in an FPGA (Field Programmable Gate Array), using a simple model (Izhikevich), and proposes a new synapse model: the activity-dependent depression synapse. The network implementation architecture operates on a single computation core. This digital system works in real time, requires few resources, and has the same bursting activity behavior as the complex model. The implementation of this CPG was initially validated by comparing it with a simulation of the complex model. Its activity was then matched with pharmacological data from rat spinal cord activity. This digital system opens the way for future hybrid experiments and represents an important step toward hybridization of biological tissue and artificial neural networks. This CPG network is also likely to be useful for mimicking the locomotion activity of various animals and for developing hybrid experiments for neuroprosthesis development. PMID:24319408
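
    The Izhikevich model named above is what keeps the resource count low; as a minimal illustration, a single Izhikevich neuron in its intrinsically bursting regime can be simulated in a few lines (parameter values are from Izhikevich's published examples; the 240-CPG network and the activity-dependent depression synapse are not reproduced here).

    ```python
    # Izhikevich neuron (2003): v' = 0.04 v^2 + 5 v + 140 - u + I,
    # u' = a (b v - u); on spike (v >= 30 mV): v <- c, u <- u + d.
    # The a, b, c, d values below select the intrinsically bursting regime.
    a, b, c, d = 0.02, 0.2, -50.0, 2.0
    dt, t_end, I = 0.5, 1000.0, 10.0     # time step (ms), duration (ms), drive

    v, u = -65.0, b * -65.0              # resting initial conditions
    spikes = []
    for step in range(int(t_end / dt)):
        if v >= 30.0:                    # spike detected: reset
            spikes.append(step * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)

    print(f"{len(spikes)} spikes in {t_end:.0f} ms; burst onsets visible in {spikes[:8]}")
    ```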

  5. Systemic risk: the dynamics of model banking systems

    PubMed Central

    May, Robert M.; Arinaminpathy, Nimalan

    2010-01-01

    The recent banking crises have made it clear that increasingly complex strategies for managing risk in individual banks have not been matched by corresponding attention to overall systemic risks. We explore some simple mathematical caricatures for ‘banking ecosystems’, with emphasis on the interplay between the characteristics of individual banks (capital reserves in relation to total assets, etc.) and the overall dynamical behaviour of the system. The results are discussed in relation to potential regulations aimed at reducing systemic risk. PMID:19864264

  6. Marketing responsible drinking behavior: comparing the effectiveness of responsible drinking messages tailored to three possible "personality" conceptualizations.

    PubMed

    York, Valerie K; Brannon, Laura A; Miller, Megan M

    2012-01-01

    We investigated whether a thoroughly personalized message (tailored to a person's "Big Five" personality traits) or a message matched to an alternate form of self-schema (ideal self-schema) would be more influential than a self-schema matched message (that has been found to be effective) at marketing responsible drinking. We expected the more thoroughly personalized Big Five matched message to be more effective than the self-schema matched message. However, neither the Big Five message nor the ideal self-schema message was more effective than the actual self-schema message. Therefore, research examining self-schema matching should be pursued rather than more complex Big Five matching.

  7. The Hyper-Envelope Modeling Interface (HEMI): A Novel Approach Illustrated Through Predicting Tamarisk (Tamarix spp.) Habitat in the Western USA

    USGS Publications Warehouse

    Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.

    2013-01-01

    Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used the modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values than all approaches except Maxent, and better AIC values overall. HEMI created a model with only ten parameters, while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.

  8. A Deficit in Face-Voice Integration in Developing Vervet Monkeys Exposed to Ethanol during Gestation

    PubMed Central

    Zangenehpour, Shahin; Javadi, Pasha; Ervin, Frank R.; Palmour, Roberta M.; Ptito, Maurice

    2014-01-01

    Children with fetal alcohol spectrum disorders display behavioural and intellectual impairments that strongly implicate dysfunction within the frontal cortex. Deficits in social behaviour and cognition are amongst the most pervasive outcomes of prenatal ethanol exposure. Our naturalistic vervet monkey model of fetal alcohol exposure (FAE) provides an unparalleled opportunity to study the neurobehavioral outcomes of prenatal ethanol exposure in a controlled experimental setting. Recent work has revealed a significant reduction of the neuronal population in the frontal lobes of these monkeys. We used an intersensory matching procedure to investigate audiovisual perception of socially relevant stimuli in young FAE vervet monkeys. Here we show a domain-specific deficit in audiovisual integration of socially relevant stimuli. When FAE monkeys were shown a pair of side-by-side videos of a monkey concurrently presenting two different calls along with a single audio track matching the content of one of the calls, they were not able to match the correct video to the single audio track. This was manifest by their average looking time being equally spent towards both the matching and non-matching videos. However, a group of normally developing monkeys exhibited a significant preference for the non-matching video. This inability to integrate and thereby discriminate audiovisual stimuli was confined to the integration of faces and voices as revealed by the monkeys' ability to match a dynamic face to a complex tone or a black-and-white checkerboard to a pure tone, presumably based on duration and/or onset-offset synchrony. Together, these results suggest that prenatal ethanol exposure negatively affects a specific domain of audiovisual integration. This deficit is confined to the integration of information that is presented by the face and the voice and does not affect more elementary aspects of sensory integration. PMID:25470725

  9. A deficit in face-voice integration in developing vervet monkeys exposed to ethanol during gestation.

    PubMed

    Zangenehpour, Shahin; Javadi, Pasha; Ervin, Frank R; Palmour, Roberta M; Ptito, Maurice

    2014-01-01

    Children with fetal alcohol spectrum disorders display behavioural and intellectual impairments that strongly implicate dysfunction within the frontal cortex. Deficits in social behaviour and cognition are amongst the most pervasive outcomes of prenatal ethanol exposure. Our naturalistic vervet monkey model of fetal alcohol exposure (FAE) provides an unparalleled opportunity to study the neurobehavioral outcomes of prenatal ethanol exposure in a controlled experimental setting. Recent work has revealed a significant reduction of the neuronal population in the frontal lobes of these monkeys. We used an intersensory matching procedure to investigate audiovisual perception of socially relevant stimuli in young FAE vervet monkeys. Here we show a domain-specific deficit in audiovisual integration of socially relevant stimuli. When FAE monkeys were shown a pair of side-by-side videos of a monkey concurrently presenting two different calls along with a single audio track matching the content of one of the calls, they were not able to match the correct video to the single audio track. This was manifest by their average looking time being equally spent towards both the matching and non-matching videos. However, a group of normally developing monkeys exhibited a significant preference for the non-matching video. This inability to integrate and thereby discriminate audiovisual stimuli was confined to the integration of faces and voices as revealed by the monkeys' ability to match a dynamic face to a complex tone or a black-and-white checkerboard to a pure tone, presumably based on duration and/or onset-offset synchrony. Together, these results suggest that prenatal ethanol exposure negatively affects a specific domain of audiovisual integration. This deficit is confined to the integration of information that is presented by the face and the voice and does not affect more elementary aspects of sensory integration.

  10. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementation. Contrary to the image processing field, the problems of blurring, moderate deformation, low illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness to complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Due to its heavy processing requirements, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226

  11. A graph lattice approach to maintaining and learning dense collections of subgraphs as image features.

    PubMed

    Saund, Eric

    2013-10-01

    Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.

  12. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found a robust face identity aftereffect in both sets of experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, S; Larsen, S; Wagoner, J

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization of full three-dimensional (3D) finite difference modeling, as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project, in support of LLNL's national-security mission, benefits the U.S. military and intelligence community. Fiscal year (FY) 2003 was the final year of this project. In the 2.5 years this project was active, numerous and varied developments and milestones were accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A three-seismic-array vehicle tracking testbed was installed on site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Test Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in the experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capabilities to detect and locate in-tunnel explosions for mine safety and other applications. In FY03 specifically, a large and complex simulation experiment was conducted that tested the full modeling-based approach to geological characterization using E2D, the K-L statistical methodology, and matched field processing applied to tunnel detection with surface seismic sensors. The simulation validated the full methodology and the need for geological heterogeneity to be accounted for in the overall approach. The Lake Lynn site area was geologically modeled using the code Earthvision to produce a 32-million-node 3D model grid for E3D. Model linking issues were resolved and a number of full 3D model runs were accomplished using shot locations that matched the data. E3D-generated wavefield movies showed the reflection signal would be too small to be observed in the data due to trapped and attenuated energy in the weathered layer.
    An analysis of the few sensors coupled to bedrock did not improve the reflection signal strength sufficiently because the shots, though buried, were within the surface layer and hence attenuated. The ability to model a complex 3D geological structure and calculate synthetic seismograms that are in good agreement with actual data (especially for surface waves and below the complex weathered layer) was demonstrated. We conclude that E3D is a powerful tool for assessing the conditions under which a tunnel could be detected in a specific geological setting. Finally, the Lake Lynn tunnel explosion data were analyzed using standard array processing techniques. The results showed that single detonations could be detected and located, but simultaneous detonations would require a strategic placement of arrays.

  14. Bilinearity, Rules, and Prefrontal Cortex

    PubMed Central

    Dayan, Peter

    2007-01-01

    Humans can be instructed verbally to perform computationally complex cognitive tasks; their performance then improves relatively slowly over the course of practice. Many skills underlie these abilities; in this paper, we focus on the particular question of a uniform architecture for the instantiation of habitual performance and the storage, recall, and execution of simple rules. Our account builds on models of gated working memory, and involves a bilinear architecture for representing conditional input-output maps and for matching rules to the state of the input and working memory. We demonstrate the performance of our model on two paradigmatic tasks used to investigate prefrontal and basal ganglia function. PMID:18946523

  15. Information for Successful Interaction with Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Johnson, Kathy A.

    2003-01-01

    Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.

  16. Dosimetry and field matching for radiotherapy to the breast and supraclavicular fossa

    NASA Astrophysics Data System (ADS)

    Winfield, Elizabeth

    Radiotherapy for early breast cancer aims to achieve local disease control and decrease loco-regional recurrence rates. Treatment may be directed to the breast or chest wall alone, or include regional lymph nodes. When using tangential fields to treat the breast, a separate anterior field directed to the axilla and supraclavicular fossa (SCF) is needed to treat nodal areas. The complex geometry of this region necessitates matching of adjacent radiation fields in three dimensions. The potential exists for zones of overdosage or underdosage along the match line. Cosmetic results may be compromised if treatment fields are not accurately aligned. Techniques for field matching vary between centres in the UK. A study of dosimetry across the match line region using different techniques, as reported in the multi-centre START Trial Quality Assurance (QA) programme, was undertaken. A custom-made anthropomorphic phantom was designed to assess dose distribution in three dimensions using film dosimetry. Methods with varying degrees of complexity were employed to match tangential and SCF beams. Various techniques combined half-beam blocking and machine rotations to achieve geometric alignment. Matching of asymmetric beams allowed a single-isocentre technique to be used. Where field matching was not undertaken, a gap between tangential and SCF fields was employed. Results demonstrated differences between techniques in addition to variations within the same technique between different centres. Geometric alignment techniques produced more homogeneous dose distributions in the match region than gap techniques or techniques not correcting for field divergence. For this multi-centre assessment of match plane techniques, film dosimetry used in conjunction with a breast-shaped phantom provided relative dose information. This study has highlighted the difficulties of matching treatment fields to achieve a homogeneous dose distribution through the region of the match plane, and the degree of inhomogeneity that results from a gap between treatment fields.

  17. A hybrid approach to estimate the complex motions of clouds in sky images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong

    Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors relative to the ground-truth motions by at least 30% in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
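
    As a hedged sketch of the block-matching half of such a hybrid tracker (the optical-flow component and the fusion logic are omitted, and the block and search-window sizes are arbitrary choices), exhaustive-search block matching under a mean-absolute-difference criterion can be written as:

    ```python
    import numpy as np

    def block_match(prev, curr, block=16, search=8):
        """For each block in `prev`, find the displacement (dy, dx) within
        +/-search pixels that minimizes mean absolute difference in `curr`."""
        H, W = prev.shape
        vectors = []
        for y in range(0, H - block + 1, block):
            for x in range(0, W - block + 1, block):
                ref = prev[y:y + block, x:x + block].astype(float)
                best, best_v = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= H - block and 0 <= xx <= W - block:
                            cand = curr[yy:yy + block, xx:xx + block].astype(float)
                            mad = np.abs(ref - cand).mean()
                            if mad < best:
                                best, best_v = mad, (dy, dx)
                vectors.append(((y, x), best_v))
        return vectors

    # Synthetic check: a random "cloud" image shifted by (2, 3) pixels.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    print(block_match(img, np.roll(img, (2, 3), axis=(0, 1)))[0])  # ~((0, 0), (2, 3))
    ```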

  18. A hybrid approach to estimate the complex motions of clouds in sky images

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2016-09-14

    Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors relative to the ground-truth motions by at least 30% in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.

  19. A finite element model to study the effect of tissue anisotropy on ex vivo arterial shear wave elastography measurements

    NASA Astrophysics Data System (ADS)

    Shcherbakova, D. A.; Debusschere, N.; Caenen, A.; Iannaccone, F.; Pernot, M.; Swillens, A.; Segers, P.

    2017-07-01

    Shear wave elastography (SWE) is an ultrasound (US) diagnostic method for measuring the stiffness of soft tissues based on generated shear waves (SWs). SWE has been applied to bulk tissues, but in arteries it is still under investigation. Previous studies in arteries or arterial phantoms demonstrated the potential of SWE to measure arterial wall stiffness, a relevant marker in the prediction of cardiovascular diseases. This study is focused on numerical modelling of SWs in ex vivo equine aortic tissue, based on experimental SWE measurements with the tissue dynamically loaded while rotating the US probe to investigate the sensitivity of SWE to the anisotropic structure. A good match with experimental shear wave group speed results was obtained. SWs were sensitive to the orthotropy and nonlinearity of the material. The model also allowed us to study the nature of the SWs by performing 2D FFT-based and analytical phase analyses. A good match between numerical group velocities derived using the time-of-flight algorithm and those derived from the dispersion curves was found in the cross-sectional and axial arterial views. The complexity of solving analytical equations for nonlinear orthotropic stressed plates is discussed.

  20. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as they are used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently, the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
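
    The geometric core of such a simulator is the projection of 3D points through a camera model that includes photogrammetric distortion. A minimal sketch with a pinhole model plus Brown-type radial distortion follows (the paper's colour interpolation and MTF/PSF stages are not reproduced, and all parameter values are illustrative):

    ```python
    import numpy as np

    def project(points, f, cx, cy, k1, k2):
        """Project 3D camera-frame points (x, y, z) to pixel coordinates with
        a pinhole model and Brown radial distortion terms k1, k2 -- geometry
        only, no radiometry."""
        x = points[:, 0] / points[:, 2]          # normalized image coordinates
        y = points[:, 1] / points[:, 2]
        r2 = x**2 + y**2
        d = 1.0 + k1 * r2 + k2 * r2**2           # radial distortion factor
        return np.stack([f * x * d + cx, f * y * d + cy], axis=1)

    pts = np.array([[0.10, -0.05, 2.0], [0.30, 0.20, 5.0]])  # metres, camera frame
    print(project(pts, f=1500.0, cx=640.0, cy=480.0, k1=-0.2, k2=0.05))
    ```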

  1. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified oriented FAST and rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. We therefore propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
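
    For orientation, a baseline version of such a two-stage pipeline can be sketched with stock OpenCV building blocks: plain ORB in place of the paper's modified detector, brute-force Hamming matching, and RANSAC standing in for the progressive sample consensus step (the geometric-invariant descriptors and the custom cost function are not reproduced):

    ```python
    import cv2
    import numpy as np

    def register(img1, img2):
        """Baseline feature registration: ORB keypoints + Hamming matching +
        RANSAC homography. Returns (homography, inlier count, match count)."""
        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # RANSAC here; the paper removes outliers with PROSAC instead
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, int(mask.sum()), len(matches)

    # usage: H, inliers, n = register(cv2.imread("a.png", 0), cv2.imread("b.png", 0))
    ```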

  2. Direct pore-scale reactive transport modelling of dynamic wettability changes induced by surface complexation

    NASA Astrophysics Data System (ADS)

    Maes, Julien; Geiger, Sebastian

    2018-01-01

    Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding can be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanisms of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented, based on Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on the pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injection velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.

  3. Practical aspects of complex permittivity reconstruction with neural-network-controlled FDTD modeling of a two-port fixture.

    PubMed

    Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V

    2007-01-01

    The paper discusses characteristics of a new modeling-based technique for determining the dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. The computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and the loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configurations provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation of the method considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built for a sparse grid on the complex permittivity plane, locates a domain with the anticipated solution; the second, a denser grid covering the determined domain, finds the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data are represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample when the experimental data on the reflection and transmission coefficients are numerically dispersive (noise-contaminated). A special modeling test is proposed for validating the results; it confirms that the values of complex permittivity for several liquids (including salt water, acetone, and three types of alcohol) at 915 MHz are reconstructed with satisfactory accuracy.

  4. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time, O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  5. Stressor-layer-induced elastic strain sharing in SrTiO₃ complex oxide sheets

    DOE PAGES

    Tilka, J. A.; Park, J.; Ahn, Y.; ...

    2018-02-26

    A precisely selected elastic strain can be introduced in submicron-thick single-crystal SrTiO₃ sheets using a silicon nitride stressor layer. A conformal stressor layer deposited using plasma-enhanced chemical vapor deposition produces an elastic strain in the sheet consistent with the magnitude of the nitride residual stress. Synchrotron x-ray nanodiffraction reveals that the strain introduced in the SrTiO₃ sheets is on the order of 10⁻⁴, matching the predictions of an elastic model. Using this approach to elastic strain sharing in complex oxides allows the strain to be selected within a wide and continuous range of values, an effect not achievable in heteroepitaxy on rigid substrates.

  6. Stressor-layer-induced elastic strain sharing in SrTiO₃ complex oxide sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tilka, J. A.; Park, J.; Ahn, Y.

    A precisely selected elastic strain can be introduced in submicron-thick single-crystal SrTiO₃ sheets using a silicon nitride stressor layer. A conformal stressor layer deposited using plasma-enhanced chemical vapor deposition produces an elastic strain in the sheet consistent with the magnitude of the nitride residual stress. Synchrotron x-ray nanodiffraction reveals that the strain introduced in the SrTiO₃ sheets is on the order of 10⁻⁴, matching the predictions of an elastic model. Using this approach to elastic strain sharing in complex oxides allows the strain to be selected within a wide and continuous range of values, an effect not achievable in heteroepitaxy on rigid substrates.

  7. Using a system of differential equations that models cattle growth to uncover the genetic basis of complex traits.

    PubMed

    Freua, Mateus Castelani; Santana, Miguel Henrique de Almeida; Ventura, Ricardo Vieira; Tedeschi, Luis Orlindo; Ferraz, José Bento Sterman

    2017-08-01

    The interplay between dynamic models of biological systems and genomics is based on the assumption that genetic variation of the complex trait (i.e., the outcome of model behavior) arises from component traits (i.e., model parameters) at lower hierarchical levels. In order to provide a proof of concept of this statement for a cattle growth model, we ask whether model parameters map genomic regions that harbor quantitative trait loci (QTLs) already described for the complex trait. We conducted a genome-wide association study (GWAS) with a Bayesian hierarchical LASSO method on two parameters of the Davis Growth Model, a system of three ordinary differential equations describing DNA accretion, protein synthesis and degradation, and fat synthesis. Phenotypic and genotypic data were available for 893 Nellore (Bos indicus) cattle. Computed values were 0.005 ± 0.003 for parameter k1 (DNA accretion rate) and 0.134 ± 0.024 for α (a constant for energy maintenance requirements). The expected biological interpretation of the parameters is confirmed by the QTLs mapped for k1 and α. QTLs within genomic regions mapped for k1 are expected to be correlated with the DNA pool: body size and weight. Single nucleotide polymorphisms (SNPs) that were significant for α mapped QTLs already associated with residual feed intake, feed conversion ratio, average daily gain (ADG), body weight, and dry matter intake. SNPs identified for k1 were able to additionally explain 2.2% of the phenotypic variability of the complex trait ADG, even though the SNPs for k1 did not match the genomic regions associated with ADG. Although improvements are needed, our findings suggest that genomic analysis of component traits may help to uncover the genetic basis of more complex traits, particularly when lower biological hierarchies are mechanistically described by mathematical simulation models.
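
    The abstract does not reproduce the Davis Growth Model equations. Purely to illustrate how a parameter such as k1 enters a three-state growth system as a component trait, the following scipy sketch integrates a made-up system of the same general shape (DNA, protein, fat); only the two parameter values quoted above are taken from the abstract, and every rate form and constant is an assumption.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def growth(t, y, k1, alpha):
        """Toy three-state growth system (NOT the published Davis model):
        saturating DNA accretion driven by k1, protein as synthesis minus
        degradation, fat gain scaled by the energy parameter alpha."""
        dna, protein, fat = y
        d_dna = k1 * dna * max(0.0, 1.0 - dna / 2.0)
        d_protein = 0.10 * dna - 0.01 * protein
        d_fat = 0.05 * alpha * protein
        return [d_dna, d_protein, d_fat]

    k1, alpha = 0.005, 0.134   # parameter values reported in the abstract
    sol = solve_ivp(growth, (0.0, 400.0), [0.1, 1.0, 0.5], args=(k1, alpha))
    print("final state (DNA, protein, fat):", sol.y[:, -1])
    ```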

  8. Individual Colorimetric Observer Model

    PubMed Central

    Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent

    2016-01-01

    This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
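
    A hedged sketch of the Monte Carlo step: draw physiological parameters from assumed normal spreads and apply them to toy cone fundamentals. The real model perturbs the CIE 2006 physiological observer with eight parameters; the spreads, nominal peak wavelengths, and Gaussian cone shapes below are placeholders for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    wl = np.arange(390.0, 731.0, 1.0)             # wavelength grid (nm)

    def toy_cone(peak, width=60.0):
        """Gaussian stand-in for a cone fundamental (illustrative only)."""
        return np.exp(-0.5 * ((wl - peak) / width) ** 2)

    def sample_observer():
        """One simulated observer: perturb photopigment optical densities and
        lambda-max positions (two of the model's eight parameter groups)."""
        density = rng.normal(1.0, 0.10, size=3)   # L, M, S density scalings
        shift = rng.normal(0.0, 1.5, size=3)      # lambda-max shifts (nm)
        peaks = np.array([566.0, 541.0, 441.0]) + shift
        return [d * toy_cone(p) for d, p in zip(density, peaks)]

    observers = [sample_observer() for _ in range(1000)]
    l_peaks = [wl[int(np.argmax(o[0]))] for o in observers]
    print(f"L-cone peak across simulated observers: "
          f"{np.mean(l_peaks):.1f} +/- {np.std(l_peaks):.1f} nm")
    ```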

  9. Excitation transfer and trapping kinetics in plant photosystem I probed by two-dimensional electronic spectroscopy.

    PubMed

    Akhtar, Parveen; Zhang, Cheng; Liu, Zhengtang; Tan, Howe-Siang; Lambrev, Petar H

    2018-03-01

    Photosystem I is a robust and highly efficient biological solar engine. Its capacity to utilize virtually every absorbed photon's energy in a photochemical reaction generates great interest in the kinetics and mechanisms of excitation energy transfer and charge separation. In this work, we have employed room-temperature coherent two-dimensional electronic spectroscopy and time-resolved fluorescence spectroscopy to follow exciton equilibration and excitation trapping in intact Photosystem I complexes as well as core complexes isolated from Pisum sativum. We performed two-dimensional electronic spectroscopy measurements with low excitation pulse energies to record excited-state kinetics free from singlet-singlet annihilation. Global lifetime analysis resolved energy transfer and trapping lifetimes that closely match the time-correlated single-photon counting data. Exciton energy equilibration in the core antenna occurred on a timescale of 0.5 ps. We further observed a spectral equilibration component in the core complex with a 3-4 ps lifetime between the bulk Chl states and a state absorbing at 700 nm. Trapping in the core complex occurred with a 20 ps lifetime, which in the supercomplex split into two lifetimes, 16 ps and 67-75 ps. The experimental data could be modelled with two alternative models resulting in equally good fits: a transfer-to-trap-limited model and a trap-limited model. However, the former model is only possible if the 3-4 ps component is ascribed to equilibration with a "red" core antenna pool absorbing at 700 nm. Conversely, if these low-energy states are identified with the P700 reaction centre, the transfer-to-trap-limited model is ruled out in favour of a trap-limited model.

  10. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching on pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become very feasible within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4):671-687, 2013.
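
    On a toy one-parameter problem, the two ingredients can be sketched as follows: a cheap polynomial surrogate trained on a handful of "expensive" runs (plain least-squares monomials standing in for the aPC's orthogonal basis), followed by Bootstrap filtering as importance weighting plus resampling. The model, prior, observation, and noise level are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_model(k):
        """Stand-in for a 12-day reservoir simulation: pressure response to a
        permeability multiplier k (toy nonlinear function)."""
        return 50.0 + 30.0 * np.log(k) + 5.0 * k

    # Stage 1 -- surrogate: fit a low-order polynomial to a few model runs.
    k_train = np.linspace(0.2, 3.0, 7)
    coeffs = np.polyfit(k_train, expensive_model(k_train), deg=3)
    surrogate = lambda k: np.polyval(coeffs, k)

    # Stage 2 -- Bootstrap filter: weight prior samples by the likelihood of
    # the observed pressure, then resample to approximate the posterior.
    prior = rng.lognormal(mean=0.0, sigma=0.5, size=20000)
    obs, noise = 78.0, 2.0
    w = np.exp(-0.5 * ((surrogate(prior) - obs) / noise) ** 2)
    w /= w.sum()
    posterior = rng.choice(prior, size=20000, p=w)
    print(f"posterior multiplier: mean {posterior.mean():.3f}, sd {posterior.std():.3f}")
    ```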

  11. Landmine detection using two-tapped joint orthogonal matching pursuits

    NASA Astrophysics Data System (ADS)

    Goldberg, Sean; Glenn, Taylor; Wilson, Joseph N.; Gader, Paul D.

    2012-06-01

    Joint Orthogonal Matching Pursuits (JOMP) is used here in the context of landmine detection using data obtained from an electromagnetic induction (EMI) sensor. The response from an object containing metal can be decomposed into a discrete spectrum of relaxation frequencies (DSRF), from which we construct a dictionary. A greedy iterative algorithm is proposed for computing successive residuals of a signal by subtracting away the highest-matching dictionary element at each step. The final confidence of a particular signal is a combination of the reciprocal of this residual and the mean of the complex component. A two-tap approach comparing signals on opposite sides of the geometric location of the sensor is examined and found to produce better classification. It is found that using only a single pursuit does a comparable job, reducing complexity and allowing for real-time implementation in automated target recognition systems. JOMP is particularly highlighted in comparison with a previous EMI detection algorithm known as String Match.
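
    The greedy pursuit described here is, at its core, (orthogonal) matching pursuit: repeatedly select the dictionary atom best correlated with the current residual and refit. A minimal real-valued sketch follows; the EMI application's complex DSRF atoms and two-tap confidence measure are not reproduced.

    ```python
    import numpy as np

    def omp(D, y, n_iter=3):
        """Orthogonal matching pursuit: greedily pick unit-norm columns of D
        to explain y, refitting coefficients by least squares each step.
        Returns the selected atom indices and the final residual norm."""
        residual, support = y.copy(), []
        for _ in range(n_iter):
            corr = np.abs(D.T @ residual)        # match every atom to residual
            corr[support] = -np.inf              # never pick an atom twice
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef  # subtract fitted contribution
        return support, float(np.linalg.norm(residual))

    rng = np.random.default_rng(2)
    D = rng.normal(size=(64, 32))
    D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
    y = 2.0 * D[:, 5] - 1.5 * D[:, 17]           # signal built from atoms 5, 17
    print(omp(D, y))                             # support should contain 5 and 17
    ```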

  12. Velocimetry with refractive index matching for complex flow configurations, phase 1

    NASA Technical Reports Server (NTRS)

    Thompson, B. E.; Vafidis, C.; Whitelaw, J. H.

    1987-01-01

    The feasibility of obtaining detailed velocity field measurements in large Reynolds number flow of the Space Shuttle Main Engine (SSME) main injector bowl was demonstrated using laser velocimetry and the developed refractive-index-matching technique. An experimental system to provide appropriate flow rates and temperature control of refractive-index-matching fluid was designed and tested. Test results are presented to establish the feasibility of obtaining accurate velocity measurements that map the entire field including the flow through the LOX post bundles: sample mean velocity, turbulence intensity, and spectral results are presented. The results indicate that a suitable fluid and control system is feasible for the representation of complex rocket-engine configurations and that measurements of velocity characteristics can be obtained without the optical access restrictions normally associated with laser velocimetry. The refractive-index-matching technique considered needs to be further developed and extended to represent other rocket-engine flows where current methods either cannot measure with adequate accuracy or they fail.

  13. A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model

    PubMed Central

    Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong

    2016-01-01

    Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low-cost inertial sensors. However, PDR localization severely suffers from the accumulation of positioning errors, and external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects, and it is matched to the context pre-stored offline in the database to obtain the pedestrian's location. The Hidden Markov Model (HMM) and the Recursive Viterbi Algorithm are used to do the matching, which reduces the time complexity and saves storage. In addition, the authors design a turn detection algorithm and take the context of a corner as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian's starting point quickly and improves the positioning accuracy of PDR by up to 40.56%, with good stability and robustness. PMID:27916922
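
    A minimal sketch of the decoding step: given a sequence of recognized contexts (e.g., turn types detected at corners) and a transition model over candidate map locations, the Viterbi recursion returns the most likely location sequence. All matrices below are toy values, not the paper's.

    ```python
    import numpy as np

    def viterbi(log_A, log_B, log_pi, obs):
        """Most likely hidden-state path given log transition matrix log_A
        (S x S), log emission matrix log_B (S x O), log initial distribution
        log_pi (S,), and an observation index sequence obs."""
        S, T = log_A.shape[0], len(obs)
        delta = np.full((T, S), -np.inf)     # best path log-probabilities
        psi = np.zeros((T, S), dtype=int)    # backpointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):        # backtrack through the pointers
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    # Toy map with 3 corners (states) and 2 observable turn types.
    A = np.log([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]])
    B = np.log([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])   # P(turn type | corner)
    pi = np.log([1 / 3, 1 / 3, 1 / 3])
    print(viterbi(A, B, pi, [0, 1, 1]))      # most likely corner sequence
    ```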

  14. [COMPUTER ASSISTED DESIGN AND ELECTRON BEAM MELTING RAPID PROTOTYPING METAL THREE-DIMENSIONAL PRINTING TECHNOLOGY FOR PREPARATION OF INDIVIDUALIZED FEMORAL PROSTHESIS].

    PubMed

    Liu, Hongwei; Weng, Yiping; Zhang, Yunkun; Xu, Nanwei; Tong, Jing; Wang, Caimei

    2015-09-01

    To study the feasibility of preparing an individualized femoral prosthesis through computer-assisted design and electron beam melting rapid prototyping (EBM-RP) metal three-dimensional (3D) printing technology. One adult male left femur specimen was scanned with 64-slice spiral CT; the tomographic image data were imported into Mimics 15.0 software to reconstruct a femoral 3D model, and the 3D model of the individualized femoral prosthesis was then designed with UG 8.0 software. Finally, the 3D model data were imported into an EBM-RP metal 3D printer to print the individualized sleeve. According to the 3D model of the individualized prosthesis, the customized sleeve was successfully prepared through EBM-RP metal 3D printing and assembled with the standard handle component of the SR modular femoral prosthesis to make the individualized femoral prosthesis. A customized femoral prosthesis accurately matching the metaphyseal cavity can be designed through thin-slice CT scanning and computer-assisted design. A personalized titanium alloy prosthesis with a complex 3D shape, porous surface, and good fit to the metaphyseal cavity can be manufactured by EBM-RP metal 3D printing; the technology is convenient, rapid, and accurate.

  15. Communication Channel Estimation and Waveform Design: Time Delay Estimation on Parallel, Flat Fading Channels

    DTIC Science & Technology

    2010-02-01

    … channels, so the channel gain is known on each realization and used in a coherent matched filter (channel model 1A); and (c) Rayleigh channels with noncoherent matched filters, averaged over Rayleigh channel realizations (channel model 1A). (b) Noncoherent matched filters with Rayleigh fading (channel model 3). MSEs are …

  16. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. Traditional dense stereo-matching algorithms are time consuming due to the long search range and the high complexity of the similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band-limited patterns. In order to prune the search range, we execute an initial matching before the exhaustive matching, and we evaluate the similarity measure using logical comparisons instead of complicated floating-point operations. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experimental results verify the computational efficiency and matching accuracy of the method.
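
    A toy sketch of the core trick: pack each pixel's N binary pattern observations into an integer code and take as correspondence the pixel, inside a pruned disparity window on the same row, whose code minimizes the Hamming distance, using only XOR and popcount (the fringe decoding, triangulation, and subpixel steps are omitted; the codes here are random stand-ins).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 16                                     # number of projected binary patterns

    # Toy data: codes along one image row; the right row is the left row
    # shifted by a known disparity.
    row_left = rng.integers(0, 2 ** N, size=80)
    true_disp = 7
    row_right = np.roll(row_left, true_disp)

    def popcount(v):
        return bin(int(v)).count("1")

    def match(xl, max_disp=15):
        """Right-row position matching left pixel xl: minimum Hamming distance
        over a pruned disparity window, logical operations only."""
        best_h, best_x = N + 1, None
        for d in range(max_disp + 1):
            xr = xl + d
            if xr < len(row_right):
                h = popcount(row_left[xl] ^ row_right[xr])
                if h < best_h:
                    best_h, best_x = h, xr
        return best_x

    print(match(10) - 10)                      # recovered disparity, expect 7
    ```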

  17. A FPGA-based architecture for real-time image matching

    NASA Astrophysics Data System (ADS)

    Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo

    2013-10-01

    Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken from the same scene at different viewpoints or different times. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we implement BRIEF description and matching on the FPGA as well. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demands of most real-life computer vision applications.

  18. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
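
    As a hedged numerical illustration of the variational idea (not the authors' molecular implementation): discretize a one-dimensional "atomistic" Boltzmann ensemble and scan a one-parameter harmonic coarse-grained family for the value minimizing the relative entropy S_rel = Σᵢ p_AA(i) ln[p_AA(i)/p_CG(i)].

    ```python
    import numpy as np

    x = np.linspace(-3.0, 3.0, 601)            # discretized configuration space

    def boltzmann(u):
        """Normalized Boltzmann weights for a potential u (kT = 1)."""
        p = np.exp(-u)
        return p / p.sum()

    p_aa = boltzmann(0.5 * x**2 + 0.3 * x**4)  # anharmonic "atomistic" target

    def s_rel(k):
        """Relative entropy of the harmonic CG candidate with spring constant k."""
        p_cg = boltzmann(0.5 * k * x**2)
        return float(np.sum(p_aa * np.log(p_aa / p_cg)))

    ks = np.linspace(0.5, 5.0, 200)
    best = ks[int(np.argmin([s_rel(k) for k in ks]))]
    print(f"optimal CG spring constant ~ {best:.2f}, S_rel = {s_rel(best):.4f}")
    ```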

  19. Small area estimation of obesity prevalence and dietary patterns: a model applied to Rio de Janeiro city, Brazil.

    PubMed

    Cataife, Guido

    2014-03-01

    We propose the use of previously developed small area estimation techniques to monitor obesity and dietary habits in developing countries and apply the model to Rio de Janeiro city. We estimate obesity prevalence rates at the census tract level through a combinatorial optimization spatial microsimulation model that matches body mass index and socio-demographic data in Brazil's 2008-09 family expenditure survey with Census 2010 socio-demographic data. Obesity ranges from 8% to 25% in most areas and affects the poor almost as much as the rich. Male and female obesity rates are uncorrelated at the small area level. The model is an effective tool for understanding the complexity of the problem and aiding in policy design. © 2013 Published by Elsevier Ltd.

  20. Doppler ultrasonography of the anterior knee tendons in elite badminton players: colour fraction before and after match.

    PubMed

    Koenig, M J; Torp-Pedersen, S; Boesen, M I; Holm, C C; Bliddal, H

    2010-02-01

    Anterior knee tendon problems are seldom reported in badminton players, although the game is obviously stressful to the lower extremities. The study hypotheses were that painful anterior knee tendons are common among elite badminton players, that the anterior knee tendons exhibit colour Doppler activity, that this activity increases after a match, and that painful tendons show more Doppler activity than tendons without pain. The design was a cohort study. 72 elite badminton players were interviewed about training, pain and injuries. The participants were scanned with high-end ultrasound equipment. Colour Doppler was used to examine the tendons of 64 players before a match and 46 players after a match. Intratendinous colour Doppler flow was measured as colour fraction (CF). The tendon complex was divided into three loci: the quadriceps tendon, the proximal patellar tendon and the insertion on the tibial tuberosity. Interview: of the 72 players, 62 had problems with 86 tendons in the lower extremity; 48 of these 86 were anterior knee tendons. Ultrasound: at baseline, the majority of players (87%) had colour Doppler flow in at least one scanning position. After a match, the percentage of knee complexes involved did not change. CF increased significantly in the dominant leg at the tibial tuberosity; singles players had a significantly higher CF at the tibial tuberosity after a match, and in the patellar tendon both before and after a match. Painful tendons had the highest colour Doppler activity. Most elite badminton players had pain in the anterior knee tendons and intratendinous Doppler activity both before and after a match. High levels of Doppler activity were associated with self-reported ongoing pain.

  1. Modeling multicomponent ion exchange equilibrium utilizing hydrous crystalline silicotitanates by a multiple interactive ion exchange site model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Z.; Anthony, R.G.; Miller, J.E.

    1997-06-01

    An equilibrium multicomponent ion exchange model is presented for the ion exchange of group I metals by TAM-5, a hydrous crystalline silicotitanate. On the basis of the data from ion exchange and structure studies, the solid phase is represented as Na₃X instead of the usual form of NaX. By using this solid phase representation, the solid can be considered as an ideal phase. A set of model ion exchange reactions is proposed for ion exchange between H⁺, Na⁺, K⁺, Rb⁺, and Cs⁺. The equilibrium constants for these reactions were estimated from experiments with simple ion exchange systems. Bromley's model for activity coefficients of electrolytic solutions was used to account for liquid phase nonideality. Bromley's model parameters for CsOH at high ionic strength and for NO₂⁻ and Al(OH)₄⁻ were estimated in order to apply the model to complex waste simulants. The equilibrium compositions and distribution coefficients of counterions were calculated for complex simulants typical of DOE wastes by solving the equilibrium equations for the model reactions and the material balance equations. The predictions match the experimental results within 10% for all of these solutions.
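    For a single exchange reaction, the structure of such a model is easy to see. A toy sketch (all numbers are illustrative placeholders, not the paper's fitted constants) solving mass action plus a mass balance for Cs⁺ uptake, with ideal solid and liquid phases:

    ```python
    from scipy.optimize import brentq

    K, Q = 50.0, 0.58          # equilibrium constant; exchange capacity (mmol/g) - placeholders
    C_cs0, C_na = 1.0e-3, 5.0  # initial Cs+ and (excess, ~constant) Na+ (mol/L)
    m = 10.0                   # solid loading (g/L)

    def residual(x):
        """x = Cs+ loading on the solid (mmol/g).
        Mass action K = (x/(Q-x)) * (C_na/c_cs) combined with the Cs mass balance."""
        c_cs = C_cs0 - x * m * 1e-3          # Cs+ left in solution (mol/L)
        return K * c_cs * (Q - x) - x * C_na

    x = brentq(residual, 0.0, 0.999 * min(Q, C_cs0 / (m * 1e-3)))
    kd = x / (C_cs0 - x * m * 1e-3)          # distribution coefficient (mL/g)
    print(f"Cs loading {x:.3e} mmol/g, Kd = {kd:.0f} mL/g")
    ```

    The full model couples one such equation per counterion, with Bromley activity coefficients replacing the ideal-solution assumption.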

  2. Interactive, process-oriented climate modeling with CLIMLAB

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2016-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates, and Python is an increasingly important language in STEM fields.
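    As a flavor of the workflow, a minimal sketch assuming climlab's documented EBM class and integrate_years method (check the project documentation for current syntax):

    ```python
    import climlab

    ebm = climlab.EBM(num_lat=90)   # 1D energy-balance model assembled from subprocesses
    print(ebm)                      # lists the coupled subprocesses (insolation, albedo, diffusion, ...)
    ebm.integrate_years(5.0)        # step the coupled model toward equilibrium
    print(float(ebm.Ts.mean()))     # global-mean surface temperature
    ```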

  3. Automated Threshold Selection for Template-Based Sonar Target Detection

    DTIC Science & Technology

    2017-08-01

    A test based on the distribution of the matched-filter correlations is used: from the matched-filter output, target-sized areas and their surroundings are evaluated on the synthetic aperture sonar data that were part of the evaluation. Figure 3 shows a nearly uniform seafloor, while Figure 4 is more complex.

  4. Detection of timescales in evolving complex systems

    PubMed Central

    Darst, Richard K.; Granell, Clara; Arenas, Alex; Gómez, Sergio; Saramäki, Jari; Fortunato, Santo

    2016-01-01

    Most complex systems are intrinsically dynamic in nature. The evolution of a dynamic complex system is typically represented as a sequence of snapshots, where each snapshot describes the configuration of the system at a particular instant of time. This is often done by using constant intervals but a better approach would be to define dynamic intervals that match the evolution of the system’s configuration. To this end, we propose a method that aims at detecting evolutionary changes in the configuration of a complex system, and generates intervals accordingly. We show that evolutionary timescales can be identified by looking for peaks in the similarity between the sets of events on consecutive time intervals of data. Tests on simple toy models reveal that the technique is able to detect evolutionary timescales of time-varying data both when the evolution is smooth as well as when it changes sharply. This is further corroborated by analyses of several real datasets. Our method is scalable to extremely large datasets and is computationally efficient. This allows a quick, parameter-free detection of multiple timescales in the evolution of a complex system. PMID:28004820
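    A toy version of the detection idea (illustrative only; the published method differs in detail): score the similarity of the event sets in consecutive windows, then scan the window width and look for peaks.

    ```python
    import numpy as np

    def jaccard(a, b):
        """Similarity of two event sets."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def consecutive_similarity(event_ids, times, width):
        """Similarity of event sets in adjacent windows of a given width."""
        edges = np.arange(times.min(), times.max() - 2 * width, width)
        return np.array([jaccard(event_ids[(times >= e) & (times < e + width)],
                                 event_ids[(times >= e + width) & (times < e + 2 * width)])
                         for e in edges])

    def timescale_scan(event_ids, times, widths):
        """Mean consecutive-window similarity vs width; peaks mark candidate timescales."""
        return np.array([consecutive_similarity(event_ids, times, w).mean() for w in widths])
    ```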

  5. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with much better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman theatre of Mandeure (France), and the medieval fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.

  6. A functional model of sensemaking in a neurocognitive architecture.

    PubMed

    Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.

  7. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  8. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face major challenges in the modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of least-squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
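    The core of the simultaneous scheme can be illustrated in a few lines (a toy sketch, not the dissertation's SDRMC environment): measurement adjustments weighted by their standard deviations and the model equations are solved together with Levenberg-Marquardt.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    y_meas = np.array([100.3, 64.8, 34.2])   # measured flows: inlet, outlet A, outlet B
    sigma = np.array([1.0, 0.8, 0.8])        # measurement standard deviations

    def residuals(z):
        y, split = z[:3], z[3]               # reconciled flows; model parameter
        cons = 1e3 * np.array([y[0] - y[1] - y[2],      # mass balance, heavily weighted
                               y[1] - split * y[0]])    # model equation
        return np.concatenate([(y - y_meas) / sigma, cons])

    sol = least_squares(residuals, x0=[100.0, 65.0, 35.0, 0.65], method="lm")
    print("reconciled flows:", sol.x[:3], " split fraction:", sol.x[3])
    ```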

  9. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first construct simple point source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
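    For context, G-FAST-style point-source magnitudes come from a peak-ground-displacement scaling law of the form log10(PGD) = A + B*Mw + C*Mw*log10(R). A sketch of the inversion (the coefficients below are placeholders for illustration, not the published regression values):

    ```python
    import numpy as np

    A, B, C = -4.434, 1.047, -0.138   # placeholder coefficients, illustration only

    def mw_from_pgd(pgd_cm, r_km):
        """Invert the scaling law for Mw given peak ground displacement (cm)
        and source distance R (km); in practice this is averaged over stations."""
        return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

    print(mw_from_pgd(pgd_cm=30.0, r_km=100.0))   # large displacement far away -> large Mw
    ```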

  10. Computation of wind tunnel wall effects for complex models using a low-order panel method

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Harris, Scott H.

    1994-01-01

    A technique for determining wind tunnel wall effects for complex models using the low-order, three dimensional panel method PMARC (Panel Method Ames Research Center) has been developed. Initial validation of the technique was performed using lift-coefficient data in the linear lift range from tests of a large-scale STOVL fighter model in the National Full-Scale Aerodynamics Complex (NFAC) facility. The data from these tests served as an ideal database for validating the technique because the same model was tested in two wind tunnel test sections with widely different dimensions. The lift-coefficient data obtained for the same model configuration in the two test sections were different, indicating a significant influence of the presence of the tunnel walls and mounting hardware on the lift coefficient in at least one of the two test sections. The wind tunnel wall effects were computed using PMARC and then subtracted from the measured data to yield corrected lift-coefficient versus angle-of-attack curves. The corrected lift-coefficient curves from the two wind tunnel test sections matched very well. Detailed pressure distributions computed by PMARC on the wing lower surface helped identify the source of large strut interference effects in one of the wind tunnel test sections. Extension of the technique to analysis of wind tunnel wall effects on the lift coefficient in the nonlinear lift range and on drag coefficient will require the addition of boundary-layer and separated-flow models to PMARC.

  11. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-11-13

    Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.

  12. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

    Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.

  13. Matching-to-sample by an echolocating dolphin (Tursiops truncatus).

    PubMed

    Roitblat, H L; Penner, R H; Nachtigall, P E

    1990-01-01

    An adult male dolphin was trained to perform a three-alternative delayed matching-to-sample task while wearing eyecups to occlude its vision. Sample and comparison stimuli consisted of a small and a large PVC plastic tube, a water-filled stainless steel sphere, and a solid aluminum cone. Stimuli were presented under water and the dolphin was allowed to identify the stimuli through echolocation. The echolocation clicks emitted by the dolphin to each sample and each comparison stimulus were recorded and analyzed. Over 48 sessions of testing, choice accuracy averaged 94.5% correct. This high level of accuracy was apparently achieved by varying the number of echolocation clicks emitted to various stimuli. Performance appeared to reflect a preexperimental stereotyped search pattern that dictated the order in which comparison items were examined and a complex sequential-sampling decision process. A model for the dolphin's decision-making processes is described.

  14. A flexible, interactive software tool for fitting the parameters of neuronal models.

    PubMed

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
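    The loop such a tool automates looks roughly like this (a toy sketch with a passive RC membrane, not Optimizer's code): simulate with candidate parameters, score against the target trace, and let an optimizer iterate.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dt, T, I_step = 0.1, 100.0, 0.1            # ms, ms, nA
    t = np.arange(0.0, T, dt)

    def rc_response(params):
        """Voltage step response of a passive membrane: R (MOhm), tau (ms)."""
        R, tau = params
        return -70.0 + R * I_step * (1.0 - np.exp(-t / tau))

    # Target trace: the "true" model plus recording noise.
    target = rc_response([150.0, 20.0]) + np.random.default_rng(1).normal(0.0, 0.2, t.size)

    cost = lambda p: np.mean((rc_response(p) - target) ** 2)   # MSE cost function
    fit = minimize(cost, x0=[100.0, 10.0], method="Nelder-Mead")
    print("fitted R, tau:", fit.x)
    ```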

  15. A flexible, interactive software tool for fitting the parameters of neuronal models

    PubMed Central

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID:25071540

  16. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  17. Identification of Biokinetic Models Using the Concept of Extents.

    PubMed

    Mašić, Alma; Srinivasan, Sriniketh; Billeter, Julien; Bonvin, Dominique; Villez, Kris

    2017-07-05

    The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.

  18. Coarse-graining of proteins based on elastic network models

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Voth, Gregory A.

    2013-08-01

    To simulate molecular processes on biologically relevant length- and timescales, coarse-grained (CG) models of biomolecular systems with tens to even hundreds of residues per CG site are required. One possible way to build such models is explored in this article: an elastic network model (ENM) is employed to define the CG variables. Free energy surfaces are approximated by Taylor series, with the coefficients found by force-matching. CG potentials are shown to undergo renormalization due to roughness of the energy landscape and smoothing of it under coarse-graining. In the case study of hen egg-white lysozyme, the entropy factor is shown to be of critical importance for maintaining the native structure, and a relationship between the proposed ENM-mode-based CG models and traditional CG-bead-based models is discussed. The proposed approach uncovers the renormalizable character of CG models and offers new opportunities for automated and computationally efficient studies of complex free energy surfaces.

  19. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, was evaluated. For this aim, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km²). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that streamflow prediction can be improved through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but this requires a careful evaluation of the observation error and of the re-scaling technique set-up used.
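    Minimal NumPy sketches of the three re-scaling techniques named above (formulations vary across studies; these are common ones):

    ```python
    import numpy as np

    def variance_matching(obs, model):
        """Match the observation mean and variance to the model climatology."""
        return model.mean() + (obs - obs.mean()) * model.std() / obs.std()

    def linear_rescaling(obs, model):
        """Least-squares linear regression of the model series onto the observations."""
        slope, intercept = np.polyfit(obs, model, 1)
        return intercept + slope * obs

    def cdf_matching(obs, model):
        """Map each observation through its empirical quantile onto the model CDF."""
        ranks = np.searchsorted(np.sort(obs), obs, side="right") / obs.size
        return np.quantile(model, ranks)
    ```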

  20. Use of a 3D Skull Model to Improve Accuracy in Cranioplasty for Autologous Flap Resorption in a 3-Year-Old Child.

    PubMed

    Maduri, Rodolfo; Viaroli, Edoardo; Levivier, Marc; Daniel, Roy T; Messerer, Mahmoud

    2017-01-01

    Cranioplasty is considered a simple reconstructive procedure, usually performed in a single stage. In some clinical conditions, such as in children with multifocal flap osteolysis, it can represent a surgical challenge. In these patients, the partially resorbed autologous flap should be removed and replaced with a pre-customized prosthesis that matches the expected bone defect exactly. We describe the technique used for a navigated cranioplasty in a 3-year-old child with multifocal autologous flap osteolysis. We decided to perform a cranioplasty using a custom-made hydroxyapatite porous ceramic flap. The prosthesis was produced with an epoxy resin 3D skull model of the patient, which included a removable flap corresponding to the planned cranioplasty. Preoperatively, a CT scan of the 3D skull model was performed without the removable flap. The CT scan images of the 3D skull model were merged with the preoperative 3D CT scan of the patient and navigated during the cranioplasty to define the cranioplasty margins with precision. After removal of the autologous resorbed flap, the hydroxyapatite prosthesis matched the skull defect perfectly. The anatomical result was excellent. Thus, the implementation of cranioplasty with image-merge navigation of a 3D skull model may improve cranioplasty accuracy, allowing precise anatomic reconstruction in complex skull defect cases. © 2017 S. Karger AG, Basel.

  1. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.

  2. PHAT+MaNGA: Using resolved stellar populations to improve the recovery of star formation histories from galaxy spectra

    NASA Astrophysics Data System (ADS)

    Byler, Nell

    2017-08-01

    Stellar Population Synthesis (SPS) models are routinely used to interpret extragalactic observations at all redshifts. Currently, the dominant source of uncertainty in SPS modeling lies in the degeneracies associated with synthesizing and fitting complex stellar populations to observed galaxy spectra. To remedy this, we propose an empirical calibration of SPS models using resolved stellar population observations from Hubble Space Telescope (HST) to constrain the stellar masses, ages, and star formation histories (SFHs) in regions matched to 2D spectroscopic observations from MaNGA. We will take advantage of the state of the art observations from the Panchromatic Hubble Andromeda Treasury (PHAT), which maps the dust content, history of chemical enrichment, and history of star formation across the disk of M31 in exquisite detail. Recently, we have coupled these observations with an unprecedented, spatially-resolved suite of IFU observations from MaNGA. With these two comprehensive data sets we can use the true underlying stellar properties from PHAT to properly interpret the aperture-matched integrated spectra from MaNGA. Our MaNGA observations target 20 regions within the PHAT footprint that fully sample the available range in metallicity, SFR, dust content, and stellar density. This transformative dataset will establish a comprehensive link between resolved stellar populations and the inferred properties of unresolved stellar populations across astrophysically important environments. The net data product will be a library of galaxy spectra matched to the true underlying stellar properties, a comparison set that has lasting legacy value for the extragalactic community.

  3. Developing Conceptions of Authority and Contract across the Lifespan: Two Perspectives.

    ERIC Educational Resources Information Center

    Dawson, Theo L.; Gabrielian, Sonya

    2003-01-01

    Compares concepts defining Kohlbergian stages of moral development with those associated with orders of hierarchical complexity determined with a generalized content-independent stage-scoring system. Finds that Kohlberg's sequence generally matches that identified with the scoring system and that contract and authority concepts match the concepts…

  4. High-performance lighting evaluated by photobiological parameters.

    PubMed

    Rebec, Katja Malovrh; Gunde, Marta Klanjšek

    2014-08-10

    The human reception of light includes image-forming and non-image-forming effects, which are triggered by the spectral distribution and intensity of light. Ideal lighting is similar to daylight, which can be evaluated by spectral or chromaticity match. LED-based and CFL-based lighting were analyzed here, designed according to spectral match and chromaticity match, respectively. The photobiological effects were expressed by effectiveness for blue light hazard, cirtopic activity, and photopic vision. A good spectral match provides light whose effects are more similar to daylight than those obtained by the chromaticity match. The new parameters are useful for better evaluation of complex human responses caused by lighting.

  5. Role of design complexity in technology improvement.

    PubMed

    McNerney, James; Farmer, J Doyne; Redner, Sidney; Trancik, Jessika E

    2011-05-31

    We study a simple model for the evolution of the cost (or more generally the performance) of a technology or production process. The technology can be decomposed into n components, each of which interacts with a cluster of d - 1 other components. Innovation occurs through a series of trial-and-error events, each of which consists of randomly changing the cost of each component in a cluster, and accepting the changes only if the total cost of the cluster is lowered. We show that the relationship between the cost of the whole technology and the number of innovation attempts is asymptotically a power law, matching the functional form often observed for empirical data. The exponent α of the power law depends on the intrinsic difficulty of finding better components, and on what we term the design complexity: the more complex the design, the slower the rate of improvement. Letting d as defined above be the connectivity, in the special case in which the connectivity is constant, the design complexity is simply the connectivity. When the connectivity varies, bottlenecks can arise in which a few components limit progress. In this case the design complexity depends on the details of the design. The number of bottlenecks also determines whether progress is steady, or whether there are periods of stasis punctuated by occasional large changes. Our model connects the engineering properties of a design to historical studies of technology improvement.
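    The model is simple enough to simulate directly. A minimal sketch (the ring cluster topology is an illustrative choice for constant connectivity d):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, trials = 100, 3, 200_000
    cost = rng.random(n)
    clusters = np.array([np.arange(i, i + d) % n for i in range(n)])  # each component + d-1 neighbours

    history = np.empty(trials)
    for t in range(trials):
        c = clusters[rng.integers(n)]        # pick a cluster at random
        proposal = rng.random(d)             # redraw the costs of its components
        if proposal.sum() < cost[c].sum():   # accept only if the cluster cost drops
            cost[c] = proposal
        history[t] = cost.sum()
    # A log-log plot of history vs trial number flattens into the predicted power
    # law, with a shallower slope (slower improvement) for larger d.
    ```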

  6. SEQUOIA: significance enhanced network querying through context-sensitive random walk and minimization of network conductance.

    PubMed

    Jeong, Hyundoo; Yoon, Byung-Jun

    2017-03-14

    Network querying algorithms provide computational means to identify conserved network modules in large-scale biological networks that are similar to known functional modules, such as pathways or molecular complexes. Two main challenges for network querying algorithms are the high computational complexity of detecting potential isomorphism between the query and the target graphs and ensuring the biological significance of the query results. In this paper, we propose SEQUOIA, a novel network querying algorithm that effectively addresses these issues by utilizing a context-sensitive random walk (CSRW) model for network comparison and minimizing the network conductance of potential matches in the target network. The CSRW model, inspired by the pair hidden Markov model (pair-HMM) that has been widely used for sequence comparison and alignment, can accurately assess the node-to-node correspondence between different graphs by accounting for node insertions and deletions. The proposed algorithm identifies high-scoring network regions based on the CSRW scores, which are subsequently extended by maximally reducing the network conductance of the identified subnetworks. Performance assessment based on real PPI networks and known molecular complexes show that SEQUOIA outperforms existing methods and clearly enhances the biological significance of the query results. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/SEQUOIA .

  7. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at the low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem. In geometric SIFT, area constraints help validate the candidate matches and decrease the search complexity. To further improve matching efficiency, the proposed method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinates of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589
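    One way to read "geometric constraints" concretely (a hedged sketch; the paper's formulation may differ): fit a global affine transform to the coarse matches, then reject fine candidates that land far from where the transform predicts.

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform from Nx2 coarse match coordinates."""
        A = np.hstack([src, np.ones((len(src), 1))])
        params, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return params                                  # 3x2 affine matrix

    def within_constraint(affine, pt_src, pt_dst, tol=20.0):
        """Keep a fine candidate only if it lies within tol pixels of the prediction."""
        pred = np.append(pt_src, 1.0) @ affine
        return np.linalg.norm(pred - pt_dst) <= tol
    ```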

  8. Synthesis and Characterization of Electroresponsive Materials with Applications In: Part I. Second Harmonic Generation. Part II. Organic-Lanthanide Ion Complexes for Electroluminescence and Optical Amplifiers.

    NASA Astrophysics Data System (ADS)

    Claude, Charles

    1995-01-01

    Materials for optical waveguides were developed from two different approaches, inorganic-organic composites and soft gel polymers. Inorganic-organic composites were developed from alkoxysilane and organically modified silanes based on nonlinear optical chromophores. Organically modified silanes based on N-((3′-trialkoxysilyl)propyl)-4-nitroaniline were synthesized and sol-gelled with trimethoxysilane. After a densification process at 190 °C with a corona discharge, the second harmonic of the film was measured with a Nd:YAG laser at a fundamental wavelength of 1064 nm, giving d₃₃ = 13 pm/V. The decay of the second harmonic was expressed by a stretched bi-exponential equation. The decay time (τ₂) was equal to 3374 hours, comparable to nonlinear optical systems based on epoxy/Disperse Orange 1. The processing temperature of the organically modified silane was limited to 200 °C due to the decomposition of the organic chromophore. Soft gel polymers were synthesized and characterized for the development of optical waveguides with dc-electrical-field-assisted phase matching. Polymers based on 4-nitroaniline-terminated poly(ethylene oxide-co-propylene oxide) were shown to exhibit second harmonic generation that was optically phase-matched in an electrical field. The optical signals were stable and reproducible. Siloxane polymers modified with 1-mercapto-4-nitrobenzene and 1-mercapto-4-methylsulfonylstilbene nonlinear optical chromophores were synthesized. The physical and the linear and nonlinear optical properties of the polymers were characterized. Waveguides were developed from the polymers which were optically phase-matched and had an efficiency of 8.1%. The siloxane polymers exhibited optical phase-matching in an applied electrical field and can be used with a semiconductor laser. Organic lanthanide ion complexes for electroluminescence and optical amplifiers were synthesized and characterized. The complexes were characterized for their thermal and oxidative stability and for their optical properties. Organic-europium ion complexes based on derivatives of 2-benzoyl benzoate are stable to a temperature 70 °C higher than the europium β-diketonate complexes. The optical and fluorescence properties of the organic-europium ion complexes were characterized. The methoxy and the t-butyl derivatives of the europium 2-benzoylbenzoate complexes exhibited fluorescence quantum efficiencies that were comparable to europium tris(thenoyltrifluoroacetonate) in methylene chloride, but the extinction coefficient was two-thirds that of the europium thenoyltrifluoroacetonate complexes. The last complex characterized was the europium bis(diphenylphosphino)imine complex. The complex exhibited thermal stability to 550 °C under nitrogen.

  9. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
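    The comparison at the heart of such a framework can be sketched for categorical fields with a Dirichlet-multinomial model (an illustrative stand-in, not the paper's exact model): the match score is the Bayes factor for "one model generated both fields" against "two separate models".

    ```python
    import numpy as np
    from scipy.special import gammaln

    def log_evidence(counts, alpha=1.0):
        """Marginal likelihood of category counts under a symmetric Dirichlet prior."""
        k, n = counts.size, counts.sum()
        return (gammaln(k * alpha) - gammaln(n + k * alpha)
                + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

    def match_score(counts_a, counts_b):
        """Log Bayes factor for 'same generating model' vs 'different models'.
        counts_a and counts_b are count vectors over a shared category vocabulary."""
        return log_evidence(counts_a + counts_b) - (log_evidence(counts_a)
                                                    + log_evidence(counts_b))
    ```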

  10. Urban Growth Modeling Using AN Artificial Neural Network a Case Study of Sanandaj City, Iran

    NASA Astrophysics Data System (ADS)

    Mohammady, S.; Delavar, M. R.; Pahlavani, P.

    2014-10-01

    Land use activity is a major issue and challenge for town and country planners. Modelling and managing urban growth is a complex problem: cities are now recognized as complex, non-linear and dynamic process systems, and the design of a system that can handle these complexities is a challenging prospect. Local governments that implement urban growth models need to estimate the amount of urban land required in the future, given the anticipated growth of housing, business, recreation and other urban uses within the boundary. Inappropriate urban development has many negative implications, such as increased traffic and demand for mobility, reduced landscape attractiveness, land use fragmentation, loss of biodiversity and alterations of the hydrological cycle. The aim of this study is to use an Artificial Neural Network (ANN) as a powerful tool for simulating urban growth patterns. Our study area is Sanandaj city, located in the west of Iran. Landsat imagery acquired in 2000 and 2006 is used. The dataset includes distance to principal roads, distance to residential areas, elevation, slope, distance to green spaces and distance to region centers. In this study an appropriate methodology for urban growth modelling using satellite remotely sensed data is presented and evaluated. Percent Correct Match (PCM) and Figure of Merit were used to evaluate the ANN results.
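    The two evaluation metrics are simple to state (a common formulation; the study's exact definitions may differ), for binary urban/non-urban maps of simulated and observed change:

    ```python
    import numpy as np

    def percent_correct_match(simulated, observed):
        """Share of cells where the simulated map agrees with the observed map."""
        return 100.0 * np.mean(simulated == observed)

    def figure_of_merit(sim_change, obs_change):
        """Hits over hits + misses + false alarms, on boolean change maps."""
        hits = np.sum(sim_change & obs_change)
        misses = np.sum(~sim_change & obs_change)
        false_alarms = np.sum(sim_change & ~obs_change)
        return hits / (hits + misses + false_alarms)
    ```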

  11. Evaluating Treatment and Generalization Patterns of Two Theoretically Motivated Sentence Comprehension Therapies.

    PubMed

    Des Roches, Carrie A; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David; Kiran, Swathi

    2016-12-01

    The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type.

  12. Laboratory simulation of charge exchange-produced X-ray emission from comets.

    PubMed

    Beiersdorfer, P; Boyce, K R; Brown, G V; Chen, H; Kahn, S M; Kelley, R L; May, M; Olson, R E; Porter, F S; Stahle, C K; Tillotson, W A

    2003-06-06

    In laboratory experiments using the engineering spare microcalorimeter detector from the ASTRO-E satellite mission, we recorded the x-ray emission of highly charged ions of carbon, nitrogen, and oxygen, which simulates charge exchange reactions between heavy ions in the solar wind and neutral gases in cometary comae. The spectra are complex and do not readily match predictions. We developed a charge exchange emission model that successfully reproduces the soft x-ray spectrum of comet Linear C/1999 S4, observed with the Chandra X-ray Observatory.

  13. Essential Requirements for Robust Signaling in Hfq Dependent Small RNA Networks

    PubMed Central

    Adamson, David N.; Lim, Han N.

    2011-01-01

    Bacteria possess networks of small RNAs (sRNAs) that are important for modulating gene expression. At the center of many of these sRNA networks is the Hfq protein. Hfq's role is to quickly match cognate sRNAs and target mRNAs from among a large number of possible combinations and anneal them to form duplexes. Here we show using a kinetic model that Hfq can efficiently and robustly achieve this difficult task by minimizing the sequestration of sRNAs and target mRNAs in Hfq complexes. This sequestration can be reduced by two non-mutually exclusive kinetic mechanisms. The first mechanism involves heterotropic cooperativity (where sRNA and target mRNA binding to Hfq is influenced by other RNAs bound to Hfq); this cooperativity can selectively decrease singly-bound Hfq complexes and ternary complexes with non-cognate sRNA-target mRNA pairs while increasing cognate ternary complexes. The second mechanism relies on frequent RNA dissociation enabling the rapid cycling of sRNAs and target mRNAs among different Hfq complexes; this increases the probability the cognate ternary complex forms before the sRNAs and target mRNAs degrade. We further demonstrate that the performance of sRNAs in isolation is not predictive of their performance within a network. These findings highlight the importance of experimentally characterizing duplex formation in physiologically relevant contexts with multiple RNAs competing for Hfq. The model will provide a valuable framework for guiding and interpreting these experiments. PMID:21876666

  14. Model improvements to simulate charging in SEM

    NASA Astrophysics Data System (ADS)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  15. The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.

    PubMed

    Storace, Marco; Linaro, Daniele; de Lange, Enno

    2008-09-01

    This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination between simulations and numerical continuations is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams are then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural response and as guidelines to predict the behavior of the model as well as its circuit implementation as a function of parameters. (c) 2008 American Institute of Physics.
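    The model itself is compact. A sketch with the conventional parameter values (b and I are the two bifurcation parameters typically varied in such analyses):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    a, c, d, s, x_r = 1.0, 1.0, 5.0, 4.0, -8.0 / 5.0
    r, b, I = 0.01, 3.0, 2.0                      # slow timescale; bifurcation parameters

    def hindmarsh_rose(t, u):
        x, y, z = u
        return [y - a * x**3 + b * x**2 - z + I,  # membrane potential
                c - d * x**2 - y,                 # fast recovery variable
                r * (s * (x - x_r) - z)]          # slow adaptation current

    sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, 0.0, 2.0], max_step=0.1)
    # Sweeping (b, I) and classifying the attractor (quiescent, spiking, bursting,
    # chaotic) reproduces the regions of the bifurcation diagram.
    ```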

  16. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches employ an identification technique to determine structural characteristics from measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New identification methods were developed to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.

  17. Spectra, energy levels, and energy transition of lanthanide complexes with cinnamic acid and its derivatives

    NASA Astrophysics Data System (ADS)

    Zhou, Kaining; Feng, Zhongshan; Shen, Jun; Wu, Bing; Luo, Xiaobing; Jiang, Sha; Li, Li; Zhou, Xianju

    2016-04-01

    High resolution spectra and luminescent lifetimes of six europium(III)-cinnamic acid complexes {[Eu2L6(DMF)(H2O)]·nDMF·H2O}m (L = cinnamic acid I, 4-methyl-cinnamic acid II, 4-chloro-cinnamic acid III, 4-methoxy-cinnamic acid IV, 4-hydroxy-cinnamic acid V, 4-nitro-cinnamic acid VI; DMF = N,N-dimethylformamide, C3H7NO) were recorded from 8 K to room temperature. The energy levels of Eu3+ in these six complexes are obtained from the spectral analysis. It is found that the energy levels of the central Eu3+ ions are influenced by the nephelauxetic effect, while the triplet state of the ligand is lowered by the p-π conjugation effect of the para-substituted functional groups. The best energy matching between the ligand triplet state and the excited state of the central ion is found in complex I, while the other complexes show poorer matching because the gap between the triplet state and the 5D0 level contracts.

  18. A Computer Aided Broad Band Impedance Matching Technique Using a Comparison Reflectometer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gordy, R. S.

    1972-01-01

    An improved broadband impedance matching technique was developed. The technique is capable of resolving points in the waveguide which generate reflected energy. A version of the comparison reflectometer was developed and fabricated to determine the mean amplitude of the reflection coefficient excited at points in the guide as a function of distance, and the complex reflection coefficient of a specific discontinuity in the guide as a function of frequency. An impedance matching computer program was developed which is capable of matching each disturbance independently of other reflections in the guide. The characteristics of four standard matching elements were compiled, and their associated curves of reflection coefficient and shunt susceptance as a function of frequency are presented. It is concluded that an economical, fast, and reliable impedance matching technique has been established which can provide broadband impedance matches.
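
    The core quantities such a reflectometer measures can be sketched directly; here is a minimal illustration of a complex reflection coefficient and its phase rotation with distance along a lossless guide (impedances and wavelength are assumed values, not the thesis' data):

```python
# Complex reflection coefficient of a discontinuity and its phase rotation
# with distance; tracking this rotation versus frequency is what locates
# reflection points in the guide. Values are illustrative assumptions.
import numpy as np

Z0 = 50.0                        # characteristic impedance (ohms), assumed
ZL = 75.0 - 25.0j                # hypothetical discontinuity impedance

gamma_L = (ZL - Z0) / (ZL + Z0)  # complex reflection coefficient at the fault
vswr = (1 + abs(gamma_L)) / (1 - abs(gamma_L))
print(f"|Gamma| = {abs(gamma_L):.3f}, VSWR = {vswr:.2f}")

wavelength = 0.1                 # assumed guide wavelength, m
beta = 2 * np.pi / wavelength    # phase constant
for d in (0.0, 0.0125, 0.025):   # referred back a distance d along the guide
    print(d, gamma_L * np.exp(-2j * beta * d))
```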

  19. Test Input Generation for Red-Black Trees using Abstraction

    NASA Technical Reports Server (NTRS)

    Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek

    2005-01-01

    We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
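
    A minimal sketch of the state-matching idea (the toy container, the size abstraction, and the driver are stand-ins, not the Java PathFinder implementation used in the paper):

```python
# State-matching test generation sketch: breadth-first enumeration of method
# call sequences, pruning any sequence whose (abstracted) resulting state has
# been seen before.
from collections import deque

def gen_tests(init, methods, abstract, max_len):
    seen = {abstract(init)}
    queue = deque([(init, [])])
    tests = []
    while queue:
        state, seq = queue.popleft()
        if len(seq) == max_len:
            continue
        for name, op in methods.items():
            nxt = op(state)
            key = abstract(nxt)        # lossy mapping -> under-approximation
            if key not in seen:        # state matching prunes redundant tests
                seen.add(key)
                tests.append(seq + [name])
                queue.append((nxt, seq + [name]))
    return tests

# Toy use: a set stand-in; abstracting a state to its size collapses states
# that differ only in element values (a lossy abstraction mapping).
methods = {f"add({v})": (lambda v: lambda s: s | {v})(v) for v in (1, 2, 3)}
print(gen_tests(frozenset(), methods, abstract=len, max_len=3))
```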

  20. Sudden cardiac death in adults with congenital heart disease: does QRS-complex fragmentation discriminate in structurally abnormal hearts?

    PubMed

    Vehmeijer, Jim T; Koyak, Zeliha; Bokma, Jouke P; Budts, Werner; Harris, Louise; Mulder, Barbara J M; de Groot, Joris R

    2018-06-01

    Sudden cardiac death (SCD) causes a large portion of all mortality in adult congenital heart disease (ACHD) patients. However, identification of high-risk patients remains challenging. Fragmented QRS-complexes (fQRS) are a marker for SCD in patients with acquired heart disease, but data in ACHD patients are lacking. We therefore aim to evaluate the prognostic value of fQRS for SCD in ACHD patients. From a multicentre cohort of 25 790 ACHD patients, we included tachyarrhythmic SCD cases (n = 147), and controls (n = 266) matched by age, gender, congenital defect and (surgical) intervention. fQRS was defined as ≥1 discontinuous deflection in narrow QRS-complexes, and ≥2 in wide QRS-complexes (>120 ms), in two contiguous ECG leads. We calculated odds ratios (OR) using univariable and multivariable conditional logistic regression models correcting for impaired systemic ventricular function, heart failure and QRS duration >120 ms. ECGs of 147 SCD cases (65% male, median age of death 34 years) and of 266 controls were assessed. fQRS was present in 51% of cases and 34% of controls (OR 2.0, P = 0.003). In multivariable analysis, fQRS was independently associated with SCD (OR 1.9, P = 0.01). The most common diagnosis among SCD cases was tetralogy of Fallot (ToF, 34 cases). In ToF, fQRS was present in 71% of cases vs. 43% of controls (OR for SCD 2.8, P = 0.03). fQRS was independently associated with SCD in ACHD patients in a cohort of SCD patients and matched controls. fQRS may therefore contribute to decision-making when evaluating ACHD patients for primary prevention of SCD.

  1. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM applications. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and has higher matching accuracy, even for panoramic images with obvious distortions. However, the algorithm is very time-consuming because of its complex operations, and it does not perform well for some indoor scenes with poor lighting or little texture. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from affine transformations of both tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected back to the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.
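
    A sketch of the tilt-only simplification in OpenCV, pooling SIFT matches over a few simulated tilts of one already re-projected perspective image (the tilt set, blur constant, and ratio-test threshold are conventional ASIFT/SIFT choices, not values from the paper):

```python
# Tilt-only ASIFT-style matching sketch: simulate a few camera tilts of one
# perspective image, then pool SIFT matches across all tilted versions.
import cv2
import numpy as np

def tilt_image(img, t):
    """ASIFT-style tilt simulation: directional blur, then subsample in x."""
    if t == 1.0:
        return img
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=0.8 * np.sqrt(t * t - 1.0))
    return cv2.resize(blurred, None, fx=1.0 / t, fy=1.0,
                      interpolation=cv2.INTER_LINEAR)

def match_tilted(img1, img2, tilts=(1.0, 2 ** 0.5, 2.0)):
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    kp2, des2 = sift.detectAndCompute(img2, None)
    good = []
    for t in tilts:
        kp1, des1 = sift.detectAndCompute(tilt_image(img1, t), None)
        for pair in matcher.knnMatch(des1, des2, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])           # Lowe ratio test
    return good
```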

  2. Velocity Field of the McMurdo Shear Zone from Annual Three-Dimensional Ground Penetrating Radar Imaging and Crevasse Matching

    NASA Astrophysics Data System (ADS)

    Ray, L.; Jordan, M.; Arcone, S. A.; Kaluzienski, L. M.; Koons, P. O.; Lever, J.; Walker, B.; Hamilton, G. S.

    2017-12-01

    The McMurdo Shear Zone (MSZ) is a narrow, intensely crevassed strip tens of km long separating the Ross and McMurdo ice shelves (RIS and MIS) and an important pinning feature for the RIS. We derive local velocity fields within the MSZ from two consecutive annual ground penetrating radar (GPR) datasets that reveal complex firn and marine ice crevassing; no englacial features are evident. The datasets were acquired in 2014 and 2015 using robot-towed 400 MHz and 200 MHz GPR over a 5 km x 5.7 km grid. 100 west-to-east transects at 50 m spacing provide three-dimensional maps that reveal the length of many firn crevasses and their year-to-year structural evolution. Hand labeling of crevasse cross sections near the MSZ western and eastern boundaries reveals matching firn and marine ice crevasses, and more complex and chaotic features between these boundaries. By matching crevasse features from year to year, both on the eastern and western boundaries and within the chaotic region, marine ice crevasses along the western and eastern boundaries are shown to align directly with firn crevasses, and the local velocity field is estimated and compared with data from strain rate surveys and remote sensing. While remote sensing provides global velocity fields, crevasse matching indicates greater local complexity attributed to faulting, folding, and rotation.

  3. The Ultramafic Complex of Reinfjord: from the Magnetic Petrology to the Interpretation of the Magnetic Anomalies

    NASA Astrophysics Data System (ADS)

    Pastore, Zeudia; McEnroe, Suzanne; Church, Nathan; Fichler, Christine; ter Maat, Geertje W.; Fumagalli, Patrizia; Oda, Hirokuni; Larsen, Rune B.

    2017-04-01

    A 3D model of the geometry of the Reinfjord complex, integrating geological and petrophysical data with high resolution aeromagnetic, ground magnetic and gravity data, is developed. The Reinfjord ultramafic complex in northern Norway is one of the major ultramafic complexes of the Neoproterozoic Seiland Igneous Province (SIP). This province, now embedded in the Caledonian orogen, was emplaced deep in the crust (30 km depth) and is believed to represent a section of the deep plumbing system of a large igneous province. The Reinfjord complex consists of three magmatic series formed during multiple recharging events, resulting in a cylindrically zoned complex with a slightly younger dunite core surrounded by wehrlite and lherzolite units. Gabbros and gneiss form the host rock. The ultramafic complex has several distinct magnetic anomalies which do not match the mapped lithological boundaries but are correlated with changes in magnetic susceptibilities. In particular, the deviating densities and magnetic susceptibilities at the northern side of the complex are interpreted to be due to serpentinization. Detailed studies of magnetic anomalies and of the magnetic properties of samples can provide a powerful tool for mapping petrological changes. Samples can have a wide range of magnetic properties depending on composition, amount of ferromagnetic minerals, grain sizes and microstructures. Later geological processes such as serpentinization can alter this signal. Therefore, a micro-scale study of magnetic anomalies at the thin section scale was carried out to better understand the link between the magnetic petrology and the magnetic anomalies. Serpentinization can significantly enhance the magnetic properties and therefore change the nature of the magnetic anomaly. The detailed gravity and magnetic model presented here shows the subsurface structure of the ultramafic complex, refining the geological interpretation of the magnetic sources within it and of the local effects of serpentinization.

  4. Longitudinal wave propagation in multi cylindrical viscoelastic matching layers of airborne ultrasonic transducer: new method to consider the matching layer's diameter (frequency <100 kHz).

    PubMed

    Saffar, Saber; Abdullah, Amir

    2013-08-01

    Wave propagation in viscoelastic disk layers is encountered in many applications, including studies of airborne ultrasonic transducers. For viscoelastic materials, both material and geometric dispersion are possible when the diameter of the matching layer is of the same order as the wavelength. Lateral motions of the matching layer(s) that result from the Poisson effect are accounted for by using a new concept called the "effective density". A new wave equation is derived for both metallic and non-metallic (polymeric) materials usually employed for the matching layers of airborne ultrasonic transducers. The material properties are modeled using the Kelvin model for metals and the Standard Linear Solid model for non-metallic (polymeric) matching layers. The material model chosen for the matching layers influences the magnitude and trend of the variation in speed ratio: a 60% reduction in speed ratio is observed with the Kelvin model for aluminum with a diameter of 80 mm at 100 kHz, while for polypropylene of similar diameter under the Standard Linear Solid model, the speed ratio increases to twice its value at 15 kHz and then falls by 70% at 67 kHz. The new wave theory simplifies to the one-dimensional solution for waves in metallic or polymeric matching layers if the Poisson ratio is set to zero. The predictions simplify to Love's equation for stress waves in elastic disks when the loss term is removed from the equations for both models. The new wave theory is then employed to determine the airborne ultrasonic matching layers that maximize the energy transmission to the air. The optimal matching layers are determined using genetic algorithm theory for 1, 2 and 3 airborne matching layers. It is shown that the 1-D equation is inadequate at frequencies below 100 kHz, and that the effect of the diameter of the matching layers must be considered when determining the acoustic impedances of the matching layers in the design of airborne ultrasonic transducers. Copyright © 2013 Elsevier B.V. All rights reserved.
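
    For contrast, the classical 1-D description that the paper argues breaks down below 100 kHz can be sketched with the standard acoustic transfer-matrix method (all material values are illustrative assumptions):

```python
# 1-D transfer-matrix estimate of intensity transmission from a transducer
# face into air through matching layers (no Poisson/diameter effect).
import numpy as np

def intensity_transmission(Z_in, Z_out, layers, f):
    """layers: iterable of (impedance [rayl], thickness [m], speed [m/s])."""
    M = np.eye(2, dtype=complex)
    for Z, d, c in layers:
        kd = 2 * np.pi * f * d / c
        M = M @ np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                          [1j * np.sin(kd) / Z, np.cos(kd)]])
    denom = M[0, 0] + M[0, 1] / Z_out + Z_in * (M[1, 0] + M[1, 1] / Z_out)
    t = 2.0 / denom                       # pressure transmission coefficient
    return abs(t) ** 2 * Z_in / Z_out     # intensity transmission into air

Z_pzt, Z_air = 34e6, 415.0     # piezoceramic and air impedances, rayl
f0 = 50e3
Zm = np.sqrt(Z_pzt * Z_air)    # ideal quarter-wave layer impedance
c_m = 1000.0                   # hypothetical layer sound speed, m/s
print(intensity_transmission(Z_pzt, Z_air, [(Zm, c_m / (4 * f0), c_m)], f0))
# -> ~1.0 at the design frequency for the ideal quarter-wave layer
```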

  5. Experimental validation of numerical simulations on a cerebral aneurysm phantom model

    PubMed Central

    Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique

    2012-01-01

    The treatment of cerebral aneurysms, found in roughly 5% of the population and associated, in case of rupture, with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and to the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated, unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, if it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in a phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride was specifically adapted for the present investigation. It shows physical flow properties similar to real blood and has a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876

  6. Computational model of in vivo human energy metabolism during semi-starvation and re-feeding

    PubMed Central

    Hall, Kevin D.

    2008-01-01

    Changes of body weight and composition are the result of complex interactions among metabolic fluxes contributing to macronutrient balances. To better understand these interactions, a mathematical model was constructed that used the measured dietary macronutrient intake during semi-starvation and re-feeding as model inputs and computed whole-body energy expenditure, de novo lipogenesis, gluconeogenesis, as well as turnover and oxidation of carbohydrate, fat and protein. Published in vivo human data provided the basis for the model components, which were integrated by fitting a few unknown parameters to the classic Minnesota human starvation experiment. The model simulated the measured body weight and fat mass changes during semi-starvation and re-feeding and predicted the unmeasured metabolic fluxes underlying the body composition changes. The resting metabolic rate matched the experimental measurements and required a model of adaptive thermogenesis. Re-feeding caused an elevation of de novo lipogenesis which, along with increased fat intake, resulted in a rapid repletion and overshoot of body fat. By continuing the computer simulation with the pre-starvation diet and physical activity, the original body weight and composition were eventually restored, but body fat mass was predicted to take more than one additional year to return to within 5% of its original value. The model was validated by simulating a recently published short-term caloric restriction experiment without changing the model parameters. The predicted changes of body weight, fat mass, resting metabolic rate, and nitrogen balance matched the experimental measurements, thereby providing support for the validity of the model. PMID:16449298
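
    A deliberately reduced sketch of this class of energy-balance model: a one-compartment forward-Euler simulation with hypothetical parameters (the published model tracks multiple macronutrient fluxes rather than a single pool):

```python
# One-compartment energy-balance sketch. rho, k_rmr, rmr0 and the intake
# values are hypothetical round numbers, chosen only to show the dynamics.
import numpy as np

rho = 32.2e3      # effective energy density of weight change, kJ/kg (assumed)
rmr0 = 6000.0     # expenditure at the 70 kg reference weight, kJ/day (assumed)
k_rmr = 100.0     # kJ/day per kg deviation: crude adaptive expenditure term

def simulate(weight0, intake_kj, days, dt=1.0):
    """Integrate dW/dt = (intake - expenditure) / rho with forward Euler."""
    w, traj = weight0, [weight0]
    for _ in range(int(days / dt)):
        expenditure = rmr0 + k_rmr * (w - 70.0)
        w += dt * (intake_kj - expenditure) / rho
        traj.append(w)
    return np.array(traj)

starved = simulate(70.0, intake_kj=4200.0, days=168)   # ~24 weeks underfeeding
refed = simulate(starved[-1], intake_kj=12000.0, days=84)
print(f"after semi-starvation: {starved[-1]:.1f} kg, "
      f"after re-feeding: {refed[-1]:.1f} kg")
```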

  7. Complex Solutions for Complex Needs: Towards Holistic and Collaborative Practice

    ERIC Educational Resources Information Center

    Beadle, Sally

    2009-01-01

    While the need for holistic health and social service responses is increasingly being articulated in Australia, the discussion is not always matched by improvements in service delivery. This project looked at one service setting where youth workers were encouraged to take a holistic approach to their clients' often-complex needs. Interviews with…

  8. Camouflage and Clutch Survival in Plovers and Terns

    NASA Astrophysics Data System (ADS)

    Stoddard, Mary Caswell; Kupán, Krisztina; Eyster, Harold N.; Rojas-Abreu, Wendoly; Cruz-López, Medardo; Serrano-Meneses, Martín Alejandro; Küpper, Clemens

    2016-09-01

    Animals achieve camouflage through a variety of mechanisms, of which background matching and disruptive coloration are likely the most common. Although many studies have investigated camouflage mechanisms using artificial stimuli and in lab experiments, less work has addressed camouflage in the wild. Here we examine egg camouflage in clutches laid by ground-nesting Snowy Plovers Charadrius nivosus and Least Terns Sternula antillarum breeding in mixed aggregations at Bahía de Ceuta, Sinaloa, Mexico. We obtained digital images of clutches laid by both species. We then calibrated the images and used custom computer software and edge detection algorithms to quantify measures related to three potential camouflage mechanisms: pattern complexity matching, disruptive effects and background color matching. Based on our image analyses, Snowy Plover clutches, in general, appeared to be more camouflaged than Least Tern clutches. Snowy Plover clutches also survived better than Least Tern clutches. Unexpectedly, variation in clutch survival was not explained by any measure of egg camouflage in either species. We conclude that measures of egg camouflage are poor predictors of clutch survival in this population. The behavior of the incubating parents may also affect clutch predation. Determining the significance of egg camouflage requires further testing using visual models and behavioral experiments.

  9. Camouflage and Clutch Survival in Plovers and Terns.

    PubMed

    Stoddard, Mary Caswell; Kupán, Krisztina; Eyster, Harold N; Rojas-Abreu, Wendoly; Cruz-López, Medardo; Serrano-Meneses, Martín Alejandro; Küpper, Clemens

    2016-09-12

    Animals achieve camouflage through a variety of mechanisms, of which background matching and disruptive coloration are likely the most common. Although many studies have investigated camouflage mechanisms using artificial stimuli and in lab experiments, less work has addressed camouflage in the wild. Here we examine egg camouflage in clutches laid by ground-nesting Snowy Plovers Charadrius nivosus and Least Terns Sternula antillarum breeding in mixed aggregations at Bahía de Ceuta, Sinaloa, Mexico. We obtained digital images of clutches laid by both species. We then calibrated the images and used custom computer software and edge detection algorithms to quantify measures related to three potential camouflage mechanisms: pattern complexity matching, disruptive effects and background color matching. Based on our image analyses, Snowy Plover clutches, in general, appeared to be more camouflaged than Least Tern clutches. Snowy Plover clutches also survived better than Least Tern clutches. Unexpectedly, variation in clutch survival was not explained by any measure of egg camouflage in either species. We conclude that measures of egg camouflage are poor predictors of clutch survival in this population. The behavior of the incubating parents may also affect clutch predation. Determining the significance of egg camouflage requires further testing using visual models and behavioral experiments.

  10. Camouflage and Clutch Survival in Plovers and Terns

    PubMed Central

    Stoddard, Mary Caswell; Kupán, Krisztina; Eyster, Harold N.; Rojas-Abreu, Wendoly; Cruz-López, Medardo; Serrano-Meneses, Martín Alejandro; Küpper, Clemens

    2016-01-01

    Animals achieve camouflage through a variety of mechanisms, of which background matching and disruptive coloration are likely the most common. Although many studies have investigated camouflage mechanisms using artificial stimuli and in lab experiments, less work has addressed camouflage in the wild. Here we examine egg camouflage in clutches laid by ground-nesting Snowy Plovers Charadrius nivosus and Least Terns Sternula antillarum breeding in mixed aggregations at Bahía de Ceuta, Sinaloa, Mexico. We obtained digital images of clutches laid by both species. We then calibrated the images and used custom computer software and edge detection algorithms to quantify measures related to three potential camouflage mechanisms: pattern complexity matching, disruptive effects and background color matching. Based on our image analyses, Snowy Plover clutches, in general, appeared to be more camouflaged than Least Tern clutches. Snowy Plover clutches also survived better than Least Tern clutches. Unexpectedly, variation in clutch survival was not explained by any measure of egg camouflage in either species. We conclude that measures of egg camouflage are poor predictors of clutch survival in this population. The behavior of the incubating parents may also affect clutch predation. Determining the significance of egg camouflage requires further testing using visual models and behavioral experiments. PMID:27616020

  11. Methodological Complications of Matching Designs under Real World Constraints: Lessons from a Study of Deeper Learning

    ERIC Educational Resources Information Center

    Zeiser, Kristina; Rickles, Jordan; Garet, Michael S.

    2014-01-01

    To help researchers understand potential issues one can encounter when conducting propensity matching studies in complex settings, this paper describes methodological complications faced when studying schools using deeper learning practices to improve college and career readiness. The study uses data from high schools located in six districts…

  12. Good Hope in Chaos: Beyond Matching to Complexity in Career Development

    ERIC Educational Resources Information Center

    Pryor, R. G. L.; Bright, J. E. H.

    2009-01-01

    The significance of both higher education and career counselling is outlined. The predominant matching paradigm for career development service delivery is described. Its implications for reinforcing the status quo in the South African community are identified and questioned. The Chaos Theory of Careers (CTC) is suggested as an alternative…

  13. Dynamic Temporal Processing of Nonspeech Acoustic Information by Children with Specific Language Impairment.

    ERIC Educational Resources Information Center

    Visto, Jane C.; And Others

    1996-01-01

    Ten children (ages 12-16) with specific language impairments (SLI) and controls matched for chronological or language age were tested with measures of complex sound localization involving the precedence effect phenomenon. SLI children exhibited tracking skills similar to language-age matched controls, indicating impairment in their ability to use…

  14. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
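
    In the Gaussian case the estimator admits a very short sketch: for x ~ N(0, K^-1) the empirical score matching loss is J(K) = 0.5·tr(K S K) - tr(K), with S the sample covariance, which is quadratic in K, so proximal gradient descent with soft thresholding of the off-diagonal entries follows the piecewise-linear l1 paths mentioned above (an illustration, not the authors' software; step size, penalty, and data are arbitrary):

```python
import numpy as np

def soft_threshold_offdiag(K, tau):
    T = np.sign(K) * np.maximum(np.abs(K) - tau, 0.0)
    np.fill_diagonal(T, np.diag(K))     # penalize off-diagonal entries only
    return T

def score_match_ggm(X, lam, n_iter=2000):
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    lr = 1.0 / np.linalg.norm(S, 2)     # step = 1/L for the quadratic part
    K = np.eye(p)
    for _ in range(n_iter):
        grad = 0.5 * (S @ K + K @ S) - np.eye(p)   # gradient of J(K)
        K = soft_threshold_offdiag(K - lr * grad, lr * lam)
        K = 0.5 * (K + K.T)             # keep the iterate symmetric
    return K

rng = np.random.default_rng(0)
K_true = np.array([[2.0, 0.6, 0.0], [0.6, 2.0, 0.0], [0.0, 0.0, 1.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(K_true), size=500)
print(np.round(score_match_ggm(X, lam=0.1), 2))    # sparse estimate of K_true
```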

  15. Solving the aerodynamics of fungal flight: How air viscosity slows spore motion

    PubMed Central

    Fischer, Mark W. F.; Stolze-Rybczynski, Jessica L.; Davis, Diana J.; Cui, Yunluan; Money, Nicholas P.

    2010-01-01

    Viscous drag causes the rapid deceleration of fungal spores after high-speed launches and limits discharge distance. Stokes' law posits a linear relationship between drag force and velocity. It provides an excellent fit to experimental measurements of the terminal velocity of free-falling spores and other instances of low Reynolds number motion (Re<1). More complex, non-linear drag models have been devised for movements characterized by higher Re, but their effectiveness for modeling the launch of fast-moving fungal spores has not been tested. In this paper, we use data on spore discharge processes obtained from ultra-high-speed video recordings to evaluate the effects of air viscosity predicted by Stokes' law and a commonly used non-linear drag model. We find that discharge distances predicted from launch speeds by Stokes' model provide a much better match to measured distances than estimates from the more complex drag model. Stokes' model works better over a wide range projectile sizes, launch speeds, and discharge distances, from microscopic mushroom ballistospores discharged at <1 m/s over a distance of <0.1 mm (Re<1.0), to macroscopic sporangia of Pilobolus that are launched at >10 m/s and travel as far as 2.5 m (Re>100). PMID:21036338
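
    The Stokes-law estimate in the paper reduces to a one-line calculation: the launch velocity decays with time constant tau = m/(6*pi*mu*r), giving a range of roughly v0*tau (spore density here is an assumed round number):

```python
# Discharge distance under Stokes drag: m dv/dt = -6*pi*mu*r*v, with gravity
# neglected over sub-millimetre flights.
import numpy as np

mu = 1.8e-5       # dynamic viscosity of air, Pa*s
rho = 1100.0      # assumed spore density, kg/m^3

def stokes_range(radius, v0):
    m = rho * (4.0 / 3.0) * np.pi * radius**3
    tau = m / (6.0 * np.pi * mu * radius)
    return v0 * tau            # integral of v0*exp(-t/tau) over all time

# Ballistospore-sized projectile: r = 2.5 um launched at 1 m/s
print(f"{stokes_range(2.5e-6, 1.0) * 1e3:.3f} mm")   # < 0.1 mm, as observed
```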

  16. Electrical conductivity modeling in fractal non-saturated porous media

    NASA Astrophysics Data System (ADS)

    Wei, W.; Cai, J.; Hu, X.; Han, Q.

    2016-12-01

    The variation of electrical conductivity under non-saturated conditions is important for studying electric conduction in natural sedimentary rocks. The electrical conductivity of completely saturated porous media is a function of porosity representing the complex connected behavior of the single conducting phase (the pore fluid). Under partially saturated conditions, the electrical conductivity becomes even more complicated, since the connectedness of the pore fluid changes with saturation. Archie's second law is an empirical electrical conductivity-porosity and -saturation model that has been used to predict the formation factor of non-saturated porous rock. However, the physical interpretation of its parameters, e.g., the cementation exponent m and the saturation exponent n, remains questionable. Building on our previous work, we combine the pore-solid fractal (PSF) model to construct an electrical conductivity model for non-saturated porous media. Our theoretical porosity- and saturation-dependent models contain endmember properties, such as fluid electrical conductivities, the pore fractal dimension and the tortuosity fractal dimension (representing the complexity of the electrical flow path). Comparison of the presented model with electrical conductivity datasets measured under non-saturated conditions indicates an excellent match between theory and experiment. This means the values of the pore fractal dimension and the tortuosity fractal dimension change from medium to medium and depend not only on the geometrical properties of the pore structure but also on the characteristics of electrical current flow in the non-saturated porous media.
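
    The empirical baseline the paper generalizes is Archie's relation, sigma = sigma_w * phi^m * S^n; in the fractal model the exponents are tied to the pore and tortuosity fractal dimensions, but a sketch with free exponents already shows the basic saturation dependence (values illustrative):

```python
# Archie-type conductivity from fluid conductivity, porosity and saturation.
import numpy as np

def archie_conductivity(sigma_w, phi, S, m=2.0, n=2.0):
    """Bulk conductivity; m and n are left as free exponents here."""
    return sigma_w * phi**m * S**n

S = np.linspace(0.2, 1.0, 5)                   # water saturation sweep
print(archie_conductivity(5.0, phi=0.2, S=S))  # sigma_w = 5 S/m brine, assumed
```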

  17. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block-match-generated estimates. Thus, the framework allows for a wide range of combinations of block match image similarity metrics and physical models. PMID:24694135

  18. A Tabletop Tool for Modeling Life Support Systems

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.; Majumdar, A.; McDaniels, D.; Stewart, E.

    2003-01-01

    This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically, it aims to build a high-fidelity tabletop model that can be used for the purposes of risk mitigation, failure mode analysis, contamination tracking, and reliability testing. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. It envisions a 10% scale transparent model of a space platform such as the International Space Station that operates with water or a specific matched-index-of-refraction liquid as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow to a 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude for this length scale dictates model velocities of 67% of full scale and thereby a model time scale that represents 15% of the full-scale system, meaning identical processes in the model are completed in 15% of the full-scale time. The use of an index-matching fluid (a fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model (with complex internal geometry) transparent and hence conducive to non-intrusive optical diagnostics. Using such a system, one can test environmental control parameters such as core (axial) flows and cross flows (from registers and diffusers); investigate potential problem areas such as flow short circuits, inadequate oxygen content, and build-up of other gases beyond desirable levels; test mixing processes within the system at local nodes or compartments; and assess overall system performance. The system allows quantitative measurements of contaminants introduced in the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration changes and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next-generation life support systems and configurations.
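
    The quoted 67% and 15% figures follow directly from Reynolds-number matching between air at full scale and water in the 1:10 model; a quick check with textbook kinematic viscosities:

```python
# Reynolds-number matching: Re = V*L/nu kept equal between full scale (air)
# and the 1:10 model (water).
nu_air, nu_water = 1.5e-5, 1.0e-6    # kinematic viscosities, m^2/s
scale = 0.1                          # 10 ft room -> 1 ft model

v_ratio = (nu_water / nu_air) / scale    # V_model/V_full from Re equality
t_ratio = scale / v_ratio                # (L_m/V_m) / (L_f/V_f)
print(f"model velocity = {v_ratio:.0%} of full scale")    # ~67%
print(f"model time scale = {t_ratio:.0%} of full scale")  # ~15%
```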

  19. Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1999-01-01

    The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.

  20. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    PubMed Central

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-01-01

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410

  1. Towards a Next-Generation Catalogue Cross-Match Service

    NASA Astrophysics Data System (ADS)

    Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium

    2015-09-01

    In the past we have developed several catalogue cross-match tools: on the one hand, the CDS XMatch service (Pineau et al. 2011), able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server; on the other hand, as part of the European project ARCHES, a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and computes probabilities of association based on a novel statistical framework. Although the two approaches have so far been managed as different tracks, the need for next-generation cross-match services addressing both efficiency and complexity is becoming pressing with forthcoming projects that will produce huge, high-quality catalogues. We are addressing this challenge, which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria - based on the chi-square distribution - described in Pineau et al. (2011). We formulate and test a number of Bayesian hypotheses, a number which necessarily increases dramatically with the number of catalogues. To assign a probability to each hypothesis, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performance is achieved by resorting to multi-threading, sequential reads and several tree data structures. In addition to kd-trees, we account for heterogeneous positional errors and object extension using M-trees. Proper motions are supported using a modified M-tree we developed, inspired by Time-Parameterized R-trees (TPR-trees). Quantitative tests in comparison with the basic cross-match will be presented.
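
    The chi-square candidate selection criterion can be sketched for the simplest two-catalogue, Gaussian-error case (sigma values and the completeness level are illustrative):

```python
# Chi-square candidate selection for a positional cross-match: with Gaussian
# positional errors, the error-normalized squared separation of a true pair
# follows a chi-square law with 2 degrees of freedom, so a chosen
# completeness level fixes the acceptance contour.
from scipy.stats import chi2

def is_candidate(sep_arcsec, sigma1, sigma2, completeness=0.9973):
    k2 = sep_arcsec**2 / (sigma1**2 + sigma2**2)
    return k2 <= chi2.ppf(completeness, df=2)

print(is_candidate(0.8, sigma1=0.3, sigma2=0.2))   # True: compatible pair
print(is_candidate(3.0, sigma1=0.3, sigma2=0.2))   # False: rejected
```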

  2. Does an English appeal court ruling increase the risks of miscarriages of justice when complex DNA profiles are searched against the national DNA database?

    PubMed

    Gill, P; Bleka, Ø; Egeland, T

    2014-11-01

    Likelihood ratio (LR) methods to interpret multi-contributor, low template, complex DNA mixtures are becoming standard practice. The next major development will be to introduce search engines based on the new methods to interrogate very large national DNA databases, such as those held by China, the USA and the UK. Here we describe a rapid method that was used to assign a LR to each individual member of a database of 5 million genotypes, which can then be ranked in order. Previous authors have only considered database trawls in the context of binary match or non-match criteria. However, the concept of match/non-match no longer applies within the new paradigm introduced, since the distribution of resultant LRs is continuous for practical purposes. An English appeal court decision allows scientists to routinely report complex DNA profiles using nothing more than their subjective personal 'experience of casework' and 'observations' in order to apply an expression of the rarity of an evidential sample. This ruling must be considered in the context of a recent high-profile English case, where an individual was extracted from a database and wrongly accused of a serious crime. In this case the DNA evidence was used to negate the overwhelming exculpatory (non-DNA) evidence. Demonstrable confirmation bias, also known as the 'CSI effect', seriously affected the investigation. The case demonstrated that in practice, databases could be used to select and prosecute an individual simply because he ranked high in the list of possible matches. We have identified this phenomenon as a cognitive error which we term 'the naïve investigator effect'. We take the opportunity to test the performance of database extraction strategies based either on a simple matching allele count (MAC) method or on LRs. The example heard by the appeal court is used as the exemplar case. It is demonstrated that the LR search method offers substantial benefits compared to searches based on simple matching allele count (MAC) methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Neuropsychological functioning in older people with type 2 diabetes: the effect of controlling for confounding factors.

    PubMed

    Asimakopoulou, K G; Hampson, S E; Morrish, N J

    2002-04-01

    Neuropsychological functioning was examined in a group of 33 older (mean age 62.40 +/- 9.62 years) people with Type 2 diabetes (Group 1) and 33 non-diabetic participants matched with Group 1 on age, sex, premorbid intelligence and presence of hypertension and cardio/cerebrovascular conditions (Group 2). Data from the diabetic group, statistically corrected for confounding factors, were compared with data from the matched control group. The results suggested small cognitive deficits in diabetic people's verbal memory and mental flexibility (Logical Memory A and SS7). No differences were seen between the two samples in simple and complex visuomotor attention, sustained complex visual attention, attention efficiency, mental double tracking, implicit memory, and self-reported memory problems. These findings indicate minimal cognitive impairment in relatively uncomplicated Type 2 diabetes and demonstrate the importance of control and matching for confounding factors.

  4. On the uniqueness of color patterns in raptor feathers

    USGS Publications Warehouse

    Ellis, D.H.

    2009-01-01

    For this study, I compared sequentially molted feathers for a few captive raptors from year to year and symmetrically matched feathers (left/right pairs) for many raptors to see if color patterns of sequential feather pairs were identical or if symmetrical pairs were mirror-image identical. Feather pairs were found to be identical only when without color pattern (e.g., the all-white rectrices of Bald Eagles [Haliaeetus leucocephalus]). Complex patterns were not closely matched, but some simple patterns were sometimes closely matched, although not identical. Previous claims that complex color patterns in feather pairs are fingerprint-identical (and therefore that molted feathers from wild raptors can be used to identify breeding adults from year to year with certainty) were found to be untrue: each feather is unique. Although it is unwise to be certain of bird of origin using normal feathers, abnormal feathers can often be so used. ?? 2009 The Raptor Research Foundation, Inc.

  5. Improvement of Reynolds-Stress and Triple-Product Lag Models

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 distribution of streamwise, spanwise and wall-normal stresses, and a ratio of r_w = 0.3k in the log-layer region of high-Reynolds-number flat plate flow, which implies R11+ = 4/((9/2)·0.3) ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. The first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.

  6. Background colour matching by a crab spider in the field: a community sensory ecology perspective.

    PubMed

    Defrize, Jérémy; Théry, Marc; Casas, Jérôme

    2010-05-01

    The question of whether a species matches the colour of its natural background, from the perspective of the correct receiver, is complex to address for several reasons; however, the answer to this question may provide invaluable support for functional interpretations of colour. In most cases, little is known about the identity and visual sensory abilities of the correct receiver and the precise location at which interactions take place in the field, in particular for mimetic systems. In this study, we focused on Misumena vatia, a crab spider that meets the criteria for assessing crypsis better than many other model species and is claimed to use colour changes for both aggressive and protective crypsis. We carried out a systematic field survey to quantitatively assess the exactness of background colour matching in M. vatia with respect to the visual systems of many of its receivers within the community. We applied physiological models of bird, bee and blowfly colour vision, using flower and spider spectral reflectances measured with a spectroradiometer. We observed that crypsis at long distance is systematically achieved, exclusively through achromatic contrast, in both bee and bird vision. At short distance, M. vatia is mostly chromatically detectable, whatever the substrate, for bees and birds. However, for bees, spiders can be either poorly discriminable or quite visible depending on the substrate. Spiders are always chromatically undetectable for blowflies. We discuss the biological relevance of these results in both defensive and aggressive contexts of crypsis within a community sensory perspective.

  7. Early detection of metabolic and energy disorders by thermal time series stochastic complexity analysis

    PubMed Central

    Lutaif, N.A.; Palazzo, R.; Gontijo, J.A.R.

    2014-01-01

    Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of a high-energy diet. Observation of thermal registers of core temperature behavior, in humans and rodents, permits identification of characteristics of the time series, such as autoreference and stationarity, that are well suited to stochastic analysis. To identify this change, we used, for the first time, a stochastic autoregressive model, whose concepts match those of the physiological systems involved, applied to male HFD rats compared with age-matched male controls fed a standard diet (n=7 per group). By analyzing a recorded temperature time series, we were able to identify when thermal homeostasis would be affected by a new diet. The autoregressive time series model (AR model) was used to predict the occurrence of thermal homeostasis, and this model proved to be very effective in distinguishing such a physiological disorder. Thus, we infer from the results of our study that maximum entropy distribution as a means for stochastic characterization of temperature time series registers may be established as an important and early tool to aid in the diagnosis and prevention of metabolic diseases, owing to its ability to detect small variations in thermal profile. PMID:24519093

  8. Early detection of metabolic and energy disorders by thermal time series stochastic complexity analysis.

    PubMed

    Lutaif, N A; Palazzo, R; Gontijo, J A R

    2014-01-01

    Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of a high-energy diet. Observation of thermal registers of core temperature behavior, in humans and rodents, permits identification of characteristics of the time series, such as autoreference and stationarity, that are well suited to stochastic analysis. To identify this change, we used, for the first time, a stochastic autoregressive model, whose concepts match those of the physiological systems involved, applied to male HFD rats compared with age-matched male controls fed a standard diet (n=7 per group). By analyzing a recorded temperature time series, we were able to identify when thermal homeostasis would be affected by a new diet. The autoregressive time series model (AR model) was used to predict the occurrence of thermal homeostasis, and this model proved to be very effective in distinguishing such a physiological disorder. Thus, we infer from the results of our study that maximum entropy distribution as a means for stochastic characterization of temperature time series registers may be established as an important and early tool to aid in the diagnosis and prevention of metabolic diseases, owing to its ability to detect small variations in thermal profile.
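
    A minimal sketch of the AR-model workflow described above, using the statsmodels AutoReg API on a synthetic stand-in for the temperature telemetry (series shape, lag order, and the injected shift are invented for illustration):

```python
# Fit an AR model to a baseline core-temperature series, then flag a regime
# change via the prediction error over the post-change window.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
t = np.arange(400)
temp = 37.0 + 0.3 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.05, 400)
temp[300:] += 0.2                    # hypothetical shift after a diet change

fit = AutoReg(temp[:300], lags=6).fit()    # fit on the baseline period only
pred = fit.predict(start=300, end=399)     # forecast the post-change window
print("mean prediction error after the change:", np.mean(temp[300:] - pred))
```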

  9. The 3of5 web application for complex and comprehensive pattern matching in protein sequences.

    PubMed

    Seiler, Markus; Mehrle, Alexander; Poustka, Annemarie; Wiemann, Stefan

    2006-03-16

    The identification of patterns in biological sequences is a key challenge in genome analysis and in proteomics. Frequently such patterns are complex and highly variable, especially in protein sequences. They are frequently described in terms of regular expressions (RegEx) because of the user-friendly terminology. Limitations arise as patterns grow more complex, and are accompanied by requirements for enhanced capabilities. This is especially true for patterns containing ambiguous characters and positions and/or length ambiguities. We have implemented the 3of5 web application in order to enable complex pattern matching in protein sequences. 3of5 is named after a special use of its main feature, the novel n-of-m pattern type. This feature allows for an extensive specification of variable patterns where the individual elements may vary in their position, order, and content within a defined stretch of sequence. The number of distinct elements can be constrained by operators, and individual characters may be excluded. The n-of-m pattern type can be combined with common regular expression terms and thus also allows for a comprehensive description of complex patterns. 3of5 increases the fidelity of pattern matching and finds ALL possible solutions in protein sequences in cases of length-ambiguous patterns, instead of simply reporting the longest or shortest hits. Grouping and combined search for patterns provide a hierarchical arrangement of larger pattern sets. The algorithm is implemented as an internet application and is freely accessible at http://dkfz.de/mga2/3of5/3of5.html. The 3of5 application offers an extended vocabulary for the definition of search patterns and thus allows the user to comprehensively specify and identify peptide patterns with variable elements. The n-of-m pattern type offers improved accuracy for pattern matching in combination with the ability to find all solutions, without compromising the user-friendliness of regular expression terms.
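
    The core n-of-m idea can be sketched in a few lines (a pure-Python illustration; the actual application layers RegEx terms, exclusions, and operator constraints on top of this):

```python
# Report every window of length m in which at least n of the given pattern
# elements occur, in any order or position.
def n_of_m(sequence, elements, n, m):
    """Return (start, window) for each length-m window with >= n elements."""
    hits = []
    for i in range(len(sequence) - m + 1):
        window = sequence[i:i + m]
        if sum(1 for e in elements if e in window) >= n:
            hits.append((i, window))
    return hits

# At least 3 distinct residues from {S, T, Y} within any 5-residue stretch:
print(n_of_m("ASTPYKLSSTY", elements={"S", "T", "Y"}, n=3, m=5))
```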

  10. TSOS and TSOS-FK hybrid methods for modelling the propagation of seismic waves

    NASA Astrophysics Data System (ADS)

    Ma, Jian; Yang, Dinghui; Tong, Ping; Ma, Xiao

    2018-05-01

    We develop a new time-space optimized symplectic (TSOS) method for numerically solving elastic wave equations in heterogeneous isotropic media. We use the phase-preserving symplectic partitioned Runge-Kutta method to evaluate the time derivatives and optimized explicit finite-difference (FD) schemes to discretize the space derivatives. We introduce the averaged medium scheme into the TSOS method to further increase its capability of dealing with heterogeneous media and match the boundary-modified scheme for implementing free-surface boundary conditions and the auxiliary differential equation complex frequency-shifted perfectly matched layer (ADE CFS-PML) non-reflecting boundaries with the TSOS method. A comparison of the TSOS method with analytical solutions and standard FD schemes indicates that the waveform generated by the TSOS method is more similar to the analytic solution and has a smaller error than other FD methods, which illustrates the efficiency and accuracy of the TSOS method. Subsequently, we focus on the calculation of synthetic seismograms for teleseismic P- or S-waves entering and propagating in the local heterogeneous region of interest. To improve the computational efficiency, we successfully combine the TSOS method with the frequency-wavenumber (FK) method and apply the ADE CFS-PML to absorb the scattered waves caused by the regional heterogeneity. The TSOS-FK hybrid method is benchmarked against semi-analytical solutions provided by the FK method for a 1-D layered model. Several numerical experiments, including a vertical cross-section of the Chinese capital area crustal model, illustrate that the TSOS-FK hybrid method works well for modelling waves propagating in complex heterogeneous media and remains stable for long-time computation. These numerical examples also show that the TSOS-FK method can tackle the converted and scattered waves of the teleseismic plane waves caused by local heterogeneity. Thus, the TSOS and TSOS-FK methods proposed in this study present an essential tool for the joint inversion of local, regional, and teleseismic waveform data.

  11. Modeling Confidence and Response Time in Recognition Memory

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…
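
    The truncated sentence above presumably completes the standard signal-detection construction; a minimal sketch of that criterion mechanism, with arbitrary illustrative parameters (the paper's full model also predicts response times):

```python
# Normal old/new evidence distributions with fixed confidence criteria; the
# area between adjacent criteria gives each rating's predicted probability.
import numpy as np
from scipy.stats import norm

d_prime = 1.2                                # mean separation, old vs. new
criteria = np.array([-0.8, 0.0, 0.8, 1.6])   # 4 criteria -> 5 rating bins

def rating_probs(mean):
    cdf = norm.cdf(criteria, loc=mean)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

print("new items:", np.round(rating_probs(0.0), 3))
print("old items:", np.round(rating_probs(d_prime), 3))
```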

  12. Conditional dissipation of scalars in homogeneous turbulence: Closure for MMC modelling

    NASA Astrophysics Data System (ADS)

    Wandel, Andrew P.

    2013-08-01

    While the mean and unconditional variance should be predicted well by any reasonable turbulent combustion model, these alone are generally not sufficient for the accurate modelling of complex phenomena such as extinction/reignition. An additional criterion has recently been introduced: accurate modelling of the dissipation timescales associated with fluctuations of scalars about their conditional mean (conditional dissipation timescales). Analysis of Direct Numerical Simulation (DNS) results for a passive scalar shows that the conditional dissipation timescale is of the order of the integral timescale and smaller than the unconditional dissipation timescale. A model is proposed: the conditional dissipation timescale is proportional to the integral timescale. This model is used in Multiple Mapping Conditioning (MMC) modelling for a passive scalar case and a reactive scalar case, comparing with DNS results for both. The results show that this model improves the accuracy of MMC predictions so as to match the DNS results more closely, using a relatively coarse spatial resolution compared to other turbulent combustion models.

  13. Sequence homology between HLA-bound cytomegalovirus and human peptides: A potential trigger for alloreactivity

    PubMed Central

    Koparde, Vishal N.; Jameson-Lee, Maximilian; Elnasseh, Abdelrhman G.; Scalora, Allison F.; Kobulnicky, David J.; Serrano, Myrna G.; Roberts, Catherine H.; Buck, Gregory A.; Neale, Michael C.; Nixon, Daniel E.; Toor, Amir A.

    2017-01-01

    Human cytomegalovirus (hCMV) reactivation may often coincide with the development of graft-versus-host disease (GVHD) in stem cell transplantation (SCT). Seventy-seven SCT donor-recipient pairs (DRP) (HLA-matched unrelated donor (MUD), n = 50; matched related donor (MRD), n = 27) underwent whole exome sequencing to identify single nucleotide polymorphisms (SNPs), generating alloreactive peptide libraries for each DRP (9-mer peptide-HLA complexes). The Human CMV CROSS (Cross-Reactive Open Source Sequence) database was compiled from NCBI; HLA class I binding affinity for each DRP's HLA was calculated with NetMHCpan 2.8, and hCMV-derived 9-mers were algorithmically compared to the alloreactive peptide-HLA complex libraries. Short consecutive (≥6) amino acid (AA) sequence homology matching hCMV to recipient peptides was considered for HLA-bound-peptide (IC50 < 500 nM) cross-reactivity. Of the 70,686 hCMV 9-mers contained within the hCMV CROSS database, an average of 29,658 matched the MRD DRP alloreactive peptides and 52,910 matched MUD DRP peptides (p < 0.001). In silico analysis revealed multiple high-affinity, immunogenic CMV-human peptide matches (IC50 < 500 nM) expressed in a GVHD-affected, tissue-specific manner. hCMV reactivation with GVHD was found in 18 patients, 13 of whom developed hCMV viremia before GVHD onset. Analysis of patients with GVHD identified potential cross-reactive peptide expression within affected organs. We propose that hCMV peptide sequence homology with human alloreactive peptides may contribute to the pathophysiology of GVHD. PMID:28800601

  14. Sequence homology between HLA-bound cytomegalovirus and human peptides: A potential trigger for alloreactivity.

    PubMed

    Hall, Charles E; Koparde, Vishal N; Jameson-Lee, Maximilian; Elnasseh, Abdelrhman G; Scalora, Allison F; Kobulnicky, David J; Serrano, Myrna G; Roberts, Catherine H; Buck, Gregory A; Neale, Michael C; Nixon, Daniel E; Toor, Amir A

    2017-01-01

    Human cytomegalovirus (hCMV) reactivation may often coincide with the development of graft-versus-host disease (GVHD) in stem cell transplantation (SCT). Seventy-seven SCT donor-recipient pairs (DRP) (HLA-matched unrelated donor (MUD), n = 50; matched related donor (MRD), n = 27) underwent whole exome sequencing to identify single nucleotide polymorphisms (SNPs), generating alloreactive peptide libraries for each DRP (9-mer peptide-HLA complexes). The Human CMV CROSS (Cross-Reactive Open Source Sequence) database was compiled from NCBI; HLA class I binding affinity for each DRP's HLA was calculated with NetMHCpan 2.8, and hCMV-derived 9-mers were algorithmically compared to the alloreactive peptide-HLA complex libraries. Short consecutive (≥6) amino acid (AA) sequence homology matching hCMV to recipient peptides was considered for HLA-bound-peptide (IC50 < 500 nM) cross-reactivity. Of the 70,686 hCMV 9-mers contained within the hCMV CROSS database, an average of 29,658 matched the MRD DRP alloreactive peptides and 52,910 matched MUD DRP peptides (p < 0.001). In silico analysis revealed multiple high-affinity, immunogenic CMV-human peptide matches (IC50 < 500 nM) expressed in a GVHD-affected, tissue-specific manner. hCMV reactivation with GVHD was found in 18 patients, 13 of whom developed hCMV viremia before GVHD onset. Analysis of patients with GVHD identified potential cross-reactive peptide expression within affected organs. We propose that hCMV peptide sequence homology with human alloreactive peptides may contribute to the pathophysiology of GVHD.
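
    The homology criterion (≥6 consecutive shared amino acids between 9-mers) can be illustrated with a simple k-mer scan. This is one hypothetical reading of the matching rule, not the study's actual pipeline, and the peptide strings are made up:

    ```python
    # Call two peptides cross-reactive candidates if they share any run of
    # >= 6 consecutive identical amino acids (substring at any offset).
    def shares_run(pep_a, pep_b, k=6):
        kmers = {pep_a[i:i + k] for i in range(len(pep_a) - k + 1)}
        return any(pep_b[j:j + k] in kmers for j in range(len(pep_b) - k + 1))

    print(shares_run("SIINFEKLV", "SIINFEKAV"))   # True: share "SIINFE"
    print(shares_run("SIINFEKLV", "SAINFEKAV"))   # False: longest shared run is 5
    ```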

  15. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because it requires no image segmentation and its computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision reaches the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems; however, target tracking often demands high real-time performance, so we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on matching precision, a camera rotation experiment was carried out. The camera's CMOS detector was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm described above. The results show that the matching error grows approximately linearly with the target rotation angle. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters, and image matching was then performed. The results show that when the noise is small, mean and median filtering achieve good results; but when the salt-and-pepper noise density exceeds 0.4, or the Gaussian noise variance exceeds 0.0015, the image matching result becomes wrong.
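
    The following sketch illustrates SAD matching with a separable parabolic sub-pixel fit, a simplification of the paraboloidal fit described above; the test image and array sizes are arbitrary:

    ```python
    import numpy as np

    def sad_map(image, template):
        """Sum of absolute differences for every template placement."""
        H, W = image.shape
        h, w = template.shape
        out = np.empty((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.abs(image[i:i + h, j:j + w] - template).sum()
        return out

    def parabolic_offset(s_m1, s_0, s_p1):
        """Vertex of the parabola through three neighbouring SAD values."""
        denom = s_m1 - 2.0 * s_0 + s_p1
        return 0.0 if denom == 0.0 else 0.5 * (s_m1 - s_p1) / denom

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    tpl = img[20:28, 30:38]                    # 8x8 template cut from the image
    sad = sad_map(img, tpl)
    i, j = np.unravel_index(sad.argmin(), sad.shape)
    di = parabolic_offset(sad[i - 1, j], sad[i, j], sad[i + 1, j])
    dj = parabolic_offset(sad[i, j - 1], sad[i, j], sad[i, j + 1])
    print(i + di, j + dj)                      # close to (20, 30)
    ```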

  16. Varieties of Stimulus Control in Matching-to-Sample: A Kernel Analysis

    ERIC Educational Resources Information Center

    Fields, Lanny; Garruto, Michelle; Watanabe, Mari

    2010-01-01

    Conditional discrimination or matching-to-sample procedures have been used to study a wide range of complex psychological phenomena with infrahuman and human subjects. In most studies, the percentage of trials in which a subject selects the comparison stimulus that is related to the sample stimulus is used to index the control exerted by the…

  17. Comparing Organizational Learning Rates in Public and Non-Profit Schools in Qom Province of Iran

    ERIC Educational Resources Information Center

    Zarei Matin, Hassan; Jandaghi, Gholamreza; Moini, Boshra

    2007-01-01

    Given the increasing complexity and dynamism of environmental factors and the rapidity of change, traditional organizations are no longer able to keep pace with such changes and are failing. Hence, as a tool for survival and for matching these changes, learning organizations are receiving close attention from many firms and corporations. What you are reading is…

  18. Comparison of USGS and DLR topographic models of Comet Borrelly and photometric applications

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Soderblom, L.A.; Giese, B.; Oberst, J.

    2004-01-01

    Stereo analysis of images obtained during the 2001 flyby of Comet Borrelly by NASA's Deep Space 1 (DS1) probe allows us to quantify the shape and photometric behavior of the nucleus. The shape is complex, with planar facets corresponding to the dark, mottled regions of the surface whereas the bright, smooth regions are convexly curved. The photometric as well as textural differences between these regions can be explained in terms of topography (roughness) at and below the image resolution, without invoking significant variations in single-particle properties; the material on Borrelly's surface could be quite uniform. A statistical comparison of the digital elevation models (DEMs) produced from the three highest-resolution images independently at the USGS and DLR shows that their difference standard deviation is 120 m, consistent with a matching error of 0.20 pixel (similar to reported matching accuracies for many other stereo datasets). The DEMs also show some systematic differences attributable to manual versus automatic matching. Disk-resolved photometric modeling of the nucleus using the DEM shows that bright, smooth terrains on Borrelly are similar in roughness (Hapke roughness θ = 20°) to C-type asteroid Mathilde but slightly brighter and more backscattering (single-scattering albedo w = 0.056, Henyey-Greenstein phase parameter g = -0.32). The dark, mottled terrain is photometrically consistent with the same particles but with roughnesses as large as 60°. Intrinsically darker material is inconsistent with the phase behavior of these regions. Many local radiance variations are clearly related to topography, and others are consistent with a topographic explanation; one need not invoke albedo variations greater than a few tens of percent to explain the appearance of Borrelly. Published by Elsevier Inc.

  19. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.

  20. Mathematical model of organic substrate degradation in solid waste windrow composting.

    PubMed

    Seng, Bunrith; Kristanti, Risky Ayu; Hadibarata, Tony; Hirayama, Kimiaki; Katayama-Hirayama, Keiko; Kaneko, Hidehiro

    2016-01-01

    Organic solid waste composting is a complex process that involves many coupled physical, chemical and biological mechanisms. To understand this complexity, and to ease the planning, design and management of composting plants, mathematical models are usually applied for simulation. The aim of this paper is to develop a mathematical model of organic substrate degradation and to evaluate its performance in a solid waste windrow composting system. The present model is a biomass-dependent model that considers biological growth processes under the limitation of moisture, oxygen and substrate contents, and temperature. The main output of this model is the substrate content, which was divided into two categories: slowly and rapidly degradable substrates. To validate the model, it was applied to laboratory-scale windrow composting of a mixture of wood chips and dog food. The wastes were filled into a cylindrical reactor of 6 cm diameter and 1 m height. The simulation program was run for 3 weeks with a 1 s time step. The simulated results were in reasonably good agreement with the experimental results. The simulated moisture content (MC) and temperature matched the experimental values, although agreement was limited for rapidly degradable substrates. The degradation of rapidly degradable substrate in anaerobic zones needs to be incorporated into the model to achieve a full simulation of long-period static pile composting. This model is a useful tool for estimating the changes in substrate content during the composting period, and serves as a basis for further development of more sophisticated models.
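
    A minimal sketch of the two-pool substrate idea, assuming simple first-order kinetics with a lumped environmental limitation factor; the paper's biomass-dependent model is considerably richer, and all rate constants below are made up:

    ```python
    import numpy as np

    def simulate(S_rapid=30.0, S_slow=70.0, k_rapid=0.5, k_slow=0.05,
                 f_env=0.8, days=21.0, dt=1.0 / 24.0):
        """Euler integration of two first-order substrate pools (units: %, 1/day)."""
        steps = int(days / dt)
        rapid = np.empty(steps)
        slow = np.empty(steps)
        for n in range(steps):
            rapid[n], slow[n] = S_rapid, S_slow
            S_rapid -= k_rapid * f_env * S_rapid * dt   # fast pool decays quickly
            S_slow -= k_slow * f_env * S_slow * dt      # slow pool persists
        return rapid, slow

    rapid, slow = simulate()
    print(rapid[-1], slow[-1])   # rapid pool nearly exhausted; slow pool remains
    ```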

  1. Auditory memory for timbre.

    PubMed

    McKeown, Denis; Wellsted, David

    2009-06-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex was decreased (Experiments 1 and 2) or increased (Experiments 3, 4, and 5) in intensity on half of trials: The task was simply to identify those trials. Prior to each trial, a pure tone inducer was introduced either at the same frequency as the target component or at the frequency of a different component of the complex. Consistent with a frequency-specific form of disruption, discrimination performance was impaired when the inducing tone matched the frequency of the following decrement or increment. A timbre memory model (TMM) is proposed incorporating channel-specific interference allied to inhibition of attending in the coding of sounds in the context of memory traces of recent sounds. (c) 2009 APA, all rights reserved.

  2. Biophysical comparison of ATP synthesis mechanisms shows a kinetic advantage for the rotary process.

    PubMed

    Anandakrishnan, Ramu; Zhang, Zining; Donovan-Maiye, Rory; Zuckerman, Daniel M

    2016-10-04

    The ATP synthase (F-ATPase) is a highly complex rotary machine that synthesizes ATP, powered by a proton electrochemical gradient. Why did evolution select such an elaborate mechanism over arguably simpler alternating-access processes that can be reversed to perform ATP synthesis? We studied a systematic enumeration of alternative mechanisms, using numerical and theoretical means. When the alternative models are optimized subject to fundamental thermodynamic constraints, they fail to match the kinetic ability of the rotary mechanism over a wide range of conditions, particularly under low-energy conditions. We used a physically interpretable, closed-form solution for the steady-state rate for an arbitrary chemical cycle, which clarifies kinetic effects of complex free-energy landscapes. Our analysis also yields insights into the debated "kinetic equivalence" of ATP synthesis driven by transmembrane pH and potential difference. Overall, our study suggests that the complexity of the F-ATPase may have resulted from positive selection for its kinetic advantage.

  3. A neurobehavioral examination of individuals with high-functioning autism and Asperger's disorder using a fronto-striatal model of dysfunction.

    PubMed

    Rinehart, Nicole J; Bradshaw, John L; Tonge, Bruce J; Brereton, Avril V; Bellgrove, Mark A

    2002-06-01

    The repetitive, stereotyped, and obsessive behaviors that characterize autism may in part be attributable to disruption of the region of the fronto-striatal system, which mediates executive abilities. Neuropsychological testing has shown that children with autism exhibit set-shifting deficiencies on tests such as the Wisconsin Card Sorting task but show normal inhibitory ability on variants of the Stroop color-word test. According to Minshew and Goldstein's multiple primary deficit theory, the complexity of the executive functioning task is important in determining the performance of individuals with autism. This study employed a visual-spatial task (with a Stroop-type component) to examine the integrity of executive functioning, in particular inhibition, in autism (n = 12) and Asperger's disorder (n = 12) under increasing levels of cognitive complexity. Whereas the Asperger's disorder group performed similarly to age- and IQ-matched control participants, even at the higher levels of cognitive complexity, the high-functioning autism group displayed inhibitory deficits specifically associated with increasing cognitive load.

  4. Interactive social contagions and co-infections on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Quan-Hui; Zhong, Lin-Feng; Wang, Wei; Zhou, Tao; Eugene Stanley, H.

    2018-01-01

    What we are learning about the ubiquitous interactions among multiple social contagion processes on complex networks challenges existing theoretical methods. We propose an interactive social behavior spreading model, in which two behaviors sequentially spread on a complex network, one following the other. Adopting the first behavior has either a synergistic or an inhibiting effect on the spread of the second behavior. We find that the inhibiting effect of the first behavior can cause the continuous phase transition of the second behavior spreading to become discontinuous. This discontinuous phase transition of the second behavior can also become a continuous one when the effect of adopting the first behavior becomes synergistic. This synergy allows the second behavior to be more easily adopted and enlarges the co-existence region of both behaviors. We establish an edge-based compartmental method, and our theoretical predictions match well with the simulation results. Our findings provide helpful insights into better understanding the spread of interactive social behavior in human society.

  5. Modelling galaxy spectra in presence of interstellar dust - III. From nearby galaxies to the distant Universe

    NASA Astrophysics Data System (ADS)

    Cassarà, L. P.; Piovan, L.; Chiosi, C.

    2015-07-01

    Improving upon the standard evolutionary population synthesis technique, we present spectrophotometric models of galaxies with morphology going from spherical structures to discs, properly accounting for the effect of dust in the interstellar medium (ISM). The models contain three main physical components: the diffuse ISM made of gas and dust, the complexes of molecular clouds where active star formation occurs, and stars of any age and chemical composition. These models are based on robust evolutionary chemical description providing the total amount of gas and stars present at any age, and matching the properties of galaxies of different morphological types. We have considered the results obtained by Piovan et al. for the properties of the ISM, and those by Cassarà et al. for the spectral energy distribution (SED) of single stellar populations, both in presence of dust, to model the integral SEDs of galaxies of different morphological types, going from pure bulges to discs passing through a number of composite systems with different combinations of the two components. The first part of the paper is devoted to recall the technical details of the method and the basic relations driving the interaction between the physical components of the galaxy. Then, the main parameters are examined and their effects on the SED of three prototype galaxies are highlighted. The theoretical SEDs nicely match the observational ones both for nearby galaxies and those at high redshift.

  6. Modeling the frequency response of microwave radiometers with QUCS

    NASA Astrophysics Data System (ADS)

    Zonca, A.; Roucaries, B.; Williams, B.; Rubin, I.; D'Arcangelo, O.; Meinhold, P.; Lubin, P.; Franceschet, C.; Jahn, S.; Mennella, A.; Bersanelli, M.

    2010-12-01

    Characterization of the frequency response of coherent radiometric receivers is a key element in estimating the flux of astrophysical emissions, since the measured signal depends on the convolution of the source spectral emission with the instrument band shape. Laboratory Radio Frequency (RF) measurements of the instrument bandpass often require complex test setups and are subject to a number of systematic effects driven by thermal issues and impedance matching, particularly if cryogenic operation is involved. In this paper we present an approach to modeling radiometer bandpasses by integrating simulations and RF measurements of individual components. This method is based on QUCS (Quasi Universal Circuit Simulator), an open-source circuit simulator, which gives the flexibility of choosing among the available devices, implementing new analytical software models or using measured S-parameters. Therefore an independent estimate of the instrument bandpass is achieved using standard individual-component measurements and validated analytical simulations. In order to automate the process of preparing input data, running simulations and exporting results we developed the Python package python-qucs and released it under the GNU Public License. We discuss, as working cases, bandpass response modeling of the COFE and Planck Low Frequency Instrument (LFI) radiometers and compare results obtained with QUCS and with commercial circuit simulation software. The main purpose of bandpass modeling for COFE is to optimize component matching, while for LFI the models represent the best estimate of the frequency response, since end-to-end measurements were strongly affected by systematic effects.

  7. Goal-seeking neural net for recall and recognition

    NASA Astrophysics Data System (ADS)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand, recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and in pipeline mode from the last layer inward. A model of a complex neuron with varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  8. Shape parameters explain data from spatial transformations: comment on Pearce et al. (2004) and Tommasi & Polli (2004).

    PubMed

    Cheng, Ken; Gallistel, C R

    2005-04-01

    In 2 recent studies on rats (J. M. Pearce, M. A. Good, P. M. Jones, & A. McGregor, see record 2004-12429-006) and chicks (L. Tommasi & C. Polli, see record 2004-15642-007), the animals were trained to search in 1 corner of a rectilinear space. When tested in transformed spaces of different shapes, the animals still showed systematic choices. Both articles rejected the global matching of shape in favor of local matching processes. The present authors show that although matching by shape congruence is unlikely, matching by the shape parameter of the 1st principal axis can explain all the data. Other shape parameters, such as symmetry axes, may do even better. Animals are likely to use some global matching to constrain and guide the use of local cues; such use keeps local matching processes from exploding in complexity.
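
    Computing the first principal axis of a shape, the global parameter the authors invoke, reduces to a PCA of the shape's boundary points; a minimal sketch:

    ```python
    import numpy as np

    def first_principal_axis(points):
        """Unit vector along the direction of greatest spread of the shape."""
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]

    # Corners of a 2:1 rectangle: the first axis should align with x.
    rect = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], float)
    print(first_principal_axis(rect))   # ~ [1, 0] (up to sign)
    ```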

  9. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
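
    For reference, the relative entropy minimized in this framework is usually written in the following form; the notation is assumed, with M mapping an atomistic configuration to its coarse-grained image and S_map accounting for the degeneracy of that mapping:

    ```latex
    % Relative entropy between target (all-atom) and model (coarse-grained) ensembles
    S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
        \ln\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\!\left(M(i)\right)}
        \;+\; \left\langle S_{\mathrm{map}} \right\rangle
    ```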

  10. Implementation of perfectly matched layers in an arbitrary geometrical boundary for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Gao, Hongwei; Zhang, Jianfeng

    2008-09-01

    The perfectly matched layer (PML) absorbing boundary condition is incorporated into an irregular-grid elastic-wave modelling scheme, thus resulting in an irregular-grid PML method. We develop the irregular-grid PML method using the local-coordinate-system-based PML splitting equations and an integral formulation of the PML equations. The irregular-grid PML method is implemented under a discretization of triangular grid cells, which has the ability to absorb incident waves in arbitrary directions. This allows the PML absorbing layer to be imposed along arbitrary geometrical boundaries. As a result, the computational domain can be constructed with fewer nodes, for instance, by representing the 2-D half-space with a semi-circle rather than a rectangle. By using a smooth artificial boundary, the irregular-grid PML method can also avoid the special treatment of corners, which leads to complex computer implementations in the conventional PML method. We implement the irregular-grid PML method in both 2-D elastic isotropic and anisotropic media. The numerical simulations of a VTI Lamb's problem, wave propagation in an isotropic elastic medium with a curved surface, and wave propagation in a TTI medium demonstrate the good behaviour of the irregular-grid PML method.

  11. Scientist Role Models in the Classroom: How Important Is Gender Matching?

    ERIC Educational Resources Information Center

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-01-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used…

  12. Plasmonic complex fluids of nematiclike and helicoidal self-assemblies of gold nanorods with a negative order parameter.

    PubMed

    Liu, Qingkun; Senyuk, Bohdan; Tang, Jianwei; Lee, Taewoo; Qian, Jun; He, Sailing; Smalyukh, Ivan I

    2012-08-24

    We describe a soft matter system of self-organized oblate micelles and plasmonic gold nanorods that exhibit a negative orientational order parameter. Because of anisotropic surface anchoring interactions, colloidal gold nanorods tend to align perpendicular to the director describing the average orientation of normals to the discoidal micelles. Helicoidal structures of highly concentrated nanorods with a negative order parameter are realized by adding a chiral additive and are further controlled by means of confinement and mechanical stress. Polarization-sensitive absorption, scattering, and two-photon luminescence are used to characterize orientations and spatial distributions of nanorods. Self-alignment and effective-medium optical properties of these hybrid inorganic-organic complex fluids match predictions of a simple model based on anisotropic surface anchoring interactions of nanorods with the structured host medium.

  13. Characterization of topological structure on complex networks.

    PubMed

    Nakamura, Ikuo

    2003-10-01

    Characterizing the topological structure of complex networks is a significant problem, especially from the viewpoint of data mining on the World Wide Web. "Page rank," used in the commercial search engine Google, is such a measure of authority for ranking all the nodes matching a given query. We have investigated the page-rank distribution of the real Web and of a growing network model, both of which have directed links and exhibit power-law distributions of in-degree (the number of incoming links to a node) and out-degree (the number of outgoing links from a node). We find a concentration of page rank on a small number of nodes, and low page rank in high-degree regimes of the real Web, which can be explained by topological properties of the network, e.g., network motifs and the connectivities of nearest neighbors.
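
    The page-rank measure discussed here follows the generic PageRank recursion; a minimal power-iteration sketch, with an illustrative damping factor and toy graph (not Google's production implementation):

    ```python
    import numpy as np

    def pagerank(out_links, d=0.85, tol=1e-10):
        """Power iteration for PageRank; out_links[i] lists nodes i points to."""
        n = len(out_links)
        r = np.full(n, 1.0 / n)
        while True:
            r_new = np.full(n, (1.0 - d) / n)      # teleportation term
            for i, targets in enumerate(out_links):
                if targets:
                    r_new[targets] += d * r[i] / len(targets)
                else:                              # dangling node spreads uniformly
                    r_new += d * r[i] / n
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    # 0 -> {1, 2}, 1 -> {2}, 2 -> {0}: node 2 accumulates the most rank.
    print(pagerank([[1, 2], [2], [0]]))
    ```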

  14. Improving the simple, complicated and complex realities of community-acquired pneumonia.

    PubMed

    Liu, S K; Homa, K; Butterly, J R; Kirkland, K B; Batalden, P B

    2009-04-01

    This paper first describes efforts to improve the care for patients hospitalised with community-acquired pneumonia and the associated changes in quality measures at a rural academic medical centre. The results of the improvement interventions and the associated clinical realities, expected outcomes, measures, improvement interventions and improvement aims are then re-examined using the Glouberman and Zimmerman typology of healthcare problems--simple, complicated and complex. The typology is then used to explore the future design and assessment of improvement interventions, which may allow better matching with the types of problem healthcare providers and organisations are confronted with. Matching improvement interventions with problem category has the possibility of improving the success of improvement efforts and the reliability of care while at the same time preserving needed provider autonomy and judgement to adapt care for more complex problems.

  15. Spectra, energy levels, and energy transition of lanthanide complexes with cinnamic acid and its derivatives.

    PubMed

    Zhou, Kaining; Feng, Zhongshan; Shen, Jun; Wu, Bing; Luo, Xiaobing; Jiang, Sha; Li, Li; Zhou, Xianju

    2016-04-05

    High-resolution spectra and luminescent lifetimes of six europium(III)-cinnamic acid complexes {[Eu2L6(DMF)(H2O)]·nDMF·H2O}m (L = cinnamic acid I, 4-methyl-cinnamic acid II, 4-chloro-cinnamic acid III, 4-methoxy-cinnamic acid IV, 4-hydroxy-cinnamic acid V, 4-nitro-cinnamic acid VI; DMF = N,N-dimethylformamide, C3H7NO) were recorded from 8 K to room temperature. The energy levels of Eu(3+) in these six complexes are obtained from the spectral analysis. It is found that the energy levels of the central Eu(3+) ions are influenced by the nephelauxetic effect, while the triplet state of the ligand is lowered by the p-π conjugation effect of the para-substituted functional groups. The best energy matching between the ligand triplet state and the central-ion excited state is found in complex I, while the other complexes show poorer matching because the gap between the (5)D0 level and the triplet state contracts. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.

  17. Complexity measures of music

    NASA Astrophysics Data System (ADS)

    Pease, April; Mahmoodi, Korosh; West, Bruce J.

    2018-03-01

    We present a technique to search for the presence of crucial events in music, based on analysis of the music volume. Earlier work on this issue was based on the assumption that crucial events correspond to changes of music notes, with the interesting result that the complexity index of the crucial events is μ ≈ 2, the same inverse power-law index found in the dynamics of the brain. The search technique analyzes music volume and confirms the results of the earlier work, thereby contributing to the explanation of why the brain is sensitive to music, through the phenomenon of complexity matching. Complexity matching has recently been interpreted as the transfer of multifractality from one complex network to another. For this reason we also examine the multifractality of music, with the observation that the multifractal spectrum of a computer performance is significantly narrower than the multifractal spectrum of a human performance of the same musical score. We conjecture that although crucial events are demonstrably important for information transmission, they alone are not sufficient to define musicality, which is more adequately measured by the multifractal spectrum.

  18. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments subsequently has become possible, as scene rendering technology and software advanced. This in turn has created questions related to the validity of such complex models, with potential multiple scattering, bidirectional reflectance distribution function (BRDF), etc. phenomena that could impact results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. The initial comparison of the spectral distributions indicated that the spectral shapes of the virtual and real sites matched closely.

  19. Analysis of the solution structure of Thermosynechococcus elongatus photosystem I in n-dodecyl-β-d-maltoside using small-angle neutron scattering and molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Rosemary K.; Harris, Bradley J.; Iwuchukwu, Ifeyinwa J.

    2014-05-01

    Small-angle neutron scattering (SANS) and molecular dynamics (MD) simulation were used to investigate the structure of trimeric photosystem I (PSI) from Thermosynechococcus elongatus (T. elongatus) stabilized in n-dodecyl-β-d-maltoside (DDM) detergent solution. Scattering curves of detergent and protein–detergent complexes were measured at 18% D2O, the contrast match point for the detergent, and 100% D2O, allowing observation of the structures of protein/detergent complexes. It was determined that the maximum dimension of the PSI–DDM complex was consistent with the presence of a monolayer belt of detergent around the periphery of PSI. A dummy-atom reconstruction of the shape of the complex from the SANS data indicates that the detergent envelope has an irregular shape around the hydrophobic periphery of the PSI trimer rather than a uniform, toroidal belt around the complex. A 50 ns MD simulation model (a DDM ring surrounding the PSI complex with extra interstitial DDM) of the PSI–DDM complex was developed for comparison with the SANS data. The results suggest that DDM undergoes additional structuring around the membrane-spanning surface of the complex instead of a simple, relatively uniform belt, as is generally assumed for studies that use detergents to solubilize membrane proteins.

  20. Probative value of absolute and relative judgments in eyewitness identification.

    PubMed

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.
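
    The contrast between the two judgment types can be illustrated with a toy simulation; the Gaussian match strengths and criterion values are assumptions, and this is not the WITNESS model itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def suspect_id(guilty, rule, n_foils=5, c_abs=1.5, c_rel=0.8):
        """True if the lineup's suspect is identified under the given rule."""
        values = np.append(rng.normal(0.0, 1.0, n_foils),       # foil strengths
                           rng.normal(1.0 if guilty else 0.0, 1.0))
        best = int(values.argmax())
        if best != n_foils:              # best match is a foil, not the suspect
            return False
        if rule == "absolute":
            return values[best] > c_abs  # absolute: best match clears a criterion
        return values[best] - np.sort(values)[-2] > c_rel   # relative: margin rule

    for rule in ("absolute", "relative"):
        g = sum(suspect_id(True, rule) for _ in range(20000))
        i = sum(suspect_id(False, rule) for _ in range(20000))
        print(f"{rule}: guilty-suspect IDs {g}, innocent-suspect IDs {i}")
    ```

    The ratio of guilty-suspect to innocent-suspect identifications under each rule is a rough analogue of the probative value compared in the study.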

  1. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve a higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performances as a function of their set-point and the environmental and occupied space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time setting.

  2. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, the power-demanding video encoding operates on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of the wireless video communication system under the energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, the complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
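
    A hedged sketch of how a parametric P-R-D surface could be searched for an operating point under a joint rate/energy budget; the exponential form, the exponent, and all constants below are illustrative assumptions, not the paper's derived model:

    ```python
    import numpy as np

    # Assumed parametric surface: D(R, P) = sigma2 * exp(-lam * R * P**gamma).
    sigma2, lam, gamma = 1.0, 4.0, 2.0 / 3.0

    def distortion(R, P):
        return sigma2 * np.exp(-lam * R * P**gamma)

    R = np.linspace(0.1, 2.0, 200)        # rate, bits per pixel
    P = np.linspace(0.05, 1.0, 200)       # normalized power
    RR, PP = np.meshgrid(R, P)
    budget = RR + 2.0 * PP                # assumed joint rate/energy cost
    D = np.where(budget <= 1.5, distortion(RR, PP), np.inf)
    k = np.unravel_index(np.argmin(D), D.shape)
    print(f"operating point: R = {RR[k]:.2f}, P = {PP[k]:.2f}, D = {D[k]:.4f}")
    ```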

  3. A digital matched filter for reverse time chaos.

    PubMed

    Bailey, J Phillip; Beal, Aubrey N; Dean, Robert N; Hamilton, Michael C

    2016-07-01

    The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.

  4. A digital matched filter for reverse time chaos

    NASA Astrophysics Data System (ADS)

    Bailey, J. Phillip; Beal, Aubrey N.; Dean, Robert N.; Hamilton, Michael C.

    2016-07-01

    The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
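
    The decoding idea, an FIR filter matched to the basis pulse, can be sketched generically; the Hanning pulse below stands in for the chaotic system's actual linear basis pulse, whose real coefficients come from the closed-form solution:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pulse = np.hanning(32)                 # stand-in for the linear basis pulse
    h = np.conj(pulse[::-1])               # matched-filter taps: reversed conjugate

    signal = np.zeros(256)
    signal[100:132] += pulse               # embed one pulse in the record
    noisy = signal + 0.5 * rng.standard_normal(signal.size)

    out = np.convolve(noisy, h, mode="same")
    print(out.argmax())                    # peaks near the pulse centre (~116)
    ```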

  5. Nonlinear acoustic wave equations with fractional loss operators.

    PubMed

    Prieur, Fabrice; Holm, Sverre

    2011-09-01

    Fractional derivatives are well suited to describe wave propagation in complex media. When introduced in classical wave equations, they allow a modeling of attenuation and dispersion that better describes sound propagation in biological tissues. Traditional constitutive equations from solid mechanics and heat conduction are modified using fractional derivatives. They are used to derive a nonlinear wave equation which describes attenuation and dispersion laws that match observations. This wave equation is a generalization of the Westervelt equation, and also leads to a fractional version of the Khokhlov-Zabolotskaya-Kuznetsov and Burgers' equations. © 2011 Acoustical Society of America

  6. Infrared imaging microscopy of bone: Illustrations from a mouse model of Fabry disease

    PubMed Central

    Boskey, Adele L.; Goldberg, Michel; Kulkarni, Ashok; Gomez, Santiago

    2006-01-01

    Bone is a complex tissue whose composition and properties vary with age, sex, diet, tissue type, health and disease. In this review, we demonstrate how infrared spectroscopy and infrared spectroscopic imaging can be applied to the study of these variations. A specific example of mice with Fabry disease (a lipid storage disease) is presented in which it is demonstrated that the bones of these young animals, while showing typical spatial variation in mineral content, mineral crystal size, and collagen maturity, do not differ from the bones of age- and sex-matched wild type animals. PMID:16697974

  7. Infrared imaging microscopy of bone: illustrations from a mouse model of Fabry disease.

    PubMed

    Boskey, Adele L; Goldberg, Michel; Kulkarni, Ashok; Gomez, Santiago

    2006-07-01

    Bone is a complex tissue whose composition and properties vary with age, sex, diet, tissue type, health and disease. In this review, we demonstrate how infrared spectroscopy and infrared spectroscopic imaging can be applied to the study of these variations. A specific example of mice with Fabry disease (a lipid storage disease) is presented in which it is demonstrated that the bones of these young animals, while showing typical spatial variation in mineral content, mineral crystal size, and collagen maturity, do not differ from the bones of age- and sex-matched wild type animals.

  8. Participation and social networks of school-age children with complex communication needs: a descriptive study.

    PubMed

    Thirumanickam, Abirami; Raghavendra, Parimala; Olsson, Catherine

    2011-09-01

    Social participation becomes particularly important in middle childhood, as it contributes towards the acquisition and development of critical life skills such as developing friendships and a sense of belonging. However, only limited literature is available on the impact of communication difficulties on social participation in middle childhood. This study compared the participation patterns of school-age children with and without physical disabilities and complex communication needs in extracurricular activities. Participants included five children between 6-9 years of age with moderate-severe physical disability and complex communication needs, and five matched peers. Findings showed that children with physical disability and complex communication needs engaged in activities with reduced variety, lower frequency, fewer partners and in limited venues, but reported higher levels of enjoyment and preference for activity participation, than their matched peers. These children also had fewer same-aged friends, but more paid workers in their social circle. This small-scale descriptive study provides some preliminary evidence about the impact of severe communication difficulties on participation and socialization.

  9. Plasma Parameters From Reentry Signal Attenuation

    DOE PAGES

    Statom, T. K.

    2018-02-27

    This study presents the application of a theoretically developed method that provides plasma parameter solution-space information from the measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers on the attenuation and the complex index of refraction. The methodology uses an imaginary-index-of-refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. A filter is then used to reject nonphysical solutions. Signal-attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thicknesses for a specific signal attenuation can have different plasma properties.

  10. Plasma Parameters From Reentry Signal Attenuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Statom, T. K.

    This study presents the application of a theoretically developed method that provides plasma parameter solution-space information from the measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers on the attenuation and the complex index of refraction. The methodology uses an imaginary-index-of-refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. A filter is then used to reject nonphysical solutions. Signal-attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thicknesses for a specific signal attenuation can have different plasma properties.
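
    The attenuation-to-plasma-parameter link rests on the standard cold, collisional, unmagnetized plasma index of refraction; a sketch with assumed S-band numbers (sign conventions depend on the chosen time dependence):

    ```python
    import numpy as np

    C = 2.998e8                             # speed of light, m/s

    def complex_index(f_signal, f_plasma, nu):
        """Cold, collisional, unmagnetized plasma refractive index."""
        w, wp = 2 * np.pi * f_signal, 2 * np.pi * f_plasma
        return np.sqrt(1.0 - wp**2 / (w * (w - 1j * nu)))

    f, fp, nu = 2.2e9, 1.5e9, 5e8           # assumed link and plasma values, Hz / 1/s
    n = complex_index(f, fp, nu)
    alpha = -2 * np.pi * f / C * n.imag     # attenuation constant, Np/m
    print(n, alpha, 8.686 * alpha, "dB/m")  # 8.686 converts Np to dB
    ```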

  11. 78 FR 33869 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... systems to execute Stock/Option Orders,\\7\\ Stock/Complex Orders,\\8\\ and the option components of such... Change Amending Exchange Rule 6.91 To Remove Provisions Governing How the Complex Matching Engine Handles Electronic Complex Orders That Contain a Stock Leg May 30, 2013. Pursuant to Section 19(b)(1) \\1\\ of the...

  12. Complex Sentence Comprehension and Working Memory in Children With Specific Language Impairment

    PubMed Central

    Montgomery, James W.; Evans, Julia L.

    2015-01-01

    Purpose This study investigated the association of 2 mechanisms of working memory (phonological short-term memory [PSTM], attentional resource capacity/allocation) with the sentence comprehension of school-age children with specific language impairment (SLI) and 2 groups of control children. Method Twenty-four children with SLI, 18 age-matched (CA) children, and 16 language- and memory-matched (LMM) children completed a nonword repetition task (PSTM), the competing language processing task (CLPT; resource capacity/allocation), and a sentence comprehension task comprising complex and simple sentences. Results (1) The SLI group performed worse than the CA group on each memory task; (2) all 3 groups showed comparable simple sentence comprehension, but for complex sentences, the SLI and LMM groups performed worse than the CA group; (3) for the SLI group, (a) CLPT correlated with complex sentence comprehension, and (b) nonword repetition correlated with simple sentence comprehension; (4) for CA children, neither memory variable correlated with either sentence type; and (5) for LMM children, only CLPT correlated with complex sentences. Conclusions Comprehension of both complex and simple grammar by school-age children with SLI is a mentally demanding activity, requiring significant working memory resources. PMID:18723601

  13. Evaluating Treatment and Generalization Patterns of Two Theoretically Motivated Sentence Comprehension Therapies

    PubMed Central

    Des Roches, Carrie A.; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David

    2016-01-01

    Purpose The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Method Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Results Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Conclusions Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type. PMID:27997950

  14. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  15. Three-dimensional wideband electromagnetic modeling on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.

    1996-01-01

    A method is presented for modeling the wideband, frequency domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permittivity and magnetic permeability. The use of the modified form of the Helmholtz equation allows perfectly matched layer (PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations for which the matrix is sparse and complex symmetric. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) method with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning due to its stability. In order to simulate larger, more realistic models than was previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme is correctly able to model the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries, while below this, normal (real) grid stretching can be employed.
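
    The complex grid stretching mentioned above has a compact one-dimensional analogue. The sketch below (Python; all parameters are illustrative assumptions, not the paper's values) stretches the coordinate into the complex plane near each boundary, which damps outgoing waves and acts as a PML; the small system is solved directly, whereas the paper uses preconditioned QMR on its large 3-D systems.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, L = 801, 1.0                      # grid points, domain length (assumed)
k = 2 * np.pi * 20 / L               # 20 wavelengths across the domain
h = L / (n - 1)
npml = 60                            # cells in each absorbing layer

# Complex stretch s(x) = 1 + i*sigma(x)/k with a quadratic ramp inside the PML.
sigma = np.zeros(n)
ramp = np.linspace(0.0, 1.0, npml) ** 2
sigma[:npml], sigma[-npml:] = 150.0 * ramp[::-1], 150.0 * ramp
s = 1.0 + 1j * sigma / k

# Discretize (1/s) d/dx[(1/s) du/dx] + k^2 u = f on the interior nodes.
smid = 0.5 * (s[:-1] + s[1:])        # stretch factor at half-grid points
lo = 1.0 / (s[1:-1] * h**2 * smid[:-1])
up = 1.0 / (s[1:-1] * h**2 * smid[1:])
A = sp.diags([lo[1:], -(lo + up) + k**2, up[:-1]], [-1, 0, 1], format='csc')

b = np.zeros(n - 2, dtype=complex)
b[(n - 2) // 2] = 1.0 / h            # point source at the domain center

u = spla.spsolve(A, b)               # small 1-D system: a direct solve suffices
print("boundary/peak field ratio:", abs(u[0]) / abs(u).max())  # should be << 1
```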

  16. A novel approach to characterize information radiation in complex networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyang; Wang, Ying; Zhu, Lin; Li, Chao

    2016-06-01

    The traditional research on information dissemination is mostly based on virus spreading models in which information is spread with some probability; this does not match reality very well, because the information that we receive is always more or less than what was sent. In order to quantitatively describe variations in the amount of information during the spreading process, this article proposes a safety information radiation model on the basis of communication theory, combined with relevant theories of complex networks. This model comprehensively considers the various influence factors when safety information radiates in the network, and introduces some concepts from the communication theory perspective, such as the radiation gain function, receiving gain function, information retaining capacity and information second reception capacity, to describe the safety information radiation process between nodes and dynamically investigate the states of network nodes. On a micro level, this article analyzes the influence of various initial conditions and parameters on safety information radiation through simulation of the new model. The simulation reveals that this novel approach can reflect the variation of safety information quantity of each node in the complex network, and that the scale-free network has better "radiation explosive power", while the small-world network has better "radiation staying power". The results also show that it is efficient to improve the overall performance of network security by selecting nodes with high degrees as the information source, refining and simplifying the information, increasing the information second reception capacity and decreasing the noise. In short, this article lays the foundation for further research on the interactions of information and energy between internal components within complex systems.

  17. Factors That Influence Running Intensity in Interchange Players in Professional Rugby League.

    PubMed

    Delaney, Jace A; Thornton, Heidi R; Duthie, Grant M; Dascombe, Ben J

    2016-11-01

    Rugby league coaches adopt replacement strategies for their interchange players to maximize running intensity; however, it is important to understand the factors that may influence match performance. This study assessed the independent factors affecting the running intensity sustained by interchange players during professional rugby league. Global positioning system (GPS) data were collected from all interchanged players (starters and nonstarters) in a professional rugby league squad across 24 matches of a National Rugby League season. A multilevel mixed-model approach was employed to establish the effect of various technical (attacking and defensive involvements), temporal (bout duration, time in possession, etc), and situational (season phase, recovery cycle, etc) factors on the relative distance covered and average metabolic power (Pmet) during competition. Significant effects were standardized using correlation coefficients, and the likelihood of the effect was described using magnitude-based inferences. Superior intermittent running ability resulted in very likely large increases in both relative distance and Pmet. As the length of a bout increased, both measures of running intensity exhibited a small decrease. There were at least likely small increases in running intensity for matches played after short recovery cycles and against strong opposition. During a bout, the number of collision-based involvements increased running intensity, whereas time in possession and ball time out of play decreased demands. These data demonstrate a complex interaction of individual- and match-based factors that require consideration when developing interchange strategies; manipulating training loads during shorter recovery periods and against stronger opponents may be beneficial.

  18. Time-Series Analysis of Embodied Interaction: Movement Variability and Complexity Matching As Dyadic Properties

    PubMed Central

    Zapata-Fonseca, Leonardo; Dotov, Dobromir; Fossion, Ruben; Froese, Tom

    2016-01-01

    There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time-series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e., elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching, defined as the similarity between these scaling laws, was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity matching results from dyadic conversation to non-verbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges against the background of complex dyadic interactions in the perceptual crossing task and makes joint successful performance possible. PMID:28018274
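
    As a rough illustration of the Allan Factor analysis described above, the following sketch counts salient events in windows of size T, computes AF(T) = ⟨(N_{j+1} − N_j)²⟩ / (2⟨N_j⟩), fits a log-log scaling exponent, and scores complexity matching as the similarity of the two exponents in a dyad. The synthetic event times and window range are assumptions for demonstration only.

```python
import numpy as np

def allan_factor(event_times, T):
    """AF(T) = <(N_{j+1} - N_j)^2> / (2 <N_j>) for counting windows of size T."""
    edges = np.arange(0.0, event_times.max() + T, T)
    counts, _ = np.histogram(event_times, bins=edges)
    return np.mean(np.diff(counts) ** 2) / (2.0 * counts.mean() + 1e-12)

def scaling_exponent(event_times, windows):
    """Slope of log AF(T) vs. log T: the fractal scaling law of the series."""
    af = np.array([allan_factor(event_times, T) for T in windows])
    slope, _ = np.polyfit(np.log(windows), np.log(af), 1)
    return slope

# Stand-ins for the salient-event times of the two participants in a dyad.
rng = np.random.default_rng(0)
a = np.cumsum(rng.exponential(0.1, 5000))
b = np.cumsum(rng.exponential(0.1, 5000))
windows = np.geomspace(0.1, 50.0, 12)

# Complexity matching: closeness of the two scaling exponents (compare against
# exponent differences for surrogate, re-paired dyads in a full analysis).
match = -abs(scaling_exponent(a, windows) - scaling_exponent(b, windows))
print("complexity-matching score:", match)
```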

  19. Quantum Matching Theory (with new complexity-theoretic, combinatorial and topical insights on the nature of the quantum entanglement)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvits, L.

    2002-01-01

    Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of a positive operator, central in Quantum Theory, is a natural generalization of matrices with nonnegative entries. Based on this point of view, we introduce a definition of perfect Quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes beyond matroids. For separable bipartite quantum states this new notion coincides with the full rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to Quantum Entanglement. Among many other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with Quantum Entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.
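
    For readers unfamiliar with the classical side of this correspondence: the permanent of a 0/1 biadjacency matrix counts the perfect matchings of the associated bipartite graph. The short sketch below computes a permanent with Ryser's inclusion-exclusion formula; the quantum permanent of the abstract generalizes this object to positive operators and is not implemented here.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** (n - k) * np.prod(A[:, cols].sum(axis=1))
    return total

# Permanent of a 0/1 biadjacency matrix counts perfect matchings:
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(permanent(B))   # 2.0: this bipartite graph has two perfect matchings
```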

  20. Adaptive matched filter spatial detection performance on standard imagery from a wideband VHF/UHF SAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, M.R.; Phillips, S.A.; Sofianos, D.J.

    1994-12-31

    The adaptive matched filter was implemented as a spatial detector for amplitude-only or complex images, and applied to an image formed by standard narrow band means from a wide angle, wideband radar. Direct performance comparisons were made between different implementations and various matched and mismatched cases by using a novel approach to generate ROC curves parametrically. For perfectly matched cases, performance using imaged targets was found to be significantly lower than potential performance of artificial targets whose features differed from the background. Incremental gain due to whitening the background was also found to be small, indicating little background spatial correlation. It is conjectured that the relatively featureless behavior in both targets and background is due to the image formation process, since this technique averages together all wide angle, wideband information. For mismatched cases where the signature was unknown, the amplitude detector losses were approximately equal to whatever gain over noncoherent integration that matching provided. However, the complex detector was generally very sensitive to unknown information, especially phase, and produced much larger losses. Whitening under these mismatched conditions produced further losses. Detector choice thus depends primarily on how reproducible target signatures are, especially if phase is used, and the subsequent number of stored signatures necessary to account for various imaging aspect angles.
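
    A minimal numerical sketch of the adaptive matched filter statistic may help fix ideas: the background covariance is estimated from target-free training data (the whitening step discussed above), and the detector computes |sᴴR̂⁻¹x|² / (sᴴR̂⁻¹s) for an image chip x against a stored signature s. Dimensions and data below are synthetic assumptions.

```python
import numpy as np

def amf_statistic(x, s, R_inv):
    """Adaptive matched filter: |s^H R^-1 x|^2 / (s^H R^-1 s)."""
    num = np.abs(np.vdot(s, R_inv @ x)) ** 2        # vdot conjugates its 1st arg
    den = np.real(np.vdot(s, R_inv @ s))
    return num / den

rng = np.random.default_rng(1)
d = 16                                              # pixels per vectorized chip

# Estimate background covariance from target-free training chips (whitening).
train = rng.standard_normal((500, d)) + 1j * rng.standard_normal((500, d))
R_hat = train.conj().T @ train / train.shape[0]
R_inv = np.linalg.inv(R_hat)

s = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # stored signature
x_h1 = 3.0 * s + train[0]                           # chip with a matched target
x_h0 = train[1]                                     # background-only chip
print(amf_statistic(x_h1, s, R_inv) > amf_statistic(x_h0, s, R_inv))  # True
```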

  1. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of matching is one of the main factors restricting the method's popularization. To make the whole matching process more efficient, we propose a preprocessing step before matching: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform (SIFT) features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
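
    The minimum-spanning-tree idea in step (3) reduces the candidate match list from all n(n−1)/2 image pairs to n−1 high-relatedness pairs. A hedged sketch, assuming the relatedness matrix has already been computed from the k-d forest and pHash stages:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# relatedness[i, j] in (0, 1): assumed precomputed from k-d-forest SIFT match
# counts fused with pHash similarity; random values stand in here.
rng = np.random.default_rng(2)
relatedness = rng.random((6, 6))
relatedness = (relatedness + relatedness.T) / 2     # symmetrize

# Higher relatedness should mean a cheaper edge, so invert before the MST.
cost = 1.0 - relatedness
np.fill_diagonal(cost, 0.0)                         # zeros = no self edges

mst = minimum_spanning_tree(cost).toarray()
pairs = list(zip(*np.nonzero(mst)))
print("image pairs to match:", pairs)               # n-1 pairs, not n(n-1)/2
```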

  2. Hdr Imaging for Feature Detection on Detailed Architectural Scenes

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

    3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that demand 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images can help these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and thereby increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied both on standard dynamic range and HDR images.
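
    A small sketch of this workflow, using OpenCV's Mertens exposure fusion as a calibration-free stand-in for full HDR merging (the study's exact pipeline may differ; file names are placeholders): fuse a bracketed exposure stack, then compare feature counts against a single mid-exposure frame.

```python
import cv2
import numpy as np

# Bracketed exposures of the same architectural scene (placeholder paths).
paths = ["under.jpg", "mid.jpg", "over.jpg"]
exposures = [cv2.imread(p) for p in paths]

# Mertens exposure fusion: an HDR-like composite that needs neither camera
# response calibration nor exposure times.
fused = cv2.createMergeMertens().process(exposures)        # float32 in [0, 1]
fused8 = np.clip(fused * 255, 0, 255).astype(np.uint8)

sift = cv2.SIFT_create()
for name, img in [("LDR", exposures[1]), ("fused", fused8)]:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kp = sift.detect(gray, None)
    print(name, "keypoints:", len(kp))   # the fused image typically yields more
```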

  3. A developmental study of proverb comprehension.

    PubMed

    Resnick, D A

    1982-09-01

    Growth in proverb comprehension was hypothesized to result from the gradual emergence of cognitive abilities reflected in a sequence of increasingly complex abilities: story matching, transfer of relations, desymbolization, proverb matching, and paraphrase. Items for these abilities for each of 10 proverbs of two structural types were administered in three test sessions to 438 students in grades three to seven. An analogy subtest was used to measure general intelligence. ANOVA yielded significant main effects for grade, tasks, and proverbs (all ps < .01). A significant task × proverb interaction (p < .01) revealed the difficulty of precise control over the language of the items. Proverb structure had no measurable impact on difficulty. Analogy score was a significant factor in performance (p < .01) but not as potent as age (p < .01). The sequential order of abilities received only weak confirmation, though tasks did correlate among themselves with medium strength (rs = .50-.70). Individual interviews added a qualitative dimension to the findings. The suitability of cognitive hierarchical models for proverb comprehension was questioned.

  4. Local gradient Gabor pattern (LGGP) with applications in face recognition, cross-spectral matching, and soft biometrics

    NASA Astrophysics Data System (ADS)

    Chen, Cunjian; Ross, Arun

    2013-05-01

    Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding Gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance due to the proposed descriptor is comparable against state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.
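
    A simplified stand-in for this kind of descriptor is sketched below: filter with a small Gabor bank, take gradient orientations of each response, and pool them into grid-cell histograms. The exact LGGP encoding differs; kernel and grid parameters here are assumptions.

```python
import cv2
import numpy as np

def gabor_gradient_descriptor(gray, n_orient=4, grid=4, bins=8):
    """Gabor bank -> per-response gradient-orientation histograms on a grid
    (a simplified LGGP-style face descriptor)."""
    feats = []
    for i in range(n_orient):
        kern = cv2.getGaborKernel((21, 21), 4.0, np.pi * i / n_orient,
                                  10.0, 0.5, 0, ktype=cv2.CV_32F)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        gx = cv2.Sobel(resp, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(resp, cv2.CV_32F, 0, 1)
        ang = np.arctan2(gy, gx)                    # local gradient orientation
        h, w = ang.shape
        for r in range(grid):
            for c in range(grid):
                cell = ang[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
                hist, _ = np.histogram(cell, bins=bins, range=(-np.pi, np.pi))
                feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)

# Matching: nearest neighbor (or a CRC scheme, as in the paper) on descriptors.
```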

  5. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  6. First-harmonic nonlinearities can predict unseen third-harmonics in medium-amplitude oscillatory shear (MAOS)

    NASA Astrophysics Data System (ADS)

    Carey-De La Torre, Olivia; Ewoldt, Randy H.

    2018-02-01

    We use first-harmonic MAOS nonlinearities from G₁′ and G₁″ to test a predictive structure-rheology model for a transient polymer network. Using experiments with PVA-Borax (polyvinyl alcohol cross-linked by sodium tetraborate (borax)) at 11 different compositions, the model is calibrated to first-harmonic MAOS data on a torque-controlled rheometer at a fixed frequency, and used to predict third-harmonic MAOS on a displacement-controlled rheometer at a different frequency three times larger. The prediction matches experiments for decomposed MAOS measures [e₃] and [v₃] with median disagreement of 13% and 25%, respectively, across all 11 compositions tested. This supports the validity of this model, and demonstrates the value of using all four MAOS signatures to understand and test structure-rheology relations for complex fluids.
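
    The raw ingredient behind such MAOS measures is the projection of the measured stress waveform onto its first and third harmonics. A hedged sketch with a synthetic signal (not PVA-Borax data):

```python
import numpy as np

# Synthetic medium-amplitude response: first plus a weak third harmonic.
omega, cycles, n = 1.0, 10, 4096
t = np.linspace(0, 2 * np.pi * cycles / omega, n, endpoint=False)
stress = 1.0 * np.sin(omega * t + 0.4) + 0.03 * np.sin(3 * omega * t + 1.1)

def harmonic(signal, t, k, omega):
    """Complex Fourier coefficient of the k-th harmonic over whole cycles."""
    return 2.0 * np.mean(signal * np.exp(-1j * k * omega * t))

I1 = harmonic(stress, t, 1, omega)
I3 = harmonic(stress, t, 3, omega)
print("relative third-harmonic intensity I3/I1:", abs(I3) / abs(I1))  # ~0.03
```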

  7. Space Laboratory on a Table Top: A Next Generative ECLSS design and diagnostic tool

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically, it aims to build a high-fidelity tabletop model that can be used for the purpose of risk mitigation, failure mode analysis, contamination tracking, and testing reliability. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. It envisions a 10% scale transparent model of a space platform such as the International Space Station that operates with water or a specific matched index of refraction liquid as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow to a 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude for this length scale dictates model velocities to be 67% of full scale, and thereby the time scale of the model to be 15% of the full-scale system, meaning identical processes in the model are completed in 15% of the full-scale time. The use of an index matching fluid (a fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model (with complex internal geometry) transparent and hence conducive to non-intrusive optical diagnostics. Using such a system, one can test environment control parameters such as core flows (axial flows) and cross flows (from registers and diffusers), examine potential problem areas such as flow short circuits, inadequate oxygen content, or build-up of other gases beyond desirable levels, test mixing processes within the system at local nodes or compartments, and assess the overall system performance. The system allows quantitative measurements of contaminants introduced in the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration change and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next generation life support systems and configurations. Preliminary experimental and modeling work in this area will be presented. This involves testing of a single inlet-exit model with detailed 3-D flow visualization and quantitative diagnostics and computational modeling of the system.
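
    The quoted 67% and 15% figures follow directly from Reynolds-number matching between air at full scale and water in the 1/10-scale model; a quick check using approximate handbook kinematic viscosities:

```python
# Reynolds matching: V_m * L_m / nu_water = V_f * L_f / nu_air
nu_air, nu_water = 1.5e-5, 1.0e-6    # kinematic viscosities, m^2/s (approx.)
length_ratio = 0.10                   # 10 ft room -> 1 ft tabletop model

velocity_ratio = (1 / length_ratio) * (nu_water / nu_air)
time_ratio = length_ratio / velocity_ratio   # T ~ L / V

print(f"model velocity: {velocity_ratio:.0%} of full scale")   # ~67%
print(f"model time scale: {time_ratio:.0%} of full scale")     # ~15%
```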

  8. Color-Matching and Blending-Effect of Universal Shade Bulk-Fill-Resin-Composite in Resin-Composite-Models and Natural Teeth.

    PubMed

    Abdelraouf, Rasha M; Habib, Nour A

    2016-01-01

    Objectives. To assess visually the color-matching and blending-effect (BE) of a universal shade bulk-fill resin-composite placed in resin-composite models with different shades and cavity sizes and in natural teeth (extracted and patients' teeth). Materials and Methods. Resin-composite discs (10 mm × 1 mm) were prepared of universal shade composite and resin-composites of shades A1, A2, A3, A3.5, and A4. Spectrophotometric color measurement was performed to calculate the color difference (ΔE) between the universal shade and shaded resin-composite discs and to determine their translucency parameter (TP). Visual assessment was performed by seven normal-color-vision observers to determine the color-matching between the universal shade and each shade, under Illuminant D65. Color-matching visual scores (VS) were expressed numerically (1-5): 1: mismatch/totally unacceptable, 2: poor match/hardly acceptable, 3: good match/acceptable, 4: close match/small difference, and 5: exact match/no color difference. Occlusal cavities of different sizes were prepared in teeth-like resin-composite models with shades A1, A2, A3, A3.5, and A4. The cavities were filled by the universal shade composite. The same scale was used to score color-matching between the fillings and composite models. BE was calculated as the difference between mean visual scores in models and those of discs. Extracted teeth with two different class I cavity sizes as well as ten patients' lower posterior molars with occlusal caries were prepared, filled by universal shade composite, and assessed similarly. Results. In models, the universal shade composite showed close matching in the different cavity sizes and surrounding shades (4 ≤ VS < 5) (BE = 0.6-2.9 in small cavities and 0.5-2.8 in large cavities). In extracted teeth, there was good-to-close color-matching (VS = 3.7-4.4, BE = 2.5-3.2 in small cavities; VS = 3-3.5, BE = 1.8-2.3 in large cavities). In patients' molars, the universal shade composite showed good matching (VS = 3-3.3, BE = -0.9-2.1). Conclusions. Color-matching of the universal shade resin-composite was satisfactory rather than perfect in patients' teeth.
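
    For reference, the spectrophotometric color difference used above is, in its simplest CIE76 form, the Euclidean distance in CIELAB space. A minimal sketch with made-up L*a*b* readings (not the study's data):

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

universal = (72.0, 1.5, 18.0)   # illustrative L*, a*, b* readings
shade_a2  = (70.5, 2.0, 20.0)
# Values below ~3.3 are often cited as clinically acceptable in dental studies.
print(delta_e_cie76(universal, shade_a2))   # ~2.55
```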

  9. Seeing the Wood for the Trees: Applying the dual-memory system model to investigate expert teachers' observational skills in natural ecological learning environments

    NASA Astrophysics Data System (ADS)

    Stolpe, Karin; Björklund, Lars

    2012-01-01

    This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two separate memory systems: the implicit, non-conscious, non-declarative system and the explicit, conscious, declarative system. This model provided the starting point for the research design. However, it was revised from the empirical findings supported by new theoretical insights. The teachers were video and audio recorded during their excursion and interviewed in a stimulated recall setting afterwards. The data were qualitatively analysed using the dual-memory system model. The results show that the teachers used holistic pattern recognition in their own identification of natural objects. However, teachers' main strategy to teach this ability is to give the students explicit rules or specific characteristics. According to the dual-memory system model the holistic pattern recognition is processed in the implicit memory system as a non-conscious match with earlier experienced situations. We suggest that this implicit pattern matching serves as an explanation for teachers' ecological and teaching observational skills. Another function of the implicit memory system is its ability to control automatic behaviour and non-conscious decision-making. The teachers offer the students firsthand sensory experiences which provide a prerequisite for the formation of implicit memories that provides a foundation for expertise.

  10. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  11. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection

    PubMed Central

    Ren, Yudan

    2018-01-01

    Abstract We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection. PMID:29354682

  12. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained.

  13. Mathematical models of behavior of individual animals.

    PubMed

    Tsibulsky, Vladimir L; Norman, Andrew B

    2007-01-01

    This review is focused on mathematical modeling of behaviors of a whole organism with special emphasis on models with a clearly scientific approach to the problem that helps to understand the mechanisms underlying behavior. The aim is to provide an overview of old and contemporary mathematical models without complex mathematical details. Only deterministic and stochastic, but not statistical models are reviewed. All mathematical models of behavior can be divided into two main classes. First, models that are based on the principle of teleological determinism assume that subjects choose the behavior that will lead them to a better payoff in the future. Examples are game theories and operant behavior models both of which are based on the matching law. The second class of models are based on the principle of causal determinism, which assume that subjects do not choose from a set of possibilities but rather are compelled to perform a predetermined behavior in response to specific stimuli. Examples are perception and discrimination models, drug effects models and individual-based population models. A brief overview of the utility of each mathematical model is provided for each section.
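
    Since the abstract singles out the matching law, a minimal sketch of its generalized form may be useful: behavior ratios match reinforcement ratios, modulated by sensitivity and bias parameters (parameter values below are illustrative).

```python
def generalized_matching(r1, r2, sensitivity=1.0, bias=1.0):
    """Generalized matching law: B1/B2 = bias * (r1/r2)**sensitivity,
    returned as the proportion of behavior allocated to option 1."""
    ratio = bias * (r1 / r2) ** sensitivity
    return ratio / (1.0 + ratio)

print(generalized_matching(30, 10))         # strict matching: 0.75
print(generalized_matching(30, 10, 0.8))    # undermatching, common empirically
```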

  14. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE PAGES

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...

    2017-08-18

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs such as deep saline formations that function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  15. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs such as deep saline formations that function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  16. Structural adjustment for accurate conditioning in large-scale subsurface systems

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2017-03-01

    Most current subsurface simulation approaches consider a priority list for honoring the well and any other auxiliary data, and eventually adopt a middle ground between the quality of the model and its conditioning to hard data. However, as the number of datasets increases, such methods often produce undesirable features in the subsurface model. Due to their high flexibility, subsurface modeling based on training images (TIs) is becoming popular. Providing comprehensive TIs remains, however, an outstanding problem. In addition, identifying a pattern similar to those in the TI that honors the well and other conditioning data is often difficult. Moreover, current subsurface modeling approaches do not account for small perturbations that may occur in a subsurface system. Such perturbations are active in most depositional systems. In this paper, a new methodology is presented that is based on an irregular gridding scheme that accounts for incomplete TIs and minor offsets. The methodology enables one to use a small or incomplete TI and adaptively change the patterns in the simulation grid in order to simultaneously honor the well data and take into account the effect of the local offsets. Furthermore, the proposed method was applied to various complex process-based models, whose structures were deformed to match the conditioning point data. The accuracy and robustness of the proposed algorithm are successfully demonstrated by applying it to models of several complex examples.

  17. Solving the aerodynamics of fungal flight: how air viscosity slows spore motion.

    PubMed

    Fischer, Mark W F; Stolze-Rybczynski, Jessica L; Davis, Diana J; Cui, Yunluan; Money, Nicholas P

    2010-01-01

    Viscous drag causes the rapid deceleration of fungal spores after high-speed launches and limits discharge distance. Stokes' law posits a linear relationship between drag force and velocity. It provides an excellent fit to experimental measurements of the terminal velocity of free-falling spores and other instances of low Reynolds number motion (Re<1). More complex, non-linear drag models have been devised for movements characterized by higher Re, but their effectiveness for modeling the launch of fast-moving fungal spores has not been tested. In this paper, we use data on spore discharge processes obtained from ultra-high-speed video recordings to evaluate the effects of air viscosity predicted by Stokes' law and a commonly used non-linear drag model. We find that discharge distances predicted from launch speeds by Stokes' model provide a much better match to measured distances than estimates from the more complex drag model. Stokes' model works better over a wide range of projectile sizes, launch speeds, and discharge distances, from microscopic mushroom ballistospores discharged at <1 m s⁻¹ over a distance of <0.1 mm (Re<1.0), to macroscopic sporangia of Pilobolus that are launched at >10 m s⁻¹ and travel as far as 2.5 m (Re>100). Copyright © 2010 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
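
    Under Stokes' law the equation of motion m·dv/dt = −6πμrv integrates to an exponential decay with relaxation time τ = m/(6πμr) = 2ρr²/(9μ), so the total travel distance of a launched spore is simply v₀τ. With plausible ballistospore-like numbers (density and radius below are assumptions), this reproduces the sub-0.1 mm range quoted above:

```python
import numpy as np

mu = 1.8e-5          # dynamic viscosity of air, Pa*s
rho = 1100.0         # assumed spore density, kg/m^3
r = 2.5e-6           # assumed ballistospore radius, m
v0 = 1.0             # launch speed, m/s

m = (4 / 3) * np.pi * r**3 * rho
tau = m / (6 * np.pi * mu * r)       # Stokes relaxation time = 2*rho*r^2/(9*mu)
distance = v0 * tau                  # integral of v0*exp(-t/tau), t from 0 to inf

print(f"tau = {tau*1e6:.1f} us, range = {distance*1e3:.3f} mm")   # < 0.1 mm
```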

  18. CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2015-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.

  19. Gene expression profiling via LongSAGE in a non-model plant species: a case study in seeds of Brassica napus

    PubMed Central

    Obermeier, Christian; Hosseini, Bashir; Friedt, Wolfgang; Snowdon, Rod

    2009-01-01

    Background Serial analysis of gene expression (LongSAGE) was applied for gene expression profiling in seeds of oilseed rape (Brassica napus ssp. napus). The usefulness of this technique for detailed expression profiling in a non-model organism was demonstrated for the highly complex, neither fully sequenced nor annotated genome of B. napus by applying a tag-to-gene matching strategy based on Brassica ESTs and the annotated proteome of the closely related model crucifer A. thaliana. Results Transcripts from 3,094 genes were detected at two time-points of seed development, 23 days and 35 days after pollination (DAP). Differential expression showed a shift from gene expression involved in diverse developmental processes including cell proliferation and seed coat formation at 23 DAP to more focussed metabolic processes including storage protein accumulation and lipid deposition at 35 DAP. The most abundant transcripts at 23 DAP were coding for diverse protease inhibitor proteins and proteases, including cysteine proteases involved in seed coat formation and a number of lipid transfer proteins involved in embryo pattern formation. At 35 DAP, transcripts encoding napin, cruciferin and oleosin storage proteins were most abundant. Over both time-points, 18.6% of the detected genes were matched by Brassica ESTs identified by LongSAGE tags in antisense orientation. This suggests a strong involvement of antisense transcript expression in regulatory processes during B. napus seed development. Conclusion This study underlines the potential of transcript tagging approaches for gene expression profiling in Brassica crop species via EST matching to annotated A. thaliana genes. Limits of tag detection for low-abundance transcripts can today be overcome by ultra-high throughput sequencing approaches, so that tag-based gene expression profiling may soon become the method of choice for global expression profiling in non-model species. PMID:19575793
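
    The tag-to-gene matching strategy can be pictured as indexing candidate 21-bp tags, anchored at CATG (NlaIII) sites, from EST sequences on both strands (so antisense hits are detected), then looking up each observed tag. A toy sketch with hypothetical EST identifiers and sequences:

```python
def extract_tags(seq, tag_len=21):
    """All candidate LongSAGE tags: a CATG anchor plus downstream bases."""
    tags, pos = [], seq.find("CATG")
    while pos != -1:
        tag = seq[pos:pos + tag_len]
        if len(tag) == tag_len:
            tags.append(tag)
        pos = seq.find("CATG", pos + 1)
    return tags

def revcomp(seq):
    return seq[::-1].translate(str.maketrans("ACGT", "TGCA"))

# Index tags from (hypothetical) ESTs on both strands; in this toy index a
# tag seen twice is simply overwritten, ignoring ambiguity handling.
ests = {"BnEST001": "GGCATGAAATTTGGGCCCAAATTTCCC",
        "BnEST002": "TTCATGCCGGTTAACCGGTTAACCAA"}
index = {}
for est_id, seq in ests.items():
    for strand, s in (("sense", seq), ("antisense", revcomp(seq))):
        for tag in extract_tags(s):
            index[tag] = (est_id, strand)

observed = "CATGAAATTTGGGCCCAAATT"        # a 21-bp tag from sequencing
print(index.get(observed))               # ('BnEST001', 'sense')
```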

  20. Receptive fields of locust brain neurons are matched to polarization patterns of the sky.

    PubMed

    Bech, Miklós; Homberg, Uwe; Pfeiffer, Keram

    2014-09-22

    Many animals, including insects, are able to use celestial cues as a reference for spatial orientation and long-distance navigation [1]. In addition to direct sunlight, the chromatic gradient of the sky and its polarization pattern are suited to serve as orientation cues [2-5]. Atmospheric scattering of sunlight causes a regular pattern of E vectors in the sky, which are arranged along concentric circles around the sun [5, 6]. Although certain insects rely predominantly on sky polarization for spatial orientation [7], it has been argued that detection of celestial E vector orientation may not suffice to differentiate between solar and antisolar directions [8, 9]. We show here that polarization-sensitive (POL) neurons in the brain of the desert locust Schistocerca gregaria can overcome this ambiguity. Extracellular recordings from POL units in the central complex and lateral accessory lobes revealed E vector tunings arranged in concentric circles within large receptive fields, matching the sky polarization pattern at certain solar positions. Modeling of neuronal responses under an idealized sky polarization pattern (Rayleigh sky) suggests that these "matched filter" properties allow locusts to unambiguously determine the solar azimuth by relying solely on the sky polarization pattern for compass navigation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Transplantation of iPS-Derived Tumor Cells with a Homozygous MHC Haplotype Induces GRP94 Antibody Production in MHC-Matched Macaques.

    PubMed

    Ishigaki, Hirohito; Maeda, Toshinaga; Inoue, Hirokazu; Akagi, Tsuyoshi; Sasamura, Takako; Ishida, Hideaki; Inubushi, Toshiro; Okahara, Junko; Shiina, Takashi; Nakayama, Misako; Itoh, Yasushi; Ogasawara, Kazumasa

    2017-11-01

    Immune surveillance is a critical component of the antitumor response in vivo, yet the specific components of the immune system involved in this regulatory response remain unclear. In this study, we demonstrate that autoantibodies can mitigate tumor growth in vitro and in vivo. We generated two cancer cell lines, embryonal carcinoma and glioblastoma cell lines, from monkey induced pluripotent stem cells (iPSC) carrying a homozygous haplotype of the major histocompatibility complex (MHC; Mafa in Macaca fascicularis). To establish a monkey cancer model, we transplanted these cells into monkeys carrying the matched Mafa haplotype in one of the chromosomes. Neither Mafa-homozygous cancer cell line grew in monkeys carrying the matched Mafa haplotype heterozygously. We detected in the plasma of these monkeys an IgG autoantibody against GRP94, a heat shock protein. Injection of the plasma prevented growth of the tumor cells in immunodeficient mice, whereas plasma IgG depleted of GRP94 IgG exhibited reduced killing activity against cancer cells in vitro. These results indicate that humoral immunity, including autoantibodies against GRP94, plays a role in cancer immune surveillance. Cancer Res; 77(21); 6001-10. ©2017 AACR.

  2. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars (with no further information) in a picture, you may build a virtual image in your mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image can be extracted by using a difference of Gaussian (DOG) model that simulates the spatiotemporal response in the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
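
    The two processing stages named above, DOG edge extraction and phase correlation matching, can be sketched compactly; the component-decision logic and template decomposition are omitted, and the data here are synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(img, s1=1.0, s2=2.0):
    """Difference-of-Gaussians band-pass, a rough model of retinal edge response."""
    return gaussian_filter(img, s1) - gaussian_filter(img, s2)

def phase_correlate(template, image):
    """Peak of the normalized cross-power spectrum locates the template."""
    F1 = np.fft.fft2(template, image.shape)      # zero-pad to image size
    F2 = np.fft.fft2(image)
    cps = F2 * F1.conj()
    cps /= np.abs(cps) + 1e-12
    corr = np.real(np.fft.ifft2(cps))
    return np.unravel_index(corr.argmax(), corr.shape), corr.max()

# A target would be declared when every key component template (e.g. cabin,
# wheels) produces a sufficiently strong peak in the edge image.
img = np.random.rand(128, 128)
tmpl = img[40:60, 50:80]                         # pretend component template
loc, score = phase_correlate(dog_edges(tmpl), dog_edges(img))
print(loc, round(score, 3))                      # expect loc near (40, 50)
```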

  3. Using Dispersed Modes During Model Correlation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Hathcock, Megan L.

    2017-01-01

    The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.

  4. Observed and simulated ground motions in the San Bernardino basin region for the Hector Mine, California, earthquake

    USGS Publications Warehouse

    Graves, R.W.; Wald, D.J.

    2004-01-01

    During the MW 7.1 Hector Mine earthquake, peak ground velocities recorded at sites in the central San Bernardino basin region were up to 2 times larger and had significantly longer durations of strong shaking than sites just outside the basin. To better understand the effects of 3D structure on the long-period ground-motion response in this region, we have performed finite-difference simulations for this earthquake. The simulations are numerically accurate for periods of 2 sec and longer and incorporate the detailed spatial and temporal heterogeneity of source rupture, as well as complex 3D basin structure. Here, we analyze three models of the San Bernardino basin: model A (with structural constraints from gravity and seismic reflection data), model F (water well and seismic refraction data), and the Southern California Earthquake Center version 3 model (hydrologic and seismic refraction data). Models A and F are characterized by a gradual increase in sediment thickness toward the south with an abrupt step-up in the basement surface across the San Jacinto fault. The basin structure in the SCEC version 3 model has a nearly uniform sediment thickness of 1 km with little basement topography along the San Jacinto fault. In models A and F, we impose a layered velocity structure within the sediments based on the seismic refraction data and an assumed depth-dependent Vp/Vs ratio. Sediment velocities within the SCEC version 3 model are given by a smoothly varying rule-based function that is calibrated to the seismic refraction measurements. Due to computational limitations, the minimum shear-wave velocity is fixed at 600 m/sec in all of the models. Ground-motion simulations for both models A and F provide a reasonably good match to the amplitude and waveform characteristics of the recorded motions. In these models, surface waves are generated as energy enters the basin through the gradually sloping northern margin. Due to the basement step along the San Jacinto fault, the surface wave energy is confined to the region north of this structure, consistent with the observations. The SCEC version 3 model, lacking the basin geometry complexity present in the other two models, fails to provide a satisfactory match to the characteristics of the observed motions. Our study demonstrates the importance of using detailed and accurate basin geometry for predicting ground motions and also highlights the utility of integrating geological, geophysical, and seismological observations in the development and validation of 3D velocity models.

  5. Working memory subsystems and task complexity in young boys with Fragile X syndrome.

    PubMed

    Baker, S; Hooper, S; Skinner, M; Hatton, D; Schaaf, J; Ornstein, P; Bailey, D

    2011-01-01

    Working memory problems have been targeted as core deficits in individuals with Fragile X syndrome (FXS); however, there have been few studies that have examined working memory in young boys with FXS, and even fewer studies that have studied the working memory performance of young boys with FXS across different degrees of complexity. The purpose of this study was to investigate the phonological loop and visual-spatial working memory in young boys with FXS, in comparison to mental age-matched typical boys, and to examine the impact of complexity of the working memory tasks on performance. The performance of young boys (7 to 13 years old) with FXS (n = 40) was compared with that of mental age and race matched typically developing boys (n = 40) on measures designed to test the phonological loop and the visuospatial sketchpad across low, moderate and high degrees of complexity. Multivariate analyses were used to examine group differences across the specific working memory systems and degrees of complexity. Results suggested that boys with FXS showed deficits in phonological loop and visual-spatial working memory tasks when compared with typically developing mental age-matched boys. For the boys with FXS, the phonological loop was significantly lower than the visual-spatial sketchpad; however, there was no significant difference in performance across the low, moderate and high degrees of complexity in the working memory tasks. Reverse tasks from both the phonological loop and visual-spatial sketchpad appeared to be the most challenging for both groups, but particularly for the boys with FXS. These findings implicate a generalised deficit in working memory in young boys with FXS, with a specific disproportionate impairment in the phonological loop. Given the lack of differentiation on the low versus high complexity tasks, simple span tasks may provide an adequate estimate of working memory until greater involvement of the central executive is achieved. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  6. Working memory subsystems and task complexity in young boys with Fragile X syndrome

    PubMed Central

    Baker, S.; Hooper, S.; Skinner, M.; Hatton, D.; Schaaf, J.; Ornstein, P.; Bailey, D.

    2011-01-01

    Background Working memory problems have been targeted as core deficits in individuals with Fragile X syndrome (FXS); however, there have been few studies that have examined working memory in young boys with FXS, and even fewer studies that have studied the working memory performance of young boys with FXS across different degrees of complexity. The purpose of this study was to investigate the phonological loop and visual–spatial working memory in young boys with FXS, in comparison to mental age-matched typical boys, and to examine the impact of complexity of the working memory tasks on performance. Methods The performance of young boys (7 to 13 years old) with FXS (n = 40) was compared with that of mental age and race matched typically developing boys (n = 40) on measures designed to test the phonological loop and the visuospatial sketchpad across low, moderate and high degrees of complexity. Multivariate analyses were used to examine group differences across the specific working memory systems and degrees of complexity. Results Results suggested that boys with FXS showed deficits in phonological loop and visual–spatial working memory tasks when compared with typically developing mental age-matched boys. For the boys with FXS, the phonological loop was significantly lower than the visual–spatial sketchpad; however, there was no significant difference in performance across the low, moderate and high degrees of complexity in the working memory tasks. Reverse tasks from both the phonological loop and visual–spatial sketchpad appeared to be the most challenging for both groups, but particularly for the boys with FXS. Conclusions These findings implicate a generalised deficit in working memory in young boys with FXS, with a specific disproportionate impairment in the phonological loop. Given the lack of differentiation on the low versus high complexity tasks, simple span tasks may provide an adequate estimate of working memory until greater involvement of the central executive is achieved. PMID:21121991

  7. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    NASA Astrophysics Data System (ADS)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims to enrich the contents of spatial entities and to attach spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding descriptive text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk are learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model pulls matching pair vectors as close together as possible, while pushing mismatched pairs as far apart as possible through supervised learning. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
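
    A minimal sketch of the pairwise training objective described above, assuming a PyTorch environment; the encoder architecture, the 300-dimensional input features and the margin are illustrative placeholders, not the authors' DSMLM configuration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        # Shared two-tower encoder: the spatial-entity attributes and the
        # text-chunk attributes are embedded with the same weights
        # (Siamese architecture).
        encoder = nn.Sequential(
            nn.Linear(300, 128),   # 300-d input features (hypothetical size)
            nn.ReLU(),
            nn.Linear(128, 64),    # 64-d low-dimensional representation
        )

        # CosineEmbeddingLoss pulls matching pairs (label +1) toward cosine
        # similarity 1 and pushes mismatches (label -1) below the margin.
        loss_fn = nn.CosineEmbeddingLoss(margin=0.2)
        optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

        # One training step on a toy batch of eight random feature pairs.
        entity_feats = torch.randn(8, 300)
        chunk_feats = torch.randn(8, 300)
        labels = torch.tensor([1, 1, -1, -1, 1, -1, 1, -1], dtype=torch.float)

        loss = loss_fn(encoder(entity_feats), encoder(chunk_feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # At inference time the matching degree is the cosine similarity
        # between the two learned embeddings.
        score = F.cosine_similarity(encoder(entity_feats), encoder(chunk_feats))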

  8. Fluid Dynamics of Clap-and-Fling with Highly Flexible Wings inspired by the Locomotion of Sea Butterflies

    NASA Astrophysics Data System (ADS)

    Zhou, Zhuoyu; Shoele, Kourosh; Adhikari, Deepak; Yen, Jeannette; Webster, Donald; Mittal, Rajat; Johns Hopkins University Team; Georgia Institute of Technology Team

    2015-11-01

    This study is motivated by the locomotion of sea butterflies (L. helicina), which propel themselves in the water column using highly flexible wing-like parapodia. These animals execute a complex clap-and-fling with their highly flexible wings that is different from that of insects, and the fluid dynamics of which is not well understood. We use two models to study the fluid dynamics of these wings. In the first, we use prescribed wing kinematics that serve as a model of those observed for these animals. The second model is a fluid-structure interaction model where wing-like parapodia are modeled as flexible but inextensible membranes. The membrane properties, such as bending and stretching stiffness, are modified such that the corresponding motion qualitatively matches the kinematics of L. helicina. Both models are used to examine the fluid dynamics of the clap-and-fling and its effectiveness in generating lift for these animals. Acknowledgement - research is supported by a grant from NSF.

  9. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to showcase novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete directly from a sports broadcast video. We propose a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The outcomes of this research can play an important role in enhancing the performance of a golfer, provide vital information to sports medicine practitioners by offering technically sound guidance on movements, and should help diminish the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
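
    The three ingredients of the hybrid tracker can be combined roughly as in the sketch below, assuming OpenCV; the fusion logic, the stick-model mapping, the template coordinates and the file name "swing.mp4" are illustrative, not the authors' implementation.

        import cv2

        cap = cv2.VideoCapture("swing.mp4")            # hypothetical input clip
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        bg = cv2.createBackgroundSubtractorMOG2()      # background subtraction
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=7)
        template = prev_gray[100:140, 200:240]         # hypothetical head patch

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            fg_mask = bg.apply(frame)                  # moving-foreground mask

            # Pyramidal Lucas-Kanade flow tracks the feature points.
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)

            # Normalised correlation re-locates the template every frame.
            res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(res)

            prev_gray = gray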

  10. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

    To develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure for patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method visually compares to more complex state-of-the-art filters such as the block-matching three-dimensional (BM3D) filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
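
    For orientation, the core of plain nonlocal means looks as follows in a toy NumPy form; the paper's regularity measure, compactly supported kernel and oracle step are deliberately not reproduced here.

        import numpy as np

        def nlm_denoise(img, patch=3, window=5, h=0.1):
            """Plain nonlocal means on a small 2D float image (toy version)."""
            pad = patch // 2
            padded = np.pad(img, pad, mode="reflect")
            out = np.zeros_like(img)
            rows, cols = img.shape
            for i in range(rows):
                for j in range(cols):
                    ref = padded[i:i + patch, j:j + patch]
                    weights, values = [], []
                    for m in range(max(0, i - window), min(rows, i + window + 1)):
                        for n in range(max(0, j - window), min(cols, j + window + 1)):
                            cand = padded[m:m + patch, n:n + patch]
                            d2 = np.mean((ref - cand) ** 2)   # patch distance
                            weights.append(np.exp(-d2 / h ** 2))
                            values.append(img[m, n])
                    w = np.asarray(weights)
                    out[i, j] = np.dot(w, values) / w.sum()
            return out

        # Toy usage: denoise a noisy gradient image.
        clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
        noisy = clean + 0.05 * np.random.default_rng(0).normal(size=(32, 32))
        denoised = nlm_denoise(noisy)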

  11. Unsplit complex frequency shifted perfectly matched layer for second-order wave equation using auxiliary differential equations.

    PubMed

    Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing

    2015-12-01

    The complex frequency shifted perfectly matched layer (CFS-PML) can improve the absorbing performance of the PML for nearly grazing incident waves. However, traditional PML and CFS-PML are based on first-order wave equations; thus, they are not suitable for the second-order wave equation. In this paper, an implementation of the CFS-PML for the second-order wave equation is presented using auxiliary differential equations. This method is free of both convolution calculations and third-order temporal derivatives. As an unsplit CFS-PML, it can reduce reflections from nearly grazing incident waves. Numerical experiments show that it has better absorption than typical PML implementations based on the second-order wave equation.
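
    For reference, the CFS-PML rests on the complex coordinate stretch commonly written as (notation standard in the PML literature; the paper's specific auxiliary variables are not reproduced here):

        s(\omega) = \kappa + \frac{d}{\alpha + i\omega}, \qquad
        \frac{\partial}{\partial \tilde{x}} = \frac{1}{s(\omega)} \frac{\partial}{\partial x},

    where d is the damping profile, \kappa \ge 1 stretches the real coordinate, and \alpha > 0 is the frequency shift. Because 1/s(\omega) is frequency dependent, a naive time-domain implementation requires a convolution; the auxiliary-differential-equation approach instead introduces extra fields satisfying ordinary differential equations in time, which is what removes both the convolution and the third-order temporal derivatives mentioned above.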

  12. Evaluation of the class II region of the major histocompatibility complex of the greyhound with the genomic matching technique and sequence-based typing.

    PubMed

    Fliegner, R A; Holloway, S A; Lester, S; McLure, C A; Dawkins, R L

    2008-08-01

    The class II region of the major histocompatibility complex was evaluated in 25 greyhounds by sequence-based typing and the genomic matching technique (GMT). Two new DLA-DRB1 alleles were identified. Twenty-four dogs carried the DLA-DRB1*01201/DQA1*00401/DQB1*01303/DQB1*01701 haplotype, which carries two DQB1 alleles. One haplotype was identified from which DQB1 and DQA1 appeared to be deleted. The GMT enabled detection of DQB1 copy number, discrimination of the different class II haplotypes and the identification of new, possibly biologically relevant polymorphisms.

  13. Comparative effects of traditional Chinese and Western migraine medicines in an animal model of nociceptive trigeminovascular activation.

    PubMed

    Zhao, Yonglie; Martins-Oliveira, Margarida; Akerman, Simon; Goadsby, Peter J

    2018-06-01

    Background Migraine is a highly prevalent and disabling disorder of the brain with limited therapeutic options, particularly for preventive treatment. There is a need to identify novel targets and test their potential efficacy in relevant preclinical migraine models. Traditional Chinese medicines have been used for millennia and may offer avenues for exploration. Methods We evaluated two traditional Chinese medicines, gastrodin and ligustrazine, and compared them to two Western approaches with propranolol and levetiracetam, one effective and one ineffective, in an established in vivo rodent model of nociceptive durovascular trigeminal activation. Results Intravenous gastrodin (30 and 100 mg/kg) significantly inhibited nociceptive dural-evoked neuronal firing in the trigeminocervical complex. Ligustrazine (10 mg/kg) and propranolol (3 mg/kg) also significantly inhibited dural-evoked trigeminocervical complex responses, although the timing of responses of ligustrazine does not match its pharmacokinetic profile. Levetiracetam had no effects on trigeminovascular responses. Conclusion Our data suggest gastrodin has potential as an anti-migraine treatment, whereas ligustrazine seems less promising. Interestingly, in line with clinical trial data, propranolol was effective and levetiracetam not. Exploration of the mechanisms and modelling effects of Chinese traditional therapies offers novel route for drug discovery in migraine.

  14. Perspectives on why digital ecologies matter: combining population genetics and ecologically informed agent-based models with GIS for managing dipteran livestock pests.

    PubMed

    Peck, Steven L

    2014-10-01

    It is becoming clear that handling the inherent complexity found in ecological systems is an essential task for finding ways to control insect pests of tropical livestock such as tsetse flies and Old and New World screwworms. In particular, challenging multivalent management programs, such as Area Wide Integrated Pest Management (AW-IPM), face daunting problems of complexity at multiple spatial scales, ranging from landscape-level processes to those of smaller scales such as the parasite loads of individual animals. Daunting temporal challenges also await resolution, such as matching management time frames to those found on ecological and even evolutionary temporal scales. How does one deal with representing processes with models that involve multiple spatial and temporal scales? Agent-based models (ABM), combined with geographic information systems (GIS), may allow for understanding, predicting and managing pest control efforts in livestock pests. This paper argues that by incorporating digital ecologies in our management efforts, clearer and more informed decisions can be made. I also point out the power of these models in making better predictions in order to anticipate the range of outcomes possible or likely. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.

  15. The relationship between cognition, job complexity, and employment duration in first-episode psychosis.

    PubMed

    Caruana, Emma; Cotton, Susan; Killackey, Eóin; Allott, Kelly

    2015-09-01

    To investigate the relationship between cognition and employment duration in first-episode psychosis (FEP), and to establish whether a "fit" between cognition and job complexity is associated with longer employment duration. This study involved secondary data analysis of a subsample of FEP individuals (n = 65) who participated in a randomized controlled trial comparing Individual Placement and Support plus treatment as usual (TAU) versus TAU alone, over 6 months. A cognitive battery was administered at baseline, and employment duration (hours) and job complexity in the longest held job over 6 months were measured. Factor analysis with promax rotation of the cognitive battery revealed 4 cognitive domains: (a) attention and processing speed; (b) verbal learning and memory; (c) verbal comprehension and fluency; and (d) visual organization and memory (VO&M). The final hierarchical regression model found that VO&M and job complexity independently predicted employment duration in the longest held job; however, the "fit" (or interaction) between VO&M and job complexity was not significant. These findings suggest that VO&M and job complexity are important predictors of employment duration, but it is not necessary to ensure VO&M ability matches job complexity. However, there are limited comparative studies in this area, and other aspects of the person-organization fit perspective may still be useful to optimize vocational outcomes in FEP. (c) 2015 APA, all rights reserved.
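
    A sketch of how such an interaction ("fit") term can be tested in a regression, assuming statsmodels and entirely synthetic data; the variable names are hypothetical and the data are not from the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 65
        df = pd.DataFrame({
            "vom": rng.normal(size=n),                 # VO&M score
            "complexity": rng.integers(1, 6, size=n),  # job complexity rating
        })
        # Synthetic outcome built from main effects only (no interaction).
        df["duration"] = (10 * df["vom"] + 8 * df["complexity"]
                          + rng.normal(scale=5.0, size=n))

        # 'vom * complexity' expands to both main effects plus the interaction.
        model = smf.ols("duration ~ vom * complexity", data=df).fit()
        print(model.pvalues["vom:complexity"])  # significance of the "fit" term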

  16. Comparative Analysis of Mass Spectral Similarity Measures on Peak Alignment for Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry

    PubMed Central

    2013-01-01

    Peak alignment is a critical procedure in mass spectrometry-based biomarker discovery in metabolomics. One peak alignment approach for comprehensive two-dimensional gas chromatography mass spectrometry (GC×GC-MS) data is peak matching-based alignment. A key to peak matching-based alignment is the calculation of mass spectral similarity scores. Various mass spectral similarity measures have been developed, mainly for compound identification, but the effect of these spectral similarity measures on the performance of peak matching-based alignment remains unknown. Therefore, we selected five mass spectral similarity measures, cosine correlation, Pearson's correlation, Spearman's correlation, partial correlation, and part correlation, and examined their effects on peak alignment using two sets of experimental GC×GC-MS data. The results show that the spectral similarity measure does not affect the alignment accuracy significantly in analysis of data from less complex samples, while partial correlation performs much better than the other spectral similarity measures when analyzing experimental data acquired from complex biological samples. PMID:24151524
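
    The three simplest of these similarity measures can be computed as below, assuming NumPy and SciPy on toy intensity vectors; partial and part correlations additionally control for covariate spectra and are omitted here.

        import numpy as np
        from scipy import stats

        a = np.array([0.0, 12.0, 3.5, 40.0, 1.2])   # toy spectrum A intensities
        b = np.array([0.5, 10.0, 4.0, 38.0, 0.0])   # toy spectrum B intensities

        cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        pearson, _ = stats.pearsonr(a, b)
        spearman, _ = stats.spearmanr(a, b)
        print(cosine, pearson, spearman)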

  17. Approximate matching of structured motifs in DNA sequences.

    PubMed

    El-Mabrouk, Nadia; Raffinot, Mathieu; Duchesne, Jean-Eudes; Lajoie, Mathieu; Luc, Nicolas

    2005-04-01

    Several methods have been developed for identifying more or less complex RNA structures in a genome. All these methods are based on the search for conserved primary and secondary sub-structures. In this paper, we present a simple formal representation of a helix, which is a combination of sequence and folding constraints, as a constrained regular expression. This representation allows us to develop a well-founded algorithm that searches for all approximate matches of a helix in a genome. The algorithm is based on an alignment graph constructed from several copies of a pushdown automaton, arranged one on top of another. This is a first attempt to take advantage of the possibilities of pushdown automata in the context of approximate matching. The worst time complexity is O(krpn), where k is the error threshold, n the size of the genome, p the size of the secondary expression, and r its number of union symbols. We then extend the algorithm to search for pseudo-knots and secondary structures containing an arbitrary number of helices.

  18. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    PubMed Central

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search over encrypted data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations. PMID:28692697
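
    The Bloom-filter pruning idea can be illustrated in plaintext form as below; OS2 itself operates on encrypted Bloom filters combined with probabilistic homomorphic encryption, which this sketch deliberately leaves out.

        import hashlib

        class BloomFilter:
            def __init__(self, m=1024, k=4):
                self.m, self.k, self.bits = m, k, bytearray(m)

            def _positions(self, item):
                # Double hashing: derive k bit positions from two base hashes.
                digest = hashlib.sha256(item.encode()).digest()
                h1 = int.from_bytes(digest[:8], "big")
                h2 = int.from_bytes(digest[8:16], "big")
                return [(h1 + i * h2) % self.m for i in range(self.k)]

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p] = 1

            def __contains__(self, item):
                # No false negatives; false positives occur with small probability.
                return all(self.bits[p] for p in self._positions(item))

        bf = BloomFilter()
        for token in ["cloud", "storage", "privacy"]:
            bf.add(token)
        print("cloud" in bf, "absent-token" in bf)   # True, almost surely False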

  19. [Formula: see text]: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain.

    PubMed

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem; Khan, Wajahat Ali

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search over encrypted data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search ([Formula: see text]) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, [Formula: see text] ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables [Formula: see text] to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of [Formula: see text] is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations.

  20. Beyond neutral and forbidden links: morphological matches and the assembly of mutualistic hawkmoth-plant networks.

    PubMed

    Sazatornil, Federico D; Moré, Marcela; Benitez-Vieyra, Santiago; Cocucci, Andrea A; Kitching, Ian J; Schlumpberger, Boris O; Oliveira, Paulo E; Sazima, Marlies; Amorim, Felipe W

    2016-11-01

    A major challenge in evolutionary ecology is to understand how co-evolutionary processes shape patterns of interactions between species at community level. Pollination of flowers with long corolla tubes by long-tongued hawkmoths has been invoked as a showcase model of co-evolution. Recently, optimal foraging models have predicted that there might be a close association between mouthparts' length and the corolla depth of the visited flowers, thus favouring trait convergence and specialization at community level. Here, we assessed whether hawkmoths more frequently pollinate plants with floral tube lengths similar to their proboscis lengths (morphological match hypothesis) against abundance-based processes (neutral hypothesis) and ecological trait mismatches constraints (forbidden links hypothesis), and how these processes structure hawkmoth-plant mutualistic networks from five communities in four biogeographical regions of South America. We found convergence in morphological traits across the five communities and that the distribution of morphological differences between hawkmoths and plants is consistent with expectations under the morphological match hypothesis in three of the five communities. In the two remaining communities, which are ecotones between two distinct biogeographical areas, interactions are better predicted by the neutral hypothesis. Our findings are consistent with the idea that diffuse co-evolution drives the evolution of extremely long proboscises and flower tubes, and highlight the importance of morphological traits, beyond the forbidden links hypothesis, in structuring interactions between mutualistic partners, revealing that the role of niche-based processes can be much more complex than previously known. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.

  1. Analysis of ground state in random bipartite matching

    NASA Astrophysics Data System (ADS)

    Shi, Gui-Yuan; Kong, Yi-Xiu; Liao, Hao; Zhang, Yi-Cheng

    2016-02-01

    Bipartite matching problems emerge in many human social phenomena. In this paper, we study the ground state of the Gale-Shapley model, which is the most popular bipartite matching model. We apply the Kuhn-Munkres algorithm to compute the numerical ground state of the model. For the first time, we obtain the number of blocking pairs, which is a measure of the system's instability. We also show that the number of blocking pairs formed by each person follows a geometric distribution. Furthermore, we study how the connectivity in bipartite matching problems influences the instability of the ground state.
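
    A small sketch of the computation described above, assuming SciPy's Kuhn-Munkres implementation (linear_sum_assignment) and random preference lists; the cost construction below is one plausible choice, not necessarily the authors' exact formulation.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)
        n = 50
        # rank_m[i, j]: man i's rank of woman j; rank_w[j, i]: woman j's rank
        # of man i (0 = most preferred), drawn as random permutations.
        rank_m = rng.random((n, n)).argsort(axis=1).argsort(axis=1)
        rank_w = rng.random((n, n)).argsort(axis=1).argsort(axis=1)

        cost = rank_m + rank_w.T                   # total dissatisfaction
        rows, cols = linear_sum_assignment(cost)   # ground-state matching
        match_of_m = cols
        match_of_w = np.empty(n, dtype=int)
        match_of_w[cols] = rows

        # A pair (i, j) is blocking if both prefer each other to their partners.
        blocking = 0
        for i in range(n):
            for j in range(n):
                if (j != match_of_m[i]
                        and rank_m[i, j] < rank_m[i, match_of_m[i]]
                        and rank_w[j, i] < rank_w[j, match_of_w[j]]):
                    blocking += 1
        print("blocking pairs:", blocking)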

  2. Geographic Mosaic of Plant Evolution: Extrafloral Nectary Variation Mediated by Ant and Herbivore Assemblages

    PubMed Central

    Nogueira, Anselmo; Rey, Pedro J.; Alcántara, Julio M.; Feitosa, Rodrigo M.; Lohmann, Lúcia G.

    2015-01-01

    Herbivory is an ecological process that is known to generate different patterns of selection on defensive plant traits across populations. Studies on this topic could greatly benefit from the general framework of the Geographic Mosaic Theory of Coevolution (GMT). Here, we hypothesize that herbivory represents a strong pressure for extrafloral nectary (EFN) bearing plants, with differences in herbivore and ant visitor assemblages leading to different evolutionary pressures among localities and ultimately to differences in EFN abundance and function. In this study, we investigate this hypothesis by analyzing 10 populations of Anemopaegma album (30 individuals per population) distributed through ca. 600 km of Neotropical savanna and covering most of the geographic range of this plant species. A common garden experiment revealed a phenotypic differentiation in EFN abundance, in which field and experimental plants showed a similar pattern of EFN variation among populations. We also did not find significant correlations between EFN traits and ant abundance, herbivory and plant performance across localities. Instead, a more complex pattern of ant–EFN variation, a geographic mosaic, emerged throughout the geographical range of A. album. We modeled the functional relationship between EFNs and ant traits across ant species and extended this phenotypic interface to characterize local situations of phenotypic matching and mismatching at the population level. Two distinct types of phenotypic matching emerged throughout populations: (1) a population with smaller ants (Crematogaster crinosa) matched with low abundance of EFNs; and (2) seven populations with bigger ants (Camponotus species) matched with higher EFN abundances. Three matched populations showed the highest plant performance and narrower variance of EFN abundance, representing potential plant evolutionary hotspots. Cases of mismatched and matched populations with the lowest performance were associated with abundant and highly detrimental herbivores. Our findings provide insights on the ecology and evolution of plant–ant guarding systems, and suggest new directions to research on facultative mutualistic interactions at wide geographic scales. PMID:25885221

  3. Negotiating Practice Research

    ERIC Educational Resources Information Center

    Julkunen, Ilse; Uggerhoj, Lars

    2016-01-01

    The complexity of carrying out practice research in social service organizations is often matched by the complexity of teaching future social work practitioners to use and engage in practice research. To appreciate the scope of the teaching challenge, it is important to reflect on the evolving definition of practice research and issues involved in…

  4. Field-scale Prediction of Enhanced DNAPL Dissolution Using Partitioning Tracers and Flow Pattern Effects

    NASA Astrophysics Data System (ADS)

    Wang, F.; Annable, M. D.; Jawitz, J. W.

    2012-12-01

    The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.

  5. Genotyping and interpretation of STR-DNA: Low-template, mixtures and database matches-Twenty years of research and development.

    PubMed

    Gill, Peter; Haned, Hinda; Bleka, Oyvind; Hansson, Oskar; Dørum, Guro; Egeland, Thore

    2015-09-01

    The introduction of Short Tandem Repeat (STR) DNA was a revolution within a revolution that transformed forensic DNA profiling into a tool that could be used, for the first time, to create National DNA databases. This transformation would not have been possible without the concurrent development of fluorescent automated sequencers, combined with the ability to multiplex several loci together. Use of the polymerase chain reaction (PCR) increased the sensitivity of the method to enable the analysis of a handful of cells. The first multiplexes were simple: 'the quad', introduced by the defunct UK Forensic Science Service (FSS) in 1994, rapidly followed by a more discriminating 'six-plex' (Second Generation Multiplex) in 1995 that was used to create the world's first national DNA database. The success of the database rapidly outgrew the functionality of the original system - by the year 2000 a new multiplex of ten loci was introduced to reduce the chance of adventitious matches. The technology was adopted world-wide, albeit with different loci. The political requirement to introduce pan-European databases encouraged standardisation - the development of the European Standard Set (ESS) of markers comprising twelve loci is the latest iteration. Although development has been impressive, the methods used to interpret evidence have lagged behind. For example, the theory to interpret complex DNA profiles (low-level mixtures) was developed fifteen years ago, but only in the past year or so are the concepts starting to be widely adopted. A plethora of different models (some commercial and others non-commercial) have appeared. This has led to a confusing 'debate' about which is 'best' to use. The different models available are described along with their advantages and disadvantages. A section discusses the development of national DNA databases, along with details of an associated controversy over estimating the strength of evidence of matches. Current methodology is limited to searches of complete profiles - another example where the interpretation of matches has not kept pace with development of theory. STRs have also transformed the area of Disaster Victim Identification (DVI), which frequently requires kinship analysis. However, genotyping efficiency is complicated by complex, degraded DNA profiles. Finally, there is now a detailed understanding of the causes of stochastic effects that cause DNA profiles to exhibit the phenomena of drop-out and drop-in, along with artefacts such as stutters. The phenomena discussed include: heterozygote balance; stutter; degradation; the effect of decreasing quantities of DNA; the dilution effect. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has strongly changed in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). In this paper we focus on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow relies on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for model texturing. Several software packages are available, whether web-based, open source or commercial. The main advantage of this photogrammetric or computer vision based technology is to obtain, at the same time, a point cloud (the resolution depends on the size of the pixel on the object) and therefore an accurate meshed object with its texture. After matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but in addition with radiometric information for textures. The discussion in this paper reviews recording and important processing steps such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).

  7. Document Image Parsing and Understanding using Neuromorphic Architecture

    DTIC Science & Technology

    2015-03-01

    Methods were developed to reduce the processing time at different layers. In the pattern matching layer, the computing power of multicore processors is exploited to accelerate matching. Complex data is reduced to abstract representations in a cortex-like layer, and each abstract representation is compared to stored patterns in a massively parallel fashion.

  8. Reduced cost and mortality using home telehealth to promote self-management of complex chronic conditions: a retrospective matched cohort study of 4,999 veteran patients.

    PubMed

    Darkins, Adam; Kendall, Stephen; Edmonson, Ellen; Young, Michele; Stressel, Pamela

    2015-01-01

    This retrospective analysis of 2009-2012 Veterans Health Administration (VHA) administrative data assessed the efficacy of care coordination home telehealth (CCHT), a model of care designed to reduce institutional care. Outcomes for 4,999 CCHT-non-institutional care (NIC) patients were compared with usual (non-CCHT) care in a matched cohort group (MCG) of 183,872 Veterans. Both cohorts were comprised of patients with complex chronic conditions with statistically similar baseline (pre-CCHT enrollment) healthcare costs, when adjusted for age, sex, chronic disease, emergency room (ER) visits, hospital admissions, hospital lengths of stay, and pharmacy costs. Subsequent analyses after 12 months of CCHT-NIC enrollment showed mean annual healthcare costs for CCHT-NIC patients fell 4%, from $21,071 to $20,206, whereas the corresponding costs for MCG patients increased 48%, from $20,937 to $31,055. Higher mean annual pharmacy expenditure of 22% ($470 over baseline) for CCHT-NIC patients versus 15% for MCG patients ($326 over baseline) was attributable to the medication compliance effect of better care coordination. Several healthcare cost drivers (e.g., ER visits and admissions) had sizable declines in the CCHT-NIC group. Medicare usage review in both cohorts excluded this as a confounding factor in cost analyses. Prefinal case selection criteria analysis of both cohorts yielded a 9.8% mortality rate in CCHT patients versus 16.58% in non-CCHT patients. This study corroborates previous positive VHA analyses of CCHT but contradicts results from recent non-VHA studies, highlighting the efficacy of the VHA's standardized CCHT model, which incorporates a biopsychosocial approach to care that emphasizes patient self-management.

  9. A digital matched filter for reverse time chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J. Phillip, E-mail: mchamilton@auburn.edu; Beal, Aubrey N.; Dean, Robert N.

    2016-07-15

    The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in a hardware description language and demonstrates performance in agreement with numerical results.
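
    The digital realization amounts to an FIR filter whose coefficients are the time-reversed basis pulse; a toy NumPy illustration is below, with a Hanning window standing in for the chaotic system's closed-form basis pulse, which the paper derives.

        import numpy as np

        rng = np.random.default_rng(0)
        basis = np.hanning(32)        # stand-in for the linear basis pulse
        coeffs = basis[::-1]          # matched filter = time-reversed pulse

        # Toy received waveform: the pulse buried in noise at sample 100.
        signal = rng.normal(scale=0.5, size=256)
        signal[100:132] += basis

        output = np.convolve(signal, coeffs, mode="same")
        print("detection peak near sample:", np.argmax(output))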

  10. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method is proposed based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) to solve the problem of matching the same target across multiple cameras. The target foreground is extracted using two rounds of frame differencing, and a bounding box regarded as the target region is computed. Extremal regions are detected with MSER. After being fitted to ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are obtained by requiring the ratio of the closest descriptor distance to the second-closest distance to fall below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate the method can reduce computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination.
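
    The pipeline described above maps onto OpenCV primitives roughly as follows; the ellipse fitting and unit-circle normalization steps are skipped, with SIFT descriptors computed directly at the MSER keypoints, so this is a simplified sketch rather than the published method. The input file names are hypothetical.

        import cv2
        import numpy as np

        img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical views
        img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

        mser = cv2.MSER_create()      # region detector
        sift = cv2.SIFT_create()      # descriptor

        kp1, des1 = sift.compute(img1, mser.detect(img1))
        kp2, des2 = sift.compute(img2, mser.detect(img2))

        # Lowe-style ratio test: closest distance / second distance < 0.75.
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # RANSAC on a homography eliminates outlier correspondences.
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)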

  11. Probabilistic model for quick detection of dissimilar binary images

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2015-09-01

    We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected with high confidence by matching only a few points between two images, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images. As this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image size invariant, so images of any size will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.
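
    The 11-point figure is consistent with the simplest independence assumption: if each sampled point of a distinct-dissimilar pair agrees by chance with probability 1/2, the probability that all k sampled points agree is 2^{-k}, so

        P(\text{false match}) = 2^{-11} = \frac{1}{2048} \approx 4.9 \times 10^{-4},

    giving a successful detection rate of roughly 99.95%, i.e. above the 99.9% quoted above. (This is a back-of-envelope reading of the model, not the paper's full derivation.)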

  12. Modularization of gradient-index optical design using wavefront matching enabled optimization.

    PubMed

    Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H

    2016-05-02

    This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.

  13. An efficient way of layout processing based on calibre DRC and pattern matching for defects inspection application

    NASA Astrophysics Data System (ADS)

    Li, Helen; Lee, Robben; Lee, Tyzy; Xue, Teddy; Liu, Hermes; Wu, Hall; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang

    2018-03-01

    As technology advances, escalating layout design complexity and chip size make defect inspection more challenging than ever before. YE (Yield Enhancement) engineers are seeking an efficient strategy that ensures accuracy without sacrificing running time. A smart approach is to set different resolutions for different pattern structures; for example, logic pattern areas get a higher scan resolution while dummy areas get a lower one, and SRAM areas may get yet another resolution. This can significantly reduce the scan processing time while accuracy does not suffer. Due to the limitations of the inspection equipment, the layout must be processed in order to output the Care Area marker in line with the requirements of the equipment; for instance, the marker shapes must be rectangles and the number of rectangle shapes should be as small as possible. The challenge is how to select the different Care Areas by pattern structure, merge the areas efficiently and then partition them into pieces of rectangular shape. This paper presents a solution based on Calibre DRC and Pattern Matching. Calibre equation-based DRC is a powerful layout processing engine, and Calibre Pattern Matching's automated visual capture capability enables designers to define geometries as layout patterns and store them in libraries which can be re-used in multiple design layouts. Pattern Matching simplifies the description of very complex relationships between pattern shapes efficiently and accurately. Pattern matching's true power is on display when it is integrated with a normal DRC deck. In this defect inspection application, we first run Calibre DRC to get the rule-based Care Area, then use Calibre Pattern Matching's automated pattern capture capability to capture Care Area shapes which need a higher scan resolution, with a tunable pattern halo. In the pattern matching step, when the patterns are matched, a bounding box marker is output to identify the high-resolution area. The equation-based DRC and Pattern Matching effectively work together for different scan phases.

  14. Bayesian deterministic decision making: a normative account of the operant matching law and heavy-tailed reward history dependency of choices.

    PubMed

    Saito, Hiroshi; Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato

    2014-01-01

    The decision making behaviors of humans and animals adapt and then satisfy an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been a landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn the choice probabilities of the respective alternatives and decide stochastically according to those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy or not. To answer this question, we propose several deterministic Bayesian decision making models that have certain incorrect beliefs about the environment. We claim that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward history dependency of a choice and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.
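
    For reference, Herrnstein's matching law for two alternatives states that the fraction of choices allocated to an option matches the fraction of rewards obtained from it,

        \frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2},

    and the generalized form \log(B_1/B_2) = s \, \log(R_1/R_2) + \log b describes undermatching as a sensitivity s < 1, which is the deviation the volatile-environment model above reproduces. (These are the standard textbook statements, not formulas quoted from the paper.)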

  15. Nonlocal continuum electrostatic theory predicts surprisingly small energetic penalties for charge burial in proteins.

    PubMed

    Bardhan, Jaydeep P

    2011-09-14

    We study the energetics of burying charges, ion pairs, and ionizable groups in a simple protein model using nonlocal continuum electrostatics. Our primary finding is that the nonlocal response leads to markedly reduced solvent screening, comparable to the use of application-specific protein dielectric constants. Employing the same parameters as used in other nonlocal studies, we find that for a sphere of radius 13.4 Å containing a single +1e charge, the nonlocal solvation free energy varies less than 18 kcal/mol as the charge moves from the surface to the center, whereas the difference in the local Poisson model is ∼35 kcal/mol. Because an ion pair (salt bridge) generates a comparatively more rapidly varying Coulomb potential, energetics for salt bridges are even more significantly reduced in the nonlocal model. By varying the central parameter in nonlocal theory, which is an effective length scale associated with correlations between solvent molecules, nonlocal-model energetics can be varied from the standard local results to essentially zero; however, the existence of the reduction in charge-burial penalties is quite robust to variations in the protein dielectric constant and the correlation length. Finally, as a simple exploratory test of the implications of nonlocal response, we calculate glutamate pK(a) shifts and find that using standard protein parameters (ε(protein) = 2-4), nonlocal results match local-model predictions with much higher dielectric constants. Nonlocality may, therefore, be one factor in resolving discrepancies between measured protein dielectric constants and the model parameters often used to match titration experiments. Nonlocal models may hold significant promise to deepen our understanding of macromolecular electrostatics without substantially increasing computational complexity. © 2011 American Institute of Physics
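
    For orientation, the local-model benchmark against which such reductions are measured is the Born-type solvation energy of a charge q centered in a sphere of radius a (Gaussian units),

        \Delta G_{\text{Born}} = -\frac{q^2}{2a}\left(1 - \frac{1}{\epsilon}\right),

    which grows steeply as the charge loses contact with the high-dielectric solvent; off-center charges are handled by the Kirkwood series expansion. This is the standard textbook reference point, not a formula taken from the paper itself.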

  16. Clothing Matching for Visually Impaired Persons

    PubMed Central

    Yuan, Shuai; Tian, YingLi; Arditi, Aries

    2012-01-01

    Matching clothes is a challenging task for many blind people. In this paper, we present a proof of concept system to solve this problem. The system consists of 1) a camera connected to a computer to perform pattern and color matching process; 2) speech commands for system control and configuration; and 3) audio feedback to provide matching results for both color and patterns of clothes. This system can handle clothes in deficient color without any pattern, as well as clothing with multiple colors and complex patterns to aid both blind and color deficient people. Furthermore, our method is robust to variations of illumination, clothing rotation and wrinkling. To evaluate the proposed prototype, we collect two challenging databases including clothes without any pattern, or with multiple colors and different patterns under different conditions of lighting and rotation. Results reported here demonstrate the robustness and effectiveness of the proposed clothing matching system. PMID:22523465

  17. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    NASA Astrophysics Data System (ADS)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that the superposition and entanglement of quantum states can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
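
    Classically, the gray-scale-difference criterion amounts to a thresholded sliding-window comparison, as in the sketch below; the quantum scheme evaluates the same criterion in superposition over all candidate windows, which is where the claimed speedup comes from.

        import numpy as np

        def fuzzy_match(reference, template, threshold):
            """Return window origins where no pixel differs by more than threshold."""
            th, tw = template.shape
            rh, rw = reference.shape
            hits = []
            for i in range(rh - th + 1):
                for j in range(rw - tw + 1):
                    window = reference[i:i + th, j:j + tw]
                    if np.max(np.abs(window - template)) <= threshold:
                        hits.append((i, j))
            return hits

        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, size=(64, 64))
        tpl = ref[20:28, 30:38].copy()
        tpl[0, 0] += 3                              # small perturbation still matches
        print(fuzzy_match(ref, tpl, threshold=5))   # contains (20, 30)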

  18. Clothing Matching for Visually Impaired Persons.

    PubMed

    Yuan, Shuai; Tian, Yingli; Arditi, Aries

    2011-01-01

    Matching clothes is a challenging task for many blind people. In this paper, we present a proof of concept system to solve this problem. The system consists of 1) a camera connected to a computer to perform pattern and color matching process; 2) speech commands for system control and configuration; and 3) audio feedback to provide matching results for both color and patterns of clothes. This system can handle clothes in deficient color without any pattern, as well as clothing with multiple colors and complex patterns to aid both blind and color deficient people. Furthermore, our method is robust to variations of illumination, clothing rotation and wrinkling. To evaluate the proposed prototype, we collect two challenging databases including clothes without any pattern, or with multiple colors and different patterns under different conditions of lighting and rotation. Results reported here demonstrate the robustness and effectiveness of the proposed clothing matching system.

  19. VizieR Online Data Catalog: Star formation histories of LG dwarf galaxies (Weisz+, 2014)

    NASA Astrophysics Data System (ADS)

    Weisz, D. R.; Dolphin, A. E.; Skillman, E. D.; Holtzman, J.; Gilbert, K. M.; Dalcanton, J. J.; Williams, B. F.

    2017-03-01

    For this paper, we have selected only dwarf galaxies that are located within the zero-velocity surface of the LG (~1 Mpc; e.g., van den Bergh 2000, The Galaxies of the Local Group (Cambridge: Cambridge Univ. Press); McConnachie 2012, J/AJ/144/4). This definition excludes some dwarfs that have been historically associated with the LG, such as GR8 and IC 5152, but which are located well beyond 1 Mpc. We have chosen to include two galaxies with WFPC2 imaging that are located on the periphery of the LG (Sex A and Sex B), because of their ambiguous association with the LG, the NGC 3109 sub-group, or perhaps neither (although see Bellazzini et al. 2013A&A...559L..11B for discussion of the possible association of these systems). We measured the SFH of each field using the maximum likelihood CMD fitting routine, MATCH (Dolphin 2002MNRAS.332...91D). Briefly, MATCH works as follows: it accepts a range of input parameters (e.g., initial mass function (IMF) slope, binary fraction, age and metallicity bin widths, etc.), uses these parameters to construct synthetic CMDs of simple stellar populations (SSPs), and then linearly combines them with a model foreground CMD to form a composite model CMD with a complex SFH. The composite model CMD is then convolved with the noise model from the artificial star tests (i.e., completeness, photometric uncertainties, and color/magnitude biases). The resulting model CMD is then compared to the observed CMD using a Poisson likelihood statistic. (3 data files).
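
    The Poisson likelihood statistic referred to above takes, for binned CMD counts, the standard form (following Dolphin 2002; written here from the general definition rather than copied from this catalog description)

        \ln \mathcal{L} = \sum_i \left( n_i \ln m_i - m_i - \ln n_i! \right),

    where n_i are the observed and m_i the composite-model counts in CMD bin i; the best-fitting SFH is the nonnegative combination of SSP weights that maximizes this likelihood.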

  20. Job Preferences in the Anticipatory Socialization Phase: A Comparison of Two Matching Models.

    ERIC Educational Resources Information Center

    Moss, Mira K.; Frieze, Irene Hanson

    1993-01-01

    Responses from 86 business administration graduate students tested (1) a model matching self-concept to development of job preferences and (2) an expectancy-value model. Both models significantly predicted job preferences; a higher proportion of variance was explained by the expectancy-value model. (SK)

  1. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108

  2. Simple process-based simulators for generating spatial patterns of habitat loss and fragmentation: a review and introduction to the G-RaFFe model.

    PubMed

    Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjöstrand, Torbjörn; Ask, Stefan; Christiansen, Jesper R.

    The Pythia program is a standard tool for the generation of events in high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multiparticle final state. It contains a library of hard processes, models for initial- and final-state parton showers, matching and merging methods between hard processes and parton showers, multiparton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and several interfaces to external programs. Pythia 8.2 is the second main release after the complete rewrite from Fortran to C++, and has now reached such maturity that it offers a complete replacement for most applications, notably for LHC physics studies. Lastly, the many new features should allow an improved description of data.

  4. Micromechanical response of articular cartilage to tensile load measured using nonlinear microscopy.

    PubMed

    Bell, J S; Christmas, J; Mansfield, J C; Everson, R M; Winlove, C P

    2014-06-01

    Articular cartilage (AC) is a highly anisotropic biomaterial, and its complex mechanical properties have been a topic of intense investigation for over 60 years. Recent advances in the field of nonlinear optics allow the individual constituents of AC to be imaged in living tissue without the need for exogenous contrast agents. Combining mechanical testing with nonlinear microscopy provides a wealth of information about microscopic responses to load. This work investigates the inhomogeneous distribution of strain in loaded AC by tracking the movement and morphological changes of individual chondrocytes using point pattern matching and Bayesian modeling. This information can be used to inform models of mechanotransduction and pathogenesis, and is readily extendable to various other connective tissues. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  5. Neuropsychological tests for predicting cognitive decline in older adults

    PubMed Central

    Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W

    2015-01-01

    Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318

  6. Delinquent-Victim Youth-Adapting a Trauma-Informed Approach for the Juvenile Justice System.

    PubMed

    Rapp, Lisa

    2016-01-01

    The connection between victimization and later delinquency is well established, and most youth involved with the juvenile justice system have at least one, if not multiple, victimizations in their history. Poly-victimized youth, or those presenting with complex trauma, require specialized assessment and services to prevent deleterious emotional, physical, and social life consequences. Empirical studies have provided information which can guide practitioners' work with these youth and families, yet many of the policies and practices of the juvenile justice system run counter to this model. Many youth-serving organizations are beginning to review their operations to better match a trauma-informed approach, and in this article the author highlights how a trauma-informed care model could be utilized to adapt the juvenile justice system.

  7. Sequential cohort design applying propensity score matching to analyze the comparative effectiveness of atorvastatin and simvastatin in preventing cardiovascular events.

    PubMed

    Helin-Salmivaara, Arja; Lavikainen, Piia; Aarnio, Emma; Huupponen, Risto; Korhonen, Maarit Jaana

    2014-01-01

    Sequential cohort design (SCD) applying matching for propensity scores (PS) in accrual periods has been proposed to mitigate bias caused by channeling when calendar time is a proxy for strong confounders. We studied the channeling of patients according to atorvastatin and simvastatin initiation in Finland, starting from the market introduction of atorvastatin in 1998, and explored the SCD PS approach to analyzing the comparative effectiveness of atorvastatin versus simvastatin in the prevention of cardiovascular events (CVE). Initiators of atorvastatin or simvastatin use in the 45-75-year age range in 1998-2006 were characterized by their propensity of receiving atorvastatin over simvastatin, as estimated for 17 six-month periods. Atorvastatin (10 mg) and simvastatin (20 mg) initiators were matched 1:1 on the PS, as estimated for the whole cohort and within each period. Cox regression models were fitted conventionally, and also for the PS matched cohort and the periodically PS matched cohort, to estimate the hazard ratios (HR) for CVEs. Atorvastatin (10 mg) was associated with an 11%-12% lower incidence of CVE in comparison with simvastatin (20 mg). The HR estimates were the same for a conventional Cox model (0.88, 95% confidence interval 0.85-0.91), for the analysis in which the PS was used to match across all periods and the Cox model was adjusted for strong confounders (0.89, 0.85-0.92), and for the analysis in which PS matching was applied within sequential periods (0.88, 0.84-0.92). The HR from a traditional PS matched analysis was 0.80 (0.77-0.83). The SCD PS approach produced effect estimates similar to those obtained in matching for PS within the whole cohort and adjusting the outcome model for strong confounders, but at the cost of efficiency. A traditional PS matched analysis without further adjustment in the outcome model produced estimates further away from unity.
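
    For readers unfamiliar with the technique, the following minimal sketch shows plain 1:1 greedy nearest-neighbour propensity-score matching; the study's sequential-cohort variant would apply the same idea separately within each six-month accrual period. The data frame, column names, and caliper value are all hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def greedy_ps_match(df, treat_col, covariates, caliper=0.05, seed=0):
    """1:1 greedy nearest-neighbour matching on the propensity score.
    Minimal sketch of the general technique, not the study's pipeline."""
    # Propensity score: probability of treatment given covariates.
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df[treat_col]).predict_proba(df[covariates])[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treat_col] == 1].sample(frac=1, random_state=seed)
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        dist = (controls["ps"] - row["ps"]).abs()
        if len(dist) and dist.min() <= caliper:   # enforce caliper
            j = dist.idxmin()
            pairs.append((idx, j))
            controls = controls.drop(j)           # match without replacement
    return pairs

# Synthetic usage with hypothetical columns:
rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(60, 8, 200),
                   "sex": rng.integers(0, 2, 200),
                   "statin_a": rng.integers(0, 2, 200)})
print(len(greedy_ps_match(df, "statin_a", ["age", "sex"])))
```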

  8. Modeling the high-frequency complex modulus of silicone rubber using standing Lamb waves and an inverse finite element method.

    PubMed

    Jonsson, Ulf; Lindahl, Olof; Andersson, Britt

    2014-12-01

    To gain an understanding of the high-frequency elastic properties of silicone rubber, a finite element model of a cylindrical piezoelectric element, in contact with a silicone rubber disk, was constructed. The frequency-dependent elastic modulus of the silicone rubber was modeled by a four-parameter fractional derivative viscoelastic model in the 100 to 250 kHz frequency range. The calculations were carried out in the range of the first radial resonance frequency of the sensor. At the resonance, the hyperelastic effect of the silicone rubber was modeled by a hyperelastic compensating function. The calculated response was matched to the measured response by using the transitional peaks in the impedance spectrum that originates from the switching of standing Lamb wave modes in the silicone rubber. To validate the results, the impedance responses of three 5-mm-thick silicone rubber disks, with different radial lengths, were measured. The calculated and measured transitional frequencies have been compared in detail. The comparison showed very good agreement, with average relative differences of 0.7%, 0.6%, and 0.7% for the silicone rubber samples with radial lengths of 38.0, 21.4, and 11.0 mm, respectively. The average complex elastic moduli of the samples were (0.97 + 0.009i) GPa at 100 kHz and (0.97 + 0.005i) GPa at 250 kHz.
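
    A common four-parameter fractional-derivative (fractional Zener) form of the complex modulus is sketched below; the paper uses a model of this family, though its exact parameterization is not reproduced here and all parameter values are illustrative only.

```python
import numpy as np

def fractional_zener_modulus(freq_hz, e0, e_inf, tau, alpha):
    """Complex modulus of a four-parameter fractional-derivative
    (fractional Zener) viscoelastic model:
        E*(w) = (e0 + e_inf * (i*w*tau)**alpha) / (1 + (i*w*tau)**alpha)
    e0: static modulus, e_inf: high-frequency modulus, tau: relaxation
    time, alpha: fractional order. Values below are illustrative."""
    w = 2 * np.pi * np.asarray(freq_hz)
    s = (1j * w * tau) ** alpha
    return (e0 + e_inf * s) / (1 + s)

f = np.linspace(100e3, 250e3, 4)   # 100-250 kHz, the range in the paper
print(fractional_zener_modulus(f, 0.9e9, 1.1e9, 1e-7, 0.3))
```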

  9. Effects of different boundary conditions on the simulation of groundwater flow in a multi-layered coastal aquifer system (Taranto Gulf, southern Italy)

    NASA Astrophysics Data System (ADS)

    De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio L.

    2017-11-01

    The evaluation of the accuracy or reasonableness of numerical models of groundwater flow is a complex task, due to the uncertainties in hydrodynamic properties and boundary conditions and the scarcity of good-quality field data. To assess model reliability, different calibration techniques are combined to evaluate the effects of different kinds of boundary conditions on the groundwater flow in a coastal multi-layered aquifer in southern Italy. In particular, both direct and indirect approaches for inverse modeling were combined through the calibration of one of the most uncertain parameters, namely the hydraulic conductivity of the karst deep hydrostratigraphic unit. The methodology proposed here, and applied to a real case study, confirmed that the selection of boundary conditions is among the most critical and difficult aspects of the characterization of a groundwater system for conceptual analysis or numerical simulation. The practical tests conducted in this study show that incorrect specification of boundary conditions prevents an acceptable match between the model response to the hydraulic stresses and the behavior of the natural system. Such effects have a negative impact on the applicability of numerical modeling to simulate groundwater dynamics in complex hydrogeological situations. This is particularly important for management of the aquifer system investigated in this work, which represents the only available freshwater resource of the study area, and is threatened by overexploitation and saltwater intrusion.

  10. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  11. Upping the Ante of Text Complexity in the Common Core State Standards: Examining Its Potential Impact on Young Readers

    ERIC Educational Resources Information Center

    Hiebert, Elfrieda H.; Mesmer, Heidi Anne E.

    2013-01-01

    The Common Core Standards for the English Language Arts (CCSS) provide explicit guidelines matching grade-level bands (e.g., 2-3, 4-5) with targeted text complexity levels. The CCSS staircase accelerates text expectations for students across Grades 2-12 in order to close a gap in the complexity of texts typically used in high school and those of…

  12. Multi-body modeling method for rollover using MADYMO

    NASA Astrophysics Data System (ADS)

    Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao

    2017-04-01

    Rollovers are complex road accidents that cause a large number of fatalities. Finite-element (FE) models of rollovers are computationally expensive because of the long duration of the event. A new multi-body modeling method is proposed in this paper that saves considerable time while retaining high fidelity. The following work was carried out to validate the new method. First, a small van was tested following the FMVSS 208 protocol. Second, a MADYMO model of this small van was reconstructed; the vehicle body was divided into two main parts, a deformable upper body and a rigid lower body, each modeled differently based on an FE model. The specific modeling method is described in this paper. Finally, the trajectories of the vehicle from test and simulation were compared and matched very well. The acceleration of the left B-pillar was also considered and fit the test result well over the duration of the event. The final deformation of the vehicle showed a similar trend in test and simulation. This validated model provides a reliable basis for further research into occupant injuries during rollovers.

  13. [Preliminary use of HoloLens glasses in surgery of liver cancer].

    PubMed

    Shi, Lei; Luo, Tao; Zhang, Li; Kang, Zhongcheng; Chen, Jie; Wu, Feiyue; Luo, Jia

    2018-05-28

    Objective: To establish a preoperative three-dimensional (3D) model of liver cancer and to precisely match the preoperative planning with the target organs during the operation.
 Methods: The 3D model reconstruction based on magnetic resonance data, combined with virtual reality technology via HoloLens glasses, was applied in liver cancer surgery to achieve preoperative 3D modeling and surgical planning, and to directly match them with the operative target organs during the operation.
 Results: The 3D model reconstruction of liver cancer based on magnetic resonance data was completed. An exact match with the target organ was achieved during the operation via HoloLens glasses guided by the 3D model.
 Conclusion: Magnetic resonance data can be used for 3D model reconstruction to improve preoperative assessment and accurate matching during the operation.

  14. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  15. Phonemic accuracy development in children with cochlear implants up to five years of age by using Levenshtein distance.

    PubMed

    Faes, Jolien; Gillis, Joris; Gillis, Steven

    2016-01-01

    Phonemic accuracy of children with cochlear implants (CI) is often reported to be lower in comparison with normally hearing (NH) age-matched children. In this study, we compare phonemic accuracy development in the spontaneous speech of Dutch-speaking children with CI and NH age-matched peers. A dynamic cost model of Levenshtein distance is used to compute the accuracy of each word token. We set up a longitudinal design with monthly data for comparisons up to age two and a cross-sectional design with yearly data between three and five years of age. The main finding is that phonemic accuracy steadily increases throughout the period studied. The accuracy of children with CI is lower than that of their NH age mates, but this difference is not statistically significant in the earliest stages of lexical development. However, the accuracy of children with CI initially improves less steeply than that of NH peers. Furthermore, the number of syllables and the complexity of the target word influence children's accuracy, as longer and more complex target words are less accurately produced. Up to age four, children with CI are significantly less accurate than NH children with increasing word length and word complexity. This difference has disappeared by age five. Finally, hearing age is shown to influence accuracy development of children with CI, while age of implant activation is not. This article informs the reader about phonemic accuracy development in children. The reader will be able to (a) discuss different metrics to measure phonemic accuracy development, (b) discuss phonemic accuracy of children with CI up to five years of age and compare them with NH children, (c) discuss the influence of target word complexity and target word syllable length on phonemic accuracy, (d) discuss the influence of hearing experience and age of implantation on phonemic accuracy of children with CI. Copyright © 2015 Elsevier Inc. All rights reserved.
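
    The accuracy metric rests on the Levenshtein distance; a minimal sketch of the standard dynamic-programming algorithm follows (the study's variant weights the edit costs phonemically, which is not reproduced here).

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein distance (unit costs)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Accuracy of a produced word against its target, normalized by length:
target, produced = "banana", "nana"
accuracy = 1 - levenshtein(target, produced) / max(len(target), len(produced))
print(accuracy)
```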

  16. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics.

    PubMed

    Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter

    2012-10-04

    Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.

  17. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    PubMed Central

    2012-01-01

    Background Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. Methods The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923. PMID:23035717

  18. Addressing the unmet need for visualizing conditional random fields in biological data

    PubMed Central

    2014-01-01

    Background The biological world is replete with phenomena that appear to be ideally modeled and analyzed by one archetypal statistical framework - the Graphical Probabilistic Model (GPM). The structure of GPMs is a uniquely good match for biological problems that range from aligning sequences to modeling the genome-to-phenome relationship. The fundamental questions that GPMs address involve making decisions based on a complex web of interacting factors. Unfortunately, while GPMs ideally fit many questions in biology, they are not an easy solution to apply. Building a GPM is not a simple task for an end user. Moreover, applying GPMs is also impeded by the insidious fact that the “complex web of interacting factors” inherent to a problem might be easy to define and also intractable to compute upon. Discussion We propose that the visualization sciences can contribute to many domains of the bio-sciences, by developing tools to address archetypal representation and user interaction issues in GPMs, and in particular a variety of GPM called a Conditional Random Field (CRF). CRFs bring additional power, and additional complexity, because the CRF dependency network can be conditioned on the query data. Conclusions In this manuscript we examine the shared features of several biological problems that are amenable to modeling with CRFs, highlight the challenges that existing visualization and visual analytics paradigms induce for these data, and document an experimental solution called StickWRLD which, while leaving room for improvement, has been successfully applied in several biological research projects. Software and tutorials are available at http://www.stickwrld.org/ PMID:25000815

  19. The trading rectangle strategy within book models

    NASA Astrophysics Data System (ADS)

    Matassini, Lorenzo

    2001-12-01

    We introduce a model of trading where traders interact through the insertion of orders in the book. This matching mechanism is a collection of the activity of agents: they can trade at the market price or place a limit order. The latter is valid until cancelled by the trader; to this end we introduce a threshold in time after which the probability of the order being removed is strongly increased. There is essentially no source of randomness and all the traders share a common strategy, which we call the trading rectangle. Since there are no fundamentalist rules, it is not so important to identify the right moment to enter the market. Much more effort is required to decide when to sell. The model is able to reproduce many of the complex phenomena manifested in real stock markets, including the positive correlation between bid/ask spreads and volatility.
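
    A minimal toy version of such a book model is sketched below, assuming only the mechanism as described: agents either trade at the market price or post limit orders, and orders older than a time threshold are cancelled with sharply increased probability. All numbers are illustrative, not the paper's calibration.

```python
import random

def simulate_book(steps=1000, age_threshold=50, p_cancel=0.5, seed=1):
    """Toy limit-order book with age-dependent cancellation."""
    random.seed(seed)
    price, book = 100.0, []          # book: list of (price, birth_time)
    prices = []
    for t in range(steps):
        if book and random.random() < 0.5:           # market order
            level, _ = book.pop(random.randrange(len(book)))
            price = level                            # trade sets the price
        else:                                        # limit order near price
            book.append((price + random.uniform(-1, 1), t))
        # Orders past the age threshold are removed with high probability.
        book = [(p, b) for (p, b) in book
                if t - b < age_threshold or random.random() > p_cancel]
        prices.append(price)
    return prices

print(simulate_book()[-5:])
```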

  20. Transferable Reactive Force Fields: Extensions of ReaxFF-lg to Nitromethane.

    PubMed

    Larentzos, James P; Rice, Betsy M

    2017-03-09

    Transferable ReaxFF-lg models of nitromethane that predict a variety of material properties over a wide range of thermodynamic states are obtained by screening a library of ∼6600 potentials that were previously optimized through the Multiple Objective Evolutionary Strategies (MOES) approach using a training set that included information for other energetic materials composed of carbon, hydrogen, nitrogen, and oxygen. Models that best match experimental nitromethane lattice constants at 4.2 K and 1 atm are evaluated for transferability to high-pressure states at room temperature and are shown to better predict various liquid- and solid-phase structural, thermodynamic, and transport properties as compared to the existing ReaxFF and ReaxFF-lg parametrizations. Although demonstrated for an energetic material, the library of ReaxFF-lg models is supplied to the scientific community to enable new research explorations of complex reactive phenomena in a variety of materials research applications.

  1. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
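
    As a sketch of the underlying model, the following code estimates the ensemble mean-square displacement of correlated random walks with fixed step length and Gaussian turning angles on level ground; the parameter values are illustrative, not the robots' measured ones.

```python
import numpy as np

def crw_msd(n_walks=500, n_steps=200, step=1.0, turn_sigma=0.4, seed=0):
    """Mean-square displacement of correlated random walks: each step
    has fixed length and a heading that changes by a Gaussian turning
    angle (a large turn_sigma approaches an ordinary RW)."""
    rng = np.random.default_rng(seed)
    turns = rng.normal(0.0, turn_sigma, (n_walks, n_steps))
    headings = np.cumsum(turns, axis=1)          # heading after each turn
    x = np.cumsum(step * np.cos(headings), axis=1)
    y = np.cumsum(step * np.sin(headings), axis=1)
    return (x**2 + y**2).mean(axis=0)            # MSD versus step count

print(crw_msd()[:5])
```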

  2. The Pitch-Matching Ability of High School Choral Students: A Justification for Continued Direct Instruction

    ERIC Educational Resources Information Center

    Riegle, Aaron M.; Gerrity, Kevin W.

    2011-01-01

    The purpose of this study was to determine the pitch-matching ability of high school choral students. Years of piano experience, middle school performance experience, and model were considered as variables that might affect pitch-matching ability. Gender of participants was also considered when identifying the effectiveness of each model.…

  3. Occult White Matter Damage Contributes to Intellectual Disability in Tuberous Sclerosis Complex

    ERIC Educational Resources Information Center

    Yu, Chunshui; Lin, Fuchun; Zhao, Li; Ye, Jing; Qin, Wen

    2009-01-01

    Whether patients with tuberous sclerosis complex (TSC) have brain normal-appearing white matter (NAWM) damage and whether such damage contributes to their intellectual disability were examined in 15 TSC patients and 15 gender- and age-matched healthy controls using diffusion tensor imaging (DTI). Histogram and region of interest (ROI) analyses of…

  4. An age-colour relationship for main-belt S-complex asteroids.

    PubMed

    Jedicke, Robert; Nesvorný, David; Whiteley, Robert; Ivezić Z, Zeljko; Jurić, Mario

    2004-05-20

    Asteroid collisions in the main belt eject fragments that may eventually land on Earth as meteorites. It has therefore been a long-standing puzzle in planetary science that laboratory spectra of the most populous class of meteorite (ordinary chondrites, OC) do not match the remotely observed surface spectra of their presumed (S-complex) asteroidal parent bodies. One of the proposed solutions to this perplexing observation is that 'space weathering' modifies the exposed planetary surfaces over time through a variety of processes (such as solar and cosmic ray bombardment, micro-meteorite bombardment, and so on). Space weathering has been observed on lunar samples, in Earth-based laboratory experiments, and there is good evidence from spacecraft data that the process is active on asteroid surfaces. Here, we present a measurement of the rate of space weathering on S-complex main-belt asteroids using a relationship between the ages of asteroid families and their colours. Extrapolating this age-colour relationship to very young ages yields a good match to the colour of freshly cut OC meteorite samples, lending strong support to a genetic relationship between them and the S-complex asteroids.

  5. Bingo! Externally-Supported Performance Intervention for Deficient Visual Search in Normal Aging, Parkinson’s Disease and Alzheimer’s Disease

    PubMed Central

    Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice

    2011-01-01

    External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally-generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally-supported interventions of increased stimulus size and decreased complexity improved performance in all groups. The AD group also benefited from increased contrast, presumably by compensating for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941

  6. Dynamic discrete tomography

    NASA Astrophysics Data System (ADS)

    Alpers, Andreas; Gritzmann, Peter

    2018-03-01

    We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
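
    Stripped of the tomographic (x-ray) constraints, the core tracking step reduces to an assignment problem between consecutive frames. The sketch below solves it with the Hungarian algorithm as a simplified stand-in for the constrained min-cost-flow formulations analyzed in the paper, which work from projection data rather than full point positions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_frames(pts_t, pts_t1):
    """Assign each point at time t to a point at time t+1 by minimizing
    total squared displacement (Hungarian algorithm)."""
    cost = ((pts_t[:, None, :] - pts_t1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[1.1, 0.1], [0.1, -0.1]])
print(match_frames(a, b))   # [(0, 1), (1, 0)]
```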

  7. Spectroscopy Made Easy: A New Tool for Fitting Observations with Synthetic Spectra

    NASA Technical Reports Server (NTRS)

    Valenti, J. A.; Piskunov, N.

    1996-01-01

    We describe a new software package that may be used to determine stellar and atomic parameters by matching observed spectra with synthetic spectra generated from parameterized atmospheres. A nonlinear least squares algorithm is used to solve for any subset of allowed parameters, which include atomic data (log gf and van der Waals damping constants), model atmosphere specifications (T_eff, log g), elemental abundances, and radial, turbulent, and rotational velocities. LTE synthesis software handles discontiguous spectral intervals and complex atomic blends. As a demonstration, we fit 26 Fe I lines in the NSO Solar Atlas (Kurucz et al.), determining various solar and atomic parameters.
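
    The fitting loop can be illustrated with an ordinary nonlinear least-squares fit. The sketch below fits a Gaussian absorption line to a synthetic spectrum; SME itself synthesizes lines from model atmospheres rather than Gaussians, so this only demonstrates the optimization step.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_line(wl, depth, center, width):
    """Normalized flux with a single Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic "observation" standing in for a real spectrum.
wl = np.linspace(6560, 6570, 200)
rng = np.random.default_rng(0)
obs = gaussian_line(wl, 0.4, 6565.0, 0.8) + rng.normal(0, 0.01, wl.size)

# Solve for (depth, center, width) by minimizing the residuals.
fit = least_squares(lambda p: gaussian_line(wl, *p) - obs,
                    x0=[0.3, 6564.0, 1.0])
print(fit.x)   # approximately [0.4, 6565.0, 0.8]
```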

  8. Accurate modeling and evaluation of microstructures in complex materials

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
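
    The histogram-matching step mentioned above can be illustrated in isolation. The following quantile-mapping sketch maps the gray levels of a reconstruction onto those of the reference image; the paper embeds this step inside an iterative multiscale framework, which is not reproduced here.

```python
import numpy as np

def histogram_match(source, template):
    """Map the gray levels of `source` so its histogram matches that of
    `template` (exact quantile mapping)."""
    s_flat = source.ravel()
    order = np.argsort(s_flat)               # rank of each source pixel
    t_sorted = np.sort(template.ravel())
    # Resample template quantiles to the number of source pixels.
    idx = np.linspace(0, t_sorted.size - 1, s_flat.size).astype(int)
    out = np.empty_like(s_flat, dtype=t_sorted.dtype)
    out[order] = t_sorted[idx]                # rank k gets k-th quantile
    return out.reshape(source.shape)

rng = np.random.default_rng(1)
src = rng.normal(0, 1, (64, 64))
tpl = rng.uniform(0, 255, (128, 128))
matched = histogram_match(src, tpl)
print(matched.min(), matched.max())
```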

  9. Effects of Channel Modification on Detection and Dating of Fault Scarps

    NASA Astrophysics Data System (ADS)

    Sare, R.; Hilley, G. E.

    2016-12-01

    Template matching of scarp-like features could potentially generate morphologic age estimates for individual scarps over entire regions, but data noise and scarp modification limit detection of fault scarps by this method. Template functions based on diffusion in the cross-scarp direction may fail to accurately date scarps near channel boundaries. Where channels reduce scarp amplitudes, or where cross-scarp noise is significant, signal-to-noise ratios decrease and the scarp may be poorly resolved. In this contribution, we explore the bias in morphologic age of a complex scarp produced by systematic changes in fault scarp curvature. For example, fault scarps may be modified by encroaching channel banks and mass failure, lateral diffusion of material into a channel, or undercutting parallel to the base of a scarp. We quantify such biases on morphologic age estimates using a block offset model subject to two-dimensional linear diffusion. We carry out a synthetic study of the effects of two-dimensional transport on morphologic age calculated using a profile model, and compare these results to a well-studied and constrained site along the San Andreas Fault at Wallace Creek, CA. This study serves as a first step towards defining regions of high confidence in template matching results based on scarp length, channel geometry, and near-scarp topography.
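
    The one-dimensional template such studies build on is the linear-diffusion solution for a block-offset scarp: half the offset times an error function, with the morphologic age (kappa times t) setting how smeared the scarp is. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.special import erf

def scarp_profile(x, offset, kappa_t):
    """Degraded elevation profile of a vertical fault scarp under 1-D
    linear diffusion (block-offset initial condition):
        z(x, t) = (offset / 2) * erf(x / (2 * sqrt(kappa * t)))
    kappa_t is the morphologic age kappa*t in m^2."""
    return 0.5 * offset * erf(x / (2.0 * np.sqrt(kappa_t)))

x = np.linspace(-20, 20, 9)                       # metres across the scarp
print(scarp_profile(x, offset=2.0, kappa_t=10.0))
```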

  10. Research on the development of space target detecting system and three-dimensional reconstruction technology

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan

    2016-11-01

    With the development of space technology, the number of spacecraft and debris is increasing year by year. The demand for detection and identification of spacecraft is growing strongly, which supports the cataloguing, crash warning, and protection of aerospace vehicles. The majority of existing approaches to three-dimensional reconstruction are based on scattering centre correlation using the radar high resolution range profile (HRRP). This paper proposes a novel method to reconstruct the three-dimensional scattering centre structure of a target from a sequence of radar ISAR images, which mainly consists of three steps. The first is the azimuth scaling of consecutive ISAR images based on the fractional Fourier transform (FrFT). The second is the extraction of scattering centres and matching between adjacent ISAR images using a grid method. Finally, according to the coordinate matrix of the scattering centres, the three-dimensional scattering centre structure is reconstructed using an improved factorization method. The three-dimensional structure is stable and intuitive, which provides a new way to improve the identification probability and reduce the complexity of the model matching library. A satellite model is reconstructed using the proposed method from four consecutive ISAR images. The simulation results show that the method achieves satisfactory consistency and accuracy.

  11. Measurement and evaluation of the relationships between capillary pressure, relative permeability, and saturation for surrogate fluids for laboratory study of geological carbon sequestration

    NASA Astrophysics Data System (ADS)

    Mori, H.; Trevisan, L.; Sakaki, T.; Cihan, A.; Smits, K. M.; Illangasekare, T. H.

    2013-12-01

    Multiphase flow models can be used to improve our understanding of the complex behavior of supercritical CO2 (scCO2) in deep saline aquifers and to make predictions for stable storage strategies. These models rely on constitutive relationships such as capillary pressure (Pc) - saturation (Sw) and relative permeability (kr) - saturation (Sw) as input parameters. However, for practical application of these models, such relationships for the scCO2 and brine system are not readily available for geological formations. This is because the traditional laboratory methods used to obtain these relationships, which require high-pressure and/or high-temperature controls, are complicated and expensive. A method that has the potential to overcome the difficulty in conducting such experiments is to replicate scCO2 and brine with surrogate fluids that capture the density and viscosity effects to obtain the constitutive relationships under ambient conditions. This study presents an investigation conducted to evaluate this method. An assessment of the method allows us to evaluate the prediction accuracy of multiphase models using the constitutive relationships developed from this approach. With this as a goal, the study reports multiple laboratory column experiments conducted to measure these relationships. The obtained relationships were then used in the multiphase flow simulator TOUGH2 T2VOC to explore capillary trapping mechanisms of scCO2. A comparison of the model simulation to experimental observation was used to assess the accuracy of the measured constitutive relationships. Experimental data confirmed, as expected, that the scaling method cannot be used to obtain the residual and irreducible saturations. The results also showed that the van Genuchten - Mualem model was not able to match the independently measured kr data obtained from column experiments. Simulated results of fluid saturations were compared with saturation measurements obtained using x-ray attenuations. This comparison demonstrated that the experimentally derived constitutive relationships matched the experimental data more accurately than the simulation using constitutive relationships derived from scaling methods and the van Genuchten - Mualem model. However, simulated imbibition fronts did not match well, suggesting the need for further study. In general, the study demonstrated the feasibility of using surrogate fluids to obtain both Pc - Sw and kr - Sw relationships to be used in multiphase models of scCO2 migration and entrapment.
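
    For reference, the van Genuchten - Mualem relationships named above can be written down compactly. The sketch below evaluates Pc(Sw) and kr(Sw); the parameter values are illustrative, and note that the study found this kr form did not match its measured data.

```python
import numpy as np

def van_genuchten_mualem(sw, swr=0.1, alpha=2.0, n=2.5):
    """Capillary pressure Pc(Sw) and relative permeability kr(Sw) from
    the van Genuchten - Mualem model. alpha is in 1/m of head, so Pc is
    returned in metres of head; all parameter values are illustrative."""
    m = 1.0 - 1.0 / n
    se = np.clip((sw - swr) / (1.0 - swr), 1e-9, 1.0)  # effective saturation
    pc = (1.0 / alpha) * (se ** (-1.0 / m) - 1.0) ** (1.0 / n)
    kr = np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    return pc, kr

print(van_genuchten_mualem(np.array([0.3, 0.6, 0.9])))
```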

  12. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  13. A Combined Experimental and Computational Approach to Subject-Specific Analysis of Knee Joint Laxity

    PubMed Central

    Harris, Michael D.; Cyr, Adam J.; Ali, Azhar A.; Fitzpatrick, Clare K.; Rullkoetter, Paul J.; Maletsky, Lorin P.; Shelburne, Kevin B.

    2016-01-01

    Modeling complex knee biomechanics is a continual challenge, which has resulted in many models of varying levels of quality, complexity, and validation. Beyond modeling healthy knees, accurately mimicking pathologic knee mechanics, such as after cruciate rupture or meniscectomy, is difficult. Experimental tests of knee laxity can provide important information about ligament engagement and overall contributions to knee stability for development of subject-specific models to accurately simulate knee motion and loading. Our objective was to provide combined experimental tests and finite-element (FE) models of natural knee laxity that are subject-specific, have one-to-one experiment to model calibration, simulate ligament engagement in agreement with literature, and are adaptable for a variety of biomechanical investigations (e.g., cartilage contact, ligament strain, in vivo kinematics). Calibration involved perturbing ligament stiffness, initial ligament strain, and attachment location until model-predicted kinematics and ligament engagement matched experimental reports. Errors between model-predicted and experimental kinematics averaged <2 deg during varus–valgus (VV) rotations, <6 deg during internal–external (IE) rotations, and <3 mm of translation during anterior–posterior (AP) displacements. Engagement of the individual ligaments agreed with literature descriptions. These results demonstrate the ability of our constraint models to be customized for multiple individuals and simultaneously call attention to the need to verify that ligament engagement is in good general agreement with literature. To facilitate further investigations of subject-specific or population based knee joint biomechanics, data collected during the experimental and modeling phases of this study are available for download by the research community. PMID:27306137

  14. The influence of spelling ability on handwriting production: children with and without dyslexia.

    PubMed

    Sumner, Emma; Connelly, Vincent; Barnett, Anna L

    2014-09-01

    Current models of writing do not sufficiently address the complex relationship between the 2 transcription skills: spelling and handwriting. For children with dyslexia and beginning writers, it is conceivable that spelling ability will influence rate of handwriting production. Our aim in this study was to examine execution speed and temporal characteristics of handwriting when completing sentence-copying tasks that are free from composing demands and to determine the predictive value of spelling, pausing, and motor skill on handwriting production. Thirty-one children with dyslexia (mean age = 9 years 4 months) were compared with age-matched and spelling-ability matched children (mean age = 6 years 6 months). A digital writing tablet and Eye and Pen software were used to analyze handwriting. Children with dyslexia were able to execute handwriting at the same speed as the age-matched peers. However, they wrote less overall and paused more frequently while writing, especially within words. Combined spelling ability and within-word pausing accounted for over 76% of the variance in handwriting production of children with dyslexia, demonstrating that productivity relies on spelling capabilities. Motor skill did not significantly predict any additional variance in handwriting production. Reading ability predicted performance of the age-matched group, and pausing predicted performance for the spelling-ability group. The findings from the digital writing tablet highlight the interactive relationship between the transcription skills and how, if spelling is not fully automatized, it can constrain the rate of handwriting production. Practical implications are also addressed, emphasizing the need for more consideration to be given to what common handwriting tasks are assessing as a whole.

  15. Accuracy of DSM based on digital aerial image matching. (Polish Title: Dokładność NMPT tworzonego metodą automatycznego dopasowania cyfrowych zdjęć lotniczych)

    NASA Astrophysics Data System (ADS)

    Kubalska, J. L.; Preuss, R.

    2013-12-01

    Digital Surface Models (DSM) are used in GIS data bases as single product more often. They are also necessary to create other products such as3D city models, true-ortho and object-oriented classification. This article presents results of DSM generation for classification of vegetation in urban areas. Source data allowed producing DSM with using of image matching method and ALS data. The creation of DSM from digital images, obtained by Ultra Cam-D digital Vexcel camera, was carried out in Match-T by INPHO. This program optimizes the configuration of images matching process, which ensures high accuracy and minimize gap areas. The analysis of the accuracy of this process was made by comparison of DSM generated in Match-T with DSM generated from ALS data. Because of further purpose of generated DSM it was decided to create model in GRID structure with cell size of 1 m. With this parameter differential model from both DSMs was also built that allowed determining the relative accuracy of the compared models. The analysis indicates that the generation of DSM with multi-image matching method is competitive for the same surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional registration of ALS data seems to be unnecessary.

  16. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
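
    The paper's per-symbol consensus measure is not reproduced here, but the idea of scoring each text symbol rather than computing an edit distance can be illustrated with a simple windowed similarity, where each symbol receives the best match fraction over all pattern alignments covering it.

```python
import numpy as np

def symbol_scores(text, pattern):
    """Score each symbol of `text` by the best fraction of matches it
    participates in over all alignments of `pattern` covering it. A
    simple stand-in for the paper's consensus measure."""
    n, m = len(text), len(pattern)
    scores = np.zeros(n)
    for start in range(n - m + 1):
        window = text[start:start + m]
        sim = sum(a == b for a, b in zip(window, pattern)) / m
        scores[start:start + m] = np.maximum(scores[start:start + m], sim)
    return scores

# High-scoring runs mark approximate occurrences of the pattern:
print([round(v, 2) for v in symbol_scores("xxabcaxbcxx", "abc")])
```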

  17. Explaining match outcome in elite Australian Rules football using team performance indicators.

    PubMed

    Robertson, Sam; Back, Nicole; Bartlett, Jonathan D

    2016-01-01

    The relationships between team performance indicators and match outcome have been examined in many team sports, however are limited in Australian Rules football. Using data from the 2013 and 2014 Australian Football League (AFL) regular seasons, this study assessed the ability of commonly reported discrete team performance indicators presented in their relative form (standardised against their opposition for a given match) to explain match outcome (Win/Loss). Logistic regression and decision tree (chi-squared automatic interaction detection (CHAID)) analyses both revealed relative differences between opposing teams for "kicks" and "goal conversion" as the most influential in explaining match outcome, with two models achieving 88.3% and 89.8% classification accuracies, respectively. Models incorporating a smaller performance indicator set displayed a slightly reduced ability to explain match outcome (81.0% and 81.5% for logistic regression and CHAID, respectively). However, both were fit to 2014 data with reduced error in comparison to the full models. Despite performance similarities across the two analysis approaches, the CHAID model revealed multiple winning performance indicator profiles, thereby increasing its comparative feasibility for use in the field. Coaches and analysts may find these results useful in informing strategy and game plan development in Australian Rules football, with the development of team-specific models recommended in future.
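
    The logistic-regression arm of such an analysis is straightforward to sketch. Below, synthetic data stand in for the AFL 2013-2014 match records, and the column names are hypothetical; each predictor represents a team's indicator standardised against its opponent for that match (the "relative" form).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: relative performance indicators per match.
rng = np.random.default_rng(42)
n = 400
X = pd.DataFrame({"rel_kicks": rng.normal(0, 1, n),
                  "rel_goal_conversion": rng.normal(0, 1, n),
                  "rel_marks": rng.normal(0, 1, n)})
# Outcome driven mainly by kicks and goal conversion, as in the paper.
logit = 1.5 * X["rel_kicks"] + 1.2 * X["rel_goal_conversion"]
y = (logit + rng.normal(0, 1, n) > 0).astype(int)   # 1 = win, 0 = loss

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=10).mean())   # classification accuracy
```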

  18. Reaction modeling of drainage quality in the Duluth Complex, northern Minnesota, USA

    USGS Publications Warehouse

    Seal, Robert; Lapakko, Kim; Piatak, Nadine; Woodruff, Laurel G.

    2015-01-01

    Reaction modeling can be a valuable tool in predicting the long-term behavior of waste material if representative rate constants can be derived from long-term leaching tests or other approaches. Reaction modeling using the REACT program of the Geochemist’s Workbench was conducted to evaluate long-term drainage quality affected by disseminated Cu-Ni-(Co-)-PGM sulfide mineralization in the basal zone of the Duluth Complex where significant resources have been identified. Disseminated sulfide minerals, mostly pyrrhotite and Cu-Fe sulfides, are hosted by clinopyroxene-bearing troctolites. Carbonate minerals are scarce to non-existent. Long-term simulations of up to 20 years of weathering of tailings used two different sets of rate constants: one based on published laboratory single-mineral dissolution experiments, and one based on leaching experiments using bulk material from the Duluth Complex conducted by the Minnesota Department of Natural Resources (MNDNR). The simulations included only plagioclase, olivine, clinopyroxene, pyrrhotite, and water as starting phases. Dissolved oxygen concentrations were assumed to be in equilibrium with atmospheric oxygen. The simulations based on the published single-mineral rate constants predicted that pyrrhotite would be effectively exhausted in less than two years and pH would rise accordingly. In contrast, only 20 percent of the pyrrhotite was depleted after two years using the MNDNR rate constants. Predicted pyrrhotite depletion by the simulation based on the MNDNR rate constant matched well with published results of laboratory tests on tailings. Modeling long-term weathering of mine wastes also can provide important insights into secondary reactions that may influence the permeability of tailings and thereby affect weathering behavior. Both models predicted the precipitation of a variety of secondary phases including goethite, gibbsite, and clay (nontronite).

  19. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  20. Matching CCD images to a stellar catalog using locality-sensitive hashing

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu

    2018-02-01

    The usage of a subset of observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching. When more subgraph features are provided, the CCD images are recognized better. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study investigates and improves subgraph isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
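
    A minimal bucket-indexing sketch conveys the idea: quantize each geometry-invariant quadrilateral code so that similar codes collide in the same hash bucket, then verify only that bucket's candidates. This is not the paper's exact scheme, and a practical version would also probe neighbouring cells, since near neighbours can straddle a cell boundary.

```python
from collections import defaultdict

def make_buckets(quad_codes, cell=0.05):
    """Index geometry-invariant quadrilateral codes into hash buckets
    by coordinate quantization (a minimal LSH-style sketch)."""
    buckets = defaultdict(list)
    for i, code in enumerate(quad_codes):
        key = tuple(round(c / cell) for c in code)
        buckets[key].append(i)
    return buckets

catalog = [(0.31, 0.72), (0.30, 0.71), (0.90, 0.10)]
buckets = make_buckets(catalog)
query = (0.305, 0.715)
key = tuple(round(c / 0.05) for c in query)
print(buckets.get(key, []))   # candidate catalog quads to verify: [0, 1]
```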

  1. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors

    PubMed Central

    Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377

  2. Learning probabilistic features for robotic navigation using laser sensors.

    PubMed

    Aznar, Fidel; Pujol, Francisco A; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, so that our system can be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.
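
    The abstract does not give the fusion model itself, so the following is only a sketch of the complexity argument, under the simplifying assumption that map features are updated independently: one Bayesian log-odds update per feature yields a per-scan cost linear in N.

    ```python
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def update_map(log_odds, measurements, p_hit=0.7, p_miss=0.4):
        """One Bayesian log-odds update per map feature: O(N) per scan.

        log_odds[i] is the current belief that feature i is present;
        measurements[i] is True if the laser scan supports feature i.
        Features are assumed conditionally independent, which is the
        simplification that buys the linear complexity.
        """
        for i, hit in enumerate(measurements):
            log_odds[i] += logit(p_hit) if hit else logit(p_miss)
        return log_odds

    # Usage: three features, two supported by the current scan.
    belief = [0.0, 0.0, 0.0]                 # log-odds 0 == probability 0.5
    belief = update_map(belief, [True, False, True])
    probs = [1.0 / (1.0 + math.exp(-l)) for l in belief]
    print([round(p, 3) for p in probs])      # features 0 and 2 become more likely
    ```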

  3. Epoch-based Entropy for Early Screening of Alzheimer's Disease.

    PubMed

    Houmani, N; Dreyfus, G; Vialatte, F B

    2015-12-01

    In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the temporal and spatial levels, using local density estimation by a hidden Markov model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations and less sensitive to noise and sampling frequency disturbances than the other two complexity measures.
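
    As a rough illustration of the idea (with histogram density estimation standing in for the paper's HMM-based local densities), the sketch below slices each channel into fixed-length epochs, estimates an empirical distribution per epoch, and averages the Shannon entropies over epochs and channels.

    ```python
    import numpy as np

    def epoch_entropy(eeg, epoch_len=256, bins=32):
        """Average per-epoch Shannon entropy of a (channels x samples) array.

        Histogram density estimation is used here as a simple stand-in for
        the HMM-based local density estimation described in the paper.
        """
        entropies = []
        for channel in eeg:
            n_epochs = len(channel) // epoch_len
            for k in range(n_epochs):
                epoch = channel[k * epoch_len:(k + 1) * epoch_len]
                counts, _ = np.histogram(epoch, bins=bins)
                p = counts[counts > 0] / counts.sum()   # empirical distribution
                entropies.append(-(p * np.log2(p)).sum())
        return float(np.mean(entropies))

    # Usage: two synthetic 4-channel signals with different temporal structure.
    rng = np.random.default_rng(0)
    smooth = np.cumsum(rng.normal(size=(4, 2048)), axis=1)   # random-walk signal
    noisy = rng.normal(size=(4, 2048))                       # white-noise signal
    print(epoch_entropy(smooth), epoch_entropy(noisy))
    ```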

  4. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    PubMed

    Baghaie, Ahmadreza; Pahlavan Tafti, Ahmad; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun

    2017-01-01

    The scanning electron microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has received extensive attention since its emergence. However, the acquired micrographs remain two-dimensional (2D). In the current work, a novel and highly accurate approach is proposed to recover the hidden third dimension, combining multi-view image acquisition of the microscopic samples with pre- and post-processing steps that include sparse feature-based stereo rectification, nonlocal optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved, facilitating the interpretation of the topology and geometry of the samples' surface and shape attributes. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.
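
    A minimal sketch of the dense-matching step, assuming two already-rectified views and using OpenCV's Farneback optical flow as a generic stand-in for the paper's nonlocal flow estimator: after rectification the horizontal flow component plays the role of stereo disparity, and relative depth is proportional to its inverse. The file names are hypothetical.

    ```python
    import cv2
    import numpy as np

    def depth_from_rectified_pair(left_path, right_path, eps=1e-3):
        """Estimate relative depth from two rectified views via dense flow.

        Farneback flow is used here only as a generic stand-in for the
        nonlocal optical flow method described in the paper.
        """
        left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
        right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
        flow = cv2.calcOpticalFlowFarneback(left, right, None,
                                            pyr_scale=0.5, levels=4, winsize=21,
                                            iterations=3, poly_n=7,
                                            poly_sigma=1.5, flags=0)
        disparity = np.abs(flow[..., 0])     # horizontal flow ~ stereo disparity
        depth = 1.0 / (disparity + eps)      # relative depth, up to scale
        return depth / depth.max()

    # Usage (hypothetical file names for two rectified SEM views):
    # depth = depth_from_rectified_pair("sem_view_0.png", "sem_view_1.png")
    ```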

  5. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only achieves high decoding accuracy but is also strongly robust to surface color and complex textures.

  6. Sources of interference in item and associative recognition memory.

    PubMed

    Osth, Adam F; Dennis, Simon

    2015-04-01

    A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode. (c) 2015 APA, all rights reserved.

  7. Quick probabilistic binary image matching: changing the rules of the game

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images, and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space, as opposed to a linear search in size space. PMMBI provides a complete model for predicting the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as few as two pixels in some cases. PMMBI is image-size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
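
    A toy version of the idea, not the authors' exact model: if two binary images agree on a fraction s of their pixels, then k randomly probed positions all match with probability s^k, so dissimilar images are rejected after only a handful of comparisons regardless of image size.

    ```python
    import random

    def probably_match(img_a, img_b, max_probes=20, rng=random):
        """Probe random pixel positions; return False on the first mismatch.

        If the true similarity is s, all k probes match with probability
        s**k, so dissimilar images are rejected after very few comparisons
        independently of image size.
        """
        n = len(img_a)
        for _ in range(max_probes):
            i = rng.randrange(n)
            if img_a[i] != img_b[i]:
                return False            # certain mismatch
        return True                     # probably a match

    # Usage: a 10,000-pixel image versus a ~40%-similar counterfeit.
    random.seed(1)
    a = [random.randint(0, 1) for _ in range(10_000)]
    b = [bit if random.random() < 0.4 else 1 - bit for bit in a]
    print(probably_match(a, a), probably_match(a, b))   # True False (w.h.p.)
    ```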

  8. Three-dimensional wave field modeling by a collocated-grid finite-difference method in the anelastic model with surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Li, J.; Borisov, D.; Gharti, H. N.; Shen, Y.; Zhang, W.; Savage, B. K.

    2016-12-01

    We incorporate 3D anelastic attenuation into the collocated-grid finite-difference method on curvilinear grids (Zhang et al., 2012), using the rheological model of the generalized Maxwell body (Emmerich and Korn, 1987; Moczo and Kristek, 2005; Käser et al., 2007). We follow a conventional procedure to calculate the anelastic coefficients (Emmerich and Korn, 1987) determined by the Q(ω)-law, with a modification in the choice of the frequency band and thus of the relaxation frequencies, which equidistantly cover the logarithmic frequency range. We show that such an optimization of the anelastic coefficients is more accurate when using a fixed number of relaxation mechanisms to fit frequency-independent Q-factors. We use curvilinear grids to represent the surface topography. The velocity-stress form of the 3D isotropic anelastic wave equation is solved with a collocated-grid finite-difference method. Compared with the elastic case, we additionally solve material-independent anelastic functions (Kristek and Moczo, 2003) for the mechanisms at each relaxation frequency. Based on the stress-strain relation, we calculate the spatial partial derivatives of the anelastic functions indirectly, thereby saving storage and improving computational efficiency. The complex-frequency-shifted perfectly matched layer (CFS-PML) is used for the absorbing boundary condition, based on the auxiliary differential equation approach (Zhang and Shen, 2010). The traction image method (Zhang and Chen, 2006) is employed for the free-surface boundary condition. We perform several numerical experiments, including homogeneous full-space models and layered half-space models, considering both flat and 3D Gaussian-shaped hill surfaces. The results match very well with those of the spectral-element method (Komatitsch and Tromp, 2002; Savage et al., 2010), verifying our method's simulations in anelastic models with surface topography.
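
    A sketch of the coefficient-fitting step, assuming the common low-loss approximation Q⁻¹(ω) ≈ Σ_l Y_l ω ω_l / (ω_l² + ω²) for the generalized Maxwell body: the relaxation frequencies are spaced equidistantly in log frequency over the band of interest, and the anelastic coefficients Y_l follow from a linear least-squares fit against a frequency-independent target Q. This is a generic textbook-style fit, not the authors' exact procedure.

    ```python
    import numpy as np

    def fit_anelastic_coefficients(q_target, f_min, f_max, n_mech=3, n_samples=50):
        """Least-squares fit of generalized-Maxwell-body coefficients Y_l so
        that sum_l Y_l*w*w_l/(w_l**2 + w**2) ~ 1/Q over [f_min, f_max].

        Relaxation frequencies are equidistant in log frequency, following
        the choice described in the abstract.
        """
        w_l = 2 * np.pi * np.logspace(np.log10(f_min), np.log10(f_max), n_mech)
        w = 2 * np.pi * np.logspace(np.log10(f_min), np.log10(f_max), n_samples)
        A = (w[:, None] * w_l[None, :]) / (w_l[None, :] ** 2 + w[:, None] ** 2)
        y, *_ = np.linalg.lstsq(A, np.full(n_samples, 1.0 / q_target), rcond=None)
        return w_l, y

    # Usage: fit Q = 50 over 0.01-1 Hz with three relaxation mechanisms,
    # then report the worst-case Q misfit across the band.
    q0 = 50.0
    w_l, y = fit_anelastic_coefficients(q_target=q0, f_min=0.01, f_max=1.0)
    w = 2 * np.pi * np.logspace(-2, 0, 200)
    q_inv = ((w[:, None] * w_l[None, :]) / (w_l[None, :] ** 2 + w[:, None] ** 2)) @ y
    print(float(np.max(np.abs(1.0 / q_inv - q0))))   # worst-case Q misfit
    ```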

  9. Resolving the fine-scale velocity structure of continental hyperextension at the Deep Galicia Margin using full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Davy, R. G.; Morgan, J. V.; Minshull, T. A.; Bayrakci, G.; Bull, J. M.; Klaeschen, D.; Reston, T. J.; Sawyer, D. S.; Lymer, G.; Cresswell, D.

    2018-01-01

    Continental hyperextension during magma-poor rifting at the Deep Galicia Margin is characterized by a complex pattern of faulting, thin continental fault blocks, and the serpentinization, with local exhumation, of mantle peridotites along the S-reflector, interpreted as a detachment surface. In order to fully understand the evolution of these features, it is important to image the structure seismically and to model the velocity structure at the greatest possible resolution. Traveltime tomography models have revealed the long-wavelength velocity structure of this hyperextended domain, but are often insufficient to match accurately the short-wavelength structure observed in reflection seismic imaging. Here, we demonstrate the application of 2-D time-domain acoustic full-waveform inversion (FWI) to deep-water seismic data collected at the Deep Galicia Margin, in order to attain a high-resolution velocity model of continental hyperextension. We have used several quality-assurance procedures to assess the velocity model, including comparison of the observed and modeled waveforms, checkerboard tests, testing of parameters and inversion strategy, and comparison with the migrated reflection image. Our final model exhibits an increase in the resolution of subsurface velocities, with particular improvement observed in the westernmost continental fault blocks, where a clear rotation of the velocity field matches steeply dipping reflectors. Across the S-reflector, there is a sharpening of the velocity contrast, with lower velocities beneath S indicative of preferential mantle serpentinization. This study supports the hypothesis that normal faulting acts to hydrate the upper-mantle peridotite, observed as a systematic decrease in seismic velocities, consistent with increased serpentinization. Our results confirm the feasibility of applying the FWI method to sparse, deep-water crustal datasets.

  10. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    NASA Astrophysics Data System (ADS)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods that aid in the identification of ozone signals from spatially and temporally heterogeneous underlying data, in order to help research communities avoid the sometimes burdensome computational costs of high-resolution, high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism three times as fast, as the MOZART-4 mechanism. We show that the simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex mechanism does, and where they are not, a simple standardized-anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and to quantitatively determine the spatial and temporal scales that could enable research communities to use simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is applied here to ozone data, it could be applied to a broad range of geospatial datasets (observed or modeled) that have spatial and temporal coverage.
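
    A minimal sketch of the strategic-averaging idea, assuming a gridded array of ozone anomalies with dimensions (time, lat, lon): average over progressively longer time windows until the spread of the window means drops below a chosen signal strength, which identifies the smallest averaging scale at which that signal can rise above the noise. The data and window choices are hypothetical.

    ```python
    import numpy as np

    def smallest_averaging_window(ozone, target_ppbv=1.0,
                                  windows=(1, 5, 10, 30, 90)):
        """Return the shortest time-averaging window (in steps) whose
        regional-mean noise falls below target_ppbv, or None if none does.

        ozone: array of shape (time, lat, lon) holding anomalies in ppbv.
        """
        regional = ozone.mean(axis=(1, 2))          # spatial average first
        for w in windows:
            n = len(regional) // w
            means = regional[:n * w].reshape(n, w).mean(axis=1)
            if means.std(ddof=1) < target_ppbv:
                return w
        return None

    # Usage: synthetic noisy anomalies, 360 time steps on a 10x20 grid.
    rng = np.random.default_rng(42)
    data = rng.normal(0.0, 20.0, size=(360, 10, 20))  # 20 ppbv grid-scale noise
    print(smallest_averaging_window(data, target_ppbv=1.0))
    ```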

  11. High-efficiency resonant coupled wireless power transfer via tunable impedance matching

    NASA Astrophysics Data System (ADS)

    Anowar, Tanbir Ibne; Barman, Surajit Das; Wasif Reza, Ahmed; Kumar, Narendra

    2017-10-01

    For magnetic resonant coupled wireless power transfer (WPT), axial movement of the near-field coupled coils degrades the power transfer efficiency (PTE) of the system and often creates sub-resonance. This paper presents a tunable impedance matching technique based on optimum coupling tuning to enhance the efficiency of a resonant coupled WPT system. The optimum power transfer model is analysed from the equivalent circuit model via the reflected load principle, and adequate matching is achieved through optimum tuning of the coupling coefficients at both the transmitting and receiving ends of the system. Both simulations and experiments are performed to evaluate the theoretical model of the proposed matching technique, and the results show a PTE of over 80% at close coil proximity without shifting the original resonant frequency. Compared to fixed-coupled WPT, the extracted efficiency shows improvements of 15.1% and 19.9% at centre-to-centre misalignments of 10 and 70 cm, respectively. Applying this technique, the extracted S21 parameter shows improvements of more than 10 dB at both strong and weak couplings. Through the developed model, optimum coupling tuning also significantly outperforms matching techniques based on frequency tracking and tunable matching circuits.
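
    A worked illustration using the textbook optimal-load efficiency of a two-coil resonant link, η = k²Q₁Q₂ / (1 + √(1 + k²Q₁Q₂))², rather than the authors' specific tuning network; it shows why the PTE collapses as the coupling coefficient k falls with misalignment, which is the loss that impedance matching aims to recover.

    ```python
    import math

    def max_link_efficiency(k, q1, q2):
        """Optimal-load efficiency of a two-coil resonant link.

        Standard result: eta = x / (1 + sqrt(1 + x))**2 with x = k^2*Q1*Q2.
        """
        x = (k ** 2) * q1 * q2
        return x / (1.0 + math.sqrt(1.0 + x)) ** 2

    # Usage: high-Q coils (Q = 300 each); efficiency versus coupling.
    for k in (0.30, 0.10, 0.03, 0.01):
        print(f"k = {k:.2f}  eta = {max_link_efficiency(k, 300, 300):.2%}")
    ```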

  12. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid-link and EMG-assisted models. Eighty-eight lifting trials with various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching over-predicted peak and cumulative extension moments (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with the posture matching and EMG-assisted approaches (p = 0.7987). Posture matching over-predicted cumulative compressive loading (p < 0.0001) due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided the necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
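
    The abstract does not spell out the individualized correction, so the sketch below shows one plausible form of it, assuming the bias is estimated from a quiet-standing calibration as the mean difference between the posture-matched estimate and a reference value and then subtracted from the cumulative load (all numbers hypothetical).

    ```python
    import numpy as np

    def corrected_cumulative_load(posture_load, standing_posture, standing_ref, dt):
        """Subtract an individualized standing bias from cumulative compression.

        posture_load:      posture-matched compression time series (N)
        standing_posture:  posture-matched compression during quiet standing (N)
        standing_ref:      reference-model compression during standing (N)
        dt:                sample interval (s)
        Returns cumulative load (N*s) before and after bias correction.
        """
        bias = np.mean(standing_posture) - standing_ref   # per-sample bias (N)
        raw = np.sum(posture_load) * dt
        corrected = np.sum(posture_load - bias) * dt
        return raw, corrected

    # Usage: 60 s lifting trial sampled at 30 Hz with a +150 N standing bias.
    rng = np.random.default_rng(3)
    trial = 1800 + 400 * np.abs(np.sin(np.linspace(0, 6 * np.pi, 1800))) + 150
    standing = rng.normal(650 + 150, 20, size=300)        # biased standing data
    raw, corrected = corrected_cumulative_load(trial, standing, 650.0, 1 / 30)
    print(round(raw), round(corrected))   # correction removes ~9000 N*s (150 N x 60 s)
    ```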

  13. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    PubMed

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches, presenting many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' based on the number of alleles matching the searched profile. Candidate profiles that share the same number of matching alleles are thus not differentiated, and because the candidate list lacks other ranking criteria, it may be difficult to discern a true match from the false positives, or to notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile against each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined, for mixed DNA profiles of variable complexity, whether the true donors are retrieved, how many false positives score above an LR threshold, and at what rank the true donors appear. Using 343 mixed DNA profiles, over 750 SmartRank searches were performed. In addition, the performance of SmartRank and CODIS for DNA database searches was compared, and SmartRank was found to be complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best-practice guidelines, SmartRank enables obtaining investigative leads in criminal cases lacking a suspect. Copyright © 2017 Elsevier B.V. All rights reserved.
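
    A structural sketch of this kind of search, not SmartRank's actual API or LR model: every database profile is scored against the trace with a likelihood-ratio function (the `likelihood_ratio` argument and the toy scoring below are placeholders), entries above the threshold are kept, and candidates are ranked by LR rather than by allele counts.

    ```python
    from typing import Callable, Dict, List, Tuple

    Profile = Dict[str, set]    # locus -> set of alleles

    def rank_database(trace: Profile,
                      database: Dict[str, Profile],
                      likelihood_ratio: Callable[[Profile, Profile], float],
                      lr_threshold: float = 10.0) -> List[Tuple[str, float]]:
        """Rank database entries by LR against the trace, keeping LR > threshold.

        likelihood_ratio is a placeholder for a proper mixture LR model;
        ranking by LR (not by shared-allele counts) is the point.
        """
        scored = ((pid, likelihood_ratio(trace, profile))
                  for pid, profile in database.items())
        hits = [(pid, lr) for pid, lr in scored if lr > lr_threshold]
        return sorted(hits, key=lambda item: item[1], reverse=True)

    # Usage with a toy LR that just rewards shared alleles (illustration only).
    def toy_lr(trace: Profile, candidate: Profile) -> float:
        shared = sum(len(trace[l] & candidate.get(l, set())) for l in trace)
        return 10.0 ** shared

    trace = {"D3S1358": {15, 16}, "vWA": {17}}
    db = {"A": {"D3S1358": {15, 16}, "vWA": {17}},
          "B": {"D3S1358": {15}, "vWA": {14}},
          "C": {"D3S1358": {16, 17}, "vWA": {17}}}
    print(rank_database(trace, db, toy_lr))   # A (10^3), then C (10^2)
    ```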

  14. Home Health Nursing Care and Hospital Use for Medically Complex Children.

    PubMed

    Gay, James C; Thurm, Cary W; Hall, Matthew; Fassino, Michael J; Fowler, Lisa; Palusci, John V; Berry, Jay G

    2016-11-01

    Home health nursing care (HH) may be a valuable approach to long-term optimization of health for children, particularly those with medical complexity who are prone to frequent and lengthy hospitalizations. We sought to assess the relationship between HH services and hospital use in children. Retrospective, matched cohort study of 2783 hospitalized children receiving postdischarge HH services by BAYADA Home Health Care across 19 states and 7361 matched controls not discharged to HH services from the Children's Hospital Association Case Mix database between January 2004 and September 2012. Subsequent hospitalizations, hospital days, readmissions, and costs of hospital care were assessed over the 12-month period after the initial hospitalization. Nonparametric Wilcoxon signed rank tests were used for comparisons between HH and non-HH users. Although HH cases had a higher percentage of complex chronic conditions (68.5% vs 65.4%), technology assistance (40.5% vs 35.7%), and neurologic impairment (40.7% vs 37.3%) than matched controls (P ≤ .003 for all), 30-day readmission rates were lower in HH patients (18.3% vs 21.5%, P = .001). At 12 months after the index admission, HH patients averaged fewer admissions (0.8 vs 1.0, P < .001), fewer days in the hospital (6.4 vs 6.6, P < .001), and lower hospital costs ($22,511 vs $24,194, P < .001) compared with matched controls. Children discharged to HH care experienced less hospital use than children with similar characteristics who did not use HH care. Further investigation is needed to understand how HH care affects the health and health services of children. Copyright © 2016 by the American Academy of Pediatrics.

  15. Match probabilities in a finite, subdivided population

    PubMed Central

    Malaspinas, Anna-Sapfo; Slatkin, Montgomery; Song, Yun S.

    2011-01-01

    We generalize a recently introduced graphical framework to compute the probability that haplotypes or genotypes of two individuals drawn from a finite, subdivided population match. As in the previous work, we assume an infinite-alleles model. We focus on the case of a population divided into two subpopulations, but the underlying framework can be applied to a general model of population subdivision. We examine the effect of population subdivision on the match probabilities and the accuracy of the product rule which approximates multi-locus match probabilities as a product of one-locus match probabilities. We quantify the deviation from predictions of the product rule by R, the ratio of the multi-locus match probability to the product of the one-locus match probabilities. We carry out the computation for two loci and find that ignoring subdivision can lead to underestimation of the match probabilities if the population under consideration actually has subdivision structure and the individuals originate from the same subpopulation. On the other hand, under a given model of population subdivision, we find that the ratio R for two loci is only slightly greater than 1 for a large range of symmetric and asymmetric migration rates. Keeping in mind that the infinite-alleles model is not the appropriate mutation model for STR loci, we conclude that, for two loci and biologically reasonable parameter values, population subdivision may lead to results that disfavor innocent suspects because of an increase in identity-by-descent in finite populations. On the other hand, for the same range of parameters, population subdivision does not lead to a substantial increase in linkage disequilibrium between loci. Those results are consistent with established practice. PMID:21266180
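
    A worked sketch of the mixture effect alone, ignoring the within-subpopulation identity-by-descent that the paper's framework captures: with two subpopulations of known weights and (hypothetical) allele frequencies, the two-locus match probability averages the product of one-locus match probabilities over subpopulation pairs, and R is its ratio to the plain product. The exaggerated differentiation below makes R clearly exceed 1.

    ```python
    from itertools import product

    # Hypothetical two-subpopulation allele frequencies at two loci.
    weights = {"pop1": 0.5, "pop2": 0.5}
    freqs = {
        "locus1": {"pop1": {"a": 0.9, "b": 0.1}, "pop2": {"a": 0.1, "b": 0.9}},
        "locus2": {"pop1": {"x": 0.8, "y": 0.2}, "pop2": {"x": 0.2, "y": 0.8}},
    }

    def match_prob(locus, pop_i, pop_j):
        """P(two haplotypes match at locus | drawn from pop_i and pop_j)."""
        p, q = freqs[locus][pop_i], freqs[locus][pop_j]
        return sum(p[a] * q.get(a, 0.0) for a in p)

    def prod_over(loci, i, j):
        out = 1.0
        for locus in loci:
            out *= match_prob(locus, i, j)
        return out

    def averaged(loci):
        """Match probability at all loci, averaged over subpopulation pairs."""
        return sum(weights[i] * weights[j] * prod_over(loci, i, j)
                   for i, j in product(weights, repeat=2))

    m1, m2 = averaged(["locus1"]), averaged(["locus2"])
    m12 = averaged(["locus1", "locus2"])
    print(f"R = {m12 / (m1 * m2):.3f}")   # > 1: subdivision couples the loci
    ```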

  16. Compression of strings with approximate repeats.

    PubMed

    Allison, L; Edgoose, T; Dix, T I

    1998-01-01

    We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
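
    A toy forward sum over explanations, much simplified from the paper's model: each position is either a literal character or part of a fixed-length approximate copy of an earlier block, and the recursion adds the probabilities of every explanation of the prefix rather than keeping only the best one. The parameters and the fixed copy length are hypothetical simplifications.

    ```python
    def string_probability(s, r=0.1, e=0.05, K=4, alphabet="acgt"):
        """Total probability of s under a toy literal-or-approximate-copy model.

        f[i] sums over ALL explanations of s[:i]: either s[i-1] is a literal
        (prob (1-r)/|alphabet|), or s[i-K:i] is an approximate copy of some
        earlier K-block, each copied char matching with prob 1-e. Summing
        rather than maximizing is the key idea from the abstract.
        """
        q = 1.0 / len(alphabet)
        n = len(s)
        f = [0.0] * (n + 1)
        f[0] = 1.0
        for i in range(1, n + 1):
            f[i] = f[i - 1] * (1.0 - r) * q              # literal explanation
            sources = i - 2 * K + 1                      # valid source starts
            if sources > 0:
                total = 0.0
                for j in range(sources):                 # source block s[j:j+K]
                    p = 1.0
                    for t in range(K):
                        match = s[j + t] == s[i - K + t]
                        p *= (1.0 - e) if match else e / (len(alphabet) - 1)
                    total += p / sources
                f[i] += f[i - K] * r * total             # copy explanations
        return f[n]

    # Usage: a string with an approximate repeat scores higher than random.
    print(string_probability("acgtacgaacgt"), string_probability("acgttgcaatcg"))
    ```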

  17. Refractive-index-matched hydrogel materials for measuring flow-structure interactions

    NASA Astrophysics Data System (ADS)

    Byron, Margaret L.; Variano, Evan A.

    2013-02-01

    In imaging-based studies of flow around solid objects, it is useful to have materials that are refractive-index-matched to the surrounding fluid. However, materials currently in use are usually rigid and matched to liquids that are either expensive or highly viscous. This does not allow for measurements at high Reynolds number, nor accurate modeling of flexible structures. This work explores the use of two hydrogels (agarose and polyacrylamide) as refractive-index-matched models in water. These hydrogels are inexpensive, can be cast into desired shapes, and have flexibility that can be tuned to match biological materials. The use of water as the fluid phase allows this method to be implemented immediately in many experimental facilities and permits investigation of high-Reynolds-number phenomena. We explain fabrication methods and present a summary of the physical and optical properties of both gels, and then show measurements demonstrating the use of hydrogel models in quantitative imaging.

  18. Do Gaze Cues in Complex Scenes Capture and Direct the Attention of High Functioning Adolescents with ASD? Evidence from Eye-Tracking

    ERIC Educational Resources Information Center

    Freeth, M.; Chapman, P.; Ropar, D.; Mitchell, P.

    2010-01-01

    Visual fixation patterns whilst viewing complex photographic scenes containing one person were studied in 24 high-functioning adolescents with Autism Spectrum Disorders (ASD) and 24 matched typically developing adolescents. Over two different scene presentation durations both groups spent a large, strikingly similar proportion of their viewing…

  19. The State of the Field: Qualitative Analyses of Text Complexity

    ERIC Educational Resources Information Center

    Pearson, P. David; Hiebert, Elfrieda H.

    2014-01-01

    The purpose of this article is to understand the function, logic, and impact of qualitative systems for analyzing text complexity, focusing on their benefits and imperfections. We identified two primary functions for their use: (a) to match texts to reader ability so that readers read books that are within their grasp, and (b) to unearth, and then…

  20. The Complexities of Complex Memory Span: Storage and Processing Deficits in Specific Language Impairment

    ERIC Educational Resources Information Center

    Archibald, Lisa M. D.; Gathercole, Susan E.

    2007-01-01

    This study investigated the verbal and visuospatial processing and storage skills of children with SLI and typically developing children. Fourteen school-age children with SLI, and two groups of typically developing children matched either for age or language abilities, completed measures of processing speed and storage capacity, and a set of…
