Sample records for targets modeling experiments

  1. Computational modeling of joint U.S.-Russian experiments relevant to magnetic compression/magnetized target fusion (MAGO/MTF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.

    1997-12-31

    Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.

  2. Asymmetries in visual search for conjunctive targets.

    PubMed

    Cohen, A

    1993-08-01

    Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for 2 levels of representations: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.

  3. Modeling and Depletion Simulations for a High Flux Isotope Reactor Cycle with a Representative Experiment Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandler, David; Betzler, Ben; Hirtz, Gregory John

    2016-09-01

    The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on time-dependent isotopic composition of irradiated fuel and single cycle isotope production metrics.

  4. Spatial frequency dependence of target signature for infrared performance modeling

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd; Olson, Jeffrey

    2011-05-01

    The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency-dependent and frequency-independent target signature definitions. Finally, the role of task difficulty and its relationship to a target set is discussed.
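
    For reference, the targeting task performance metric named above is conventionally written as an integral over spatial frequency; in a frequency-dependent treatment, the target contrast moves inside the integral. The notation below (ξ for spatial frequency, CTF_sys for the system contrast threshold function) is the commonly published form and is only assumed, not verified, to match this study's implementation:

      TTP = \int_{\xi_{low}}^{\xi_{cut}} \sqrt{\frac{C_{tgt}}{CTF_{sys}(\xi)}}\, d\xi            (spatially white signature)

      TTP = \int_{\xi_{low}}^{\xi_{cut}} \sqrt{\frac{C_{tgt}(\xi)}{CTF_{sys}(\xi)}}\, d\xi       (frequency-dependent signature)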

  5. Numerical Modeling of Complex Targets for High-Energy- Density Experiments with Ion Beams and other Drivers

    DOE PAGES

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; ...

    2016-04-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  6. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
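
    As an illustration of the kind of test-planning calculation discussed above (not the authors' procedure), a standard two-proportion power estimate gives a rough minimum number of observers per group; the performance probabilities in the example are assumed values.

      from statistics import NormalDist

      def subjects_per_group(p1, p2, alpha=0.05, power=0.80):
          """Rough minimum subjects per group to distinguish two task-performance
          probabilities p1 and p2 (normal approximation, two-sided test)."""
          z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance level
          z_b = NormalDist().inv_cdf(power)           # critical value for desired power
          variance = p1 * (1 - p1) + p2 * (1 - p2)
          return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

      # Example: resolving a 70% vs. 85% identification probability needs roughly 118 per group
      print(round(subjects_per_group(0.70, 0.85)))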

  7. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
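
    The V50 constant referenced above enters through the target transfer probability function; the expressions below are the standard published form of the TTP-based prediction and are assumed, not verified, to be the ones implemented in NV-IPM:

      V = \frac{TTP \cdot \sqrt{A_{tgt}}}{R}, \qquad
      P_{task}(V) = \frac{(V/V_{50})^{E}}{1 + (V/V_{50})^{E}}, \qquad
      E = 1.51 + 0.24\,(V/V_{50})

    where A_tgt is the target area, R the range to target, and V50 the value of V that yields 50% task performance.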

  8. Heterogeneity effects in visual search predicted from the group scanning model.

    PubMed

    Macquistan, A D

    1994-12-01

    The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.

  9. Prediction of homoprotein and heteroprotein complexes by protein docking and template‐based modeling: A CASP‐CAPRI experiment

    PubMed Central

    Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey

    2016-01-01

    We present the results for CAPRI Round 30, the first joint CASP‐CAPRI experiment, which brought together experts from the protein structure prediction and protein–protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact‐sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology‐built subunit models and the smaller pair‐wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27122118

  10. Contrasting Predictions of the Extended Comparator Hypothesis and Acquisition-Focused Models of Learning Concerning Retrospective Revaluation

    PubMed Central

    McConnell, Bridget L.; Urushihara, Kouji; Miller, Ralph R.

    2009-01-01

    Three conditioned suppression experiments with rats investigated contrasting predictions made by the extended comparator hypothesis and acquisition-focused models of learning, specifically, modified SOP and the revised Rescorla-Wagner model, concerning retrospective revaluation. Two target cues (X and Y) were partially reinforced using a stimulus relative validity design (i.e., AX-Outcome/ BX-No outcome/ CY-Outcome/ DY-No outcome), and subsequently one of the companion cues for each target was extinguished in compound (BC-No outcome). In Experiment 1, which used spaced trials for relative validity training, greater suppression was observed to target cue Y for which the excitatory companion cue had been extinguished relative to target cue X for which the nonexcitatory companion cue had been extinguished. Experiment 2 replicated these results in a sensory preconditioning preparation. Experiment 3 massed the trials during relative validity training, and the opposite pattern of data was observed. The results are consistent with the predictions of the extended comparator hypothesis. Furthermore, this set of experiments is unique in being able to differentiate between these models without invoking higher-order comparator processes. PMID:20141324

  11. Dynamic model of target charging by short laser pulse interactions

    NASA Astrophysics Data System (ADS)

    Poyé, A.; Dubois, J.-L.; Lubrano-Lavaderci, F.; D'Humières, E.; Bardon, M.; Hulin, S.; Bailly-Grandvaux, M.; Ribolzi, J.; Raffestin, D.; Santos, J. J.; Nicolaï, Ph.; Tikhonchuk, V.

    2015-10-01

    A model providing an accurate estimate of the charge accumulation on the surface of a metallic target irradiated by a high-intensity laser pulse of fs-ps duration is proposed. The model is confirmed by detailed comparisons with specially designed experiments. Such a model is useful for understanding the electromagnetic pulse emission and the quasistatic magnetic field generation in laser-plasma interaction experiments.

  12. Dynamic model of target charging by short laser pulse interactions.

    PubMed

    Poyé, A; Dubois, J-L; Lubrano-Lavaderci, F; D'Humières, E; Bardon, M; Hulin, S; Bailly-Grandvaux, M; Ribolzi, J; Raffestin, D; Santos, J J; Nicolaï, Ph; Tikhonchuk, V

    2015-10-01

    A model providing an accurate estimate of the charge accumulation on the surface of a metallic target irradiated by a high-intensity laser pulse of fs-ps duration is proposed. The model is confirmed by detailed comparisons with specially designed experiments. Such a model is useful for understanding the electromagnetic pulse emission and the quasistatic magnetic field generation in laser-plasma interaction experiments.

  13. Validating An Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments

    NASA Astrophysics Data System (ADS)

    Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates. The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF), and the one-sigma depth function (OSDF) (Ref. Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data. We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics. Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets, and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior, and refine the ACM accordingly. In this presentation, we discuss progress in validating and refining the ACM, and we compare our detection efficiency curves with those derived from the associated pixel-level transit injection experiments. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA's Science Mission Directorate.
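
    A minimal sketch of how the ACM components described above combine for a single target star; the logistic detection-efficiency ramp and all numerical values here are illustrative assumptions, not the Kepler pipeline's calibrated curves.

      import numpy as np

      def completeness(radius_rearth, osdf_ppm, window_frac,
                       depth_ppm_per_rearth2=84.0, snr_mid=7.1, snr_width=1.0):
          """Sketch of an analytic completeness estimate at one (radius, period) point.

          osdf_ppm    : one-sigma depth function evaluated at this period (ppm)
          window_frac : window-function probability of observing enough transits
          """
          depth_ppm = depth_ppm_per_rearth2 * radius_rearth ** 2        # rough transit depth
          snr = depth_ppm / osdf_ppm                 # OSDF is the depth that gives SNR = 1
          det_eff = 1.0 / (1.0 + np.exp(-(snr - snr_mid) / snr_width))  # assumed efficiency ramp
          return det_eff * window_frac

      # Example: a 2 R_Earth planet whose period gives OSDF = 40 ppm and WF = 0.9
      print(completeness(2.0, osdf_ppm=40.0, window_frac=0.9))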

  14. Target charging in short-pulse-laser-plasma experiments.

    PubMed

    Dubois, J-L; Lubrano-Lavaderci, F; Raffestin, D; Ribolzi, J; Gazave, J; Compant La Fontaine, A; d'Humières, E; Hulin, S; Nicolaï, Ph; Poyé, A; Tikhonchuk, V T

    2014-01-01

    Interaction of high-intensity laser pulses with solid targets results in generation of large quantities of energetic electrons that are the origin of various effects such as intense x-ray emission, ion acceleration, and so on. Some of these electrons escape the target, leaving behind a significant positive electric charge and creating a strong electromagnetic pulse long after the end of the laser pulse. We propose here a detailed model of the target electric polarization induced by a short and intense laser pulse and an escaping electron bunch. A specially designed experiment provides direct measurements of the target polarization and the discharge current as a function of the laser energy, pulse duration, and target size. Large-scale numerical simulations describe the energetic electron generation and their emission from the target. The model, experiment, and numerical simulations demonstrate that the hot-electron ejection may continue long after the laser pulse ends, enhancing significantly the polarization charge.

  15. Heavy Ion Fusion Science Virtual National Laboratory 4th Quarter 2009 Milestone Report: Measure and simulate target temperature and dynamic response in optimized NDCX-I configurations with initial diagnostics suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieniosek, F.M.; Barnard, J.J.; Henestroza, E.

    2009-09-30

    This milestone has been met. The effort contains two main components: (1) Experimental results of warm dense matter target experiments on optimized NDCX-I configurations that include measurements of target temperature and transient target behavior. (2) A theoretical model of the target response to beam heating that includes an equilibrium heating model of the target foil and a model for droplet formation in the target for comparison with experimental results. The experiments on ion-beam target heating use a 300-350-keV K+ pulsed beam from the Neutralized Compression Drift Experiment (NDCX-I) accelerator at LBNL. The NDCX-I accelerator delivers an uncompressed pulse beam of several microseconds with a typical power density of >100 kW/cm² over a final focus spot size of about 1 mm. An induction bunching module in NDCX-I compresses a portion of the beam pulse to reach a much higher power density over 2 nanoseconds. Under these conditions the free-standing foil targets are rapidly heated to temperatures over 4000 K. We model the target thermal dynamics using the equation of heat conduction for the temperature T(x,t) as a function of time (t) and spatial dimension along the beam direction (x). The competing cooling processes release energy from the surface of the foil due to evaporation, radiation, and thermionic (Richardson) emission. A description of the experimental configuration of the target chamber and results from initial beam-target experiments are reported in our FY08 4th Quarter and FY09 2nd Quarter Milestone Reports. The WDM target diagnostics include a high-speed multichannel optical pyrometer, optical streak camera, VISAR, and high-speed gated cameras. The fast optical pyrometer is a unique and significant new diagnostic which provides valuable information on the temperature evolution of the heated target.
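
    The thermal model summarized above amounts to 1-D heat conduction with beam heating and surface losses; the equations below are a generic reconstruction of that description (the report's actual coefficients and boundary treatment are not given in this record):

      \rho c_p \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(k\,\frac{\partial T}{\partial x}\right) + S_{beam}(x,t)

      q_{surf} = \epsilon\sigma T^{4} \;+\; \dot m_{evap}\,L_{v} \;+\; \frac{J_{th}}{e}\,(W + 2k_{B}T),
      \qquad J_{th} = A_{R}\,T^{2}\,e^{-W/k_{B}T}

    where the three surface terms represent radiative, evaporative, and thermionic (Richardson) cooling, W is the work function, and A_R the Richardson constant.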

  16. Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.

    PubMed

    Niculescu, S P; Lewis, M A; Tigner, J

    2008-01-01

    Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to implement computation of confidence intervals for the unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
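
    For orientation, a basic Gaussian-kernel network of the general-regression type can be sketched as below; the descriptors, bandwidth, and endpoint values are placeholders, and the statistical corrections mentioned in the record are not included.

      import numpy as np

      def gaussian_kernel_predict(X_train, y_train, x_query, sigma=0.5):
          """Gaussian-kernel (GRNN-style) prediction of an endpoint for one query compound.

          X_train : (n, d) training descriptors;  y_train : (n,) measured endpoints
          x_query : (d,) descriptors of the query compound;  sigma : kernel bandwidth
          """
          d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances to stored patterns
          w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel activations
          return np.sum(w * y_train) / np.sum(w)          # kernel-weighted average endpoint

      # Toy example: three training compounds described by two descriptors
      X = np.array([[0.1, 1.2], [0.4, 0.9], [1.5, 0.2]])
      y = np.array([2.3, 2.1, 0.7])                       # e.g. transformed 48-h LC50 values
      print(gaussian_kernel_predict(X, y, np.array([0.3, 1.0])))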

  17. Modeling the effects of contrast enhancement on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2008-04-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.

  18. Active machine learning-driven experimentation to determine compound effects on protein patterns.

    PubMed

    Naik, Armaghan W; Kangas, Joshua D; Sullivan, Devin P; Murphy, Robert F

    2016-02-03

    High throughput screening determines the effects of many conditions on a given biological target. Currently, to estimate the effects of those conditions on other targets requires either strong modeling assumptions (e.g. similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance.
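
    A bare-bones sketch of an uncertainty-driven loop over a conditions-by-targets grid, in the spirit of the study described above; the predictive model and selection rule here are illustrative stand-ins, not the published algorithm.

      import numpy as np

      rng = np.random.default_rng(0)
      n_cond, n_targ, batch, n_rounds = 48, 48, 8, 83      # 83 * 8 experiments ~ 29% of all 2304 pairs

      truth = rng.random((n_cond, n_targ)) < 0.3           # unknown ground-truth effects
      measured = np.full((n_cond, n_targ), np.nan)         # observations collected so far

      def freq(mat, axis):
          """Observed effect frequency along an axis, defaulting to 0.5 where unmeasured."""
          counts = np.sum(~np.isnan(mat), axis=axis, keepdims=True)
          sums = np.nansum(mat, axis=axis, keepdims=True)
          return np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)

      for _ in range(n_rounds):
          pred = 0.5 * (freq(measured, 1) + freq(measured, 0))       # crude per-pair prediction
          score = np.where(np.isnan(measured), 1.0 - 2.0 * np.abs(pred - 0.5), -1.0)
          picks = np.argsort(score, axis=None)[-batch:]              # most uncertain unmeasured pairs
          rows, cols = np.unravel_index(picks, measured.shape)
          measured[rows, cols] = truth[rows, cols]                   # "run" the chosen experiments

      final = np.where(np.isnan(measured), pred > 0.5, measured)
      print(f"fraction run: {np.mean(~np.isnan(measured)):.2f}, accuracy: {np.mean(final == truth):.2f}")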

  19. The empathy impulse: A multinomial model of intentional and unintentional empathy for pain.

    PubMed

    Cameron, C Daryl; Spring, Victoria L; Todd, Andrew R

    2017-04-01

    Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. imagine-other) perspective-taking uniquely increased Unintentional Empathy. In Experiment 3, Intentional and Unintentional Empathy were stronger toward targets with typical (vs. atypical) pain outcomes, suggesting that outcome information matters and that effects on the PIT are not reducible to affective priming. Typicality of pain outcomes more weakly affected task performance when target stimuli were merely categorized rather than judged for painfulness, suggesting that effects on the latter are not reducible to semantic priming. In Experiment 4, Unintentional Empathy was stronger for participants who engaged in costly donation to cancer charities, but this parameter was also high for those who donated to an objectively worse but socially more popular charity, suggesting that overly high empathy may facilitate maladaptive altruism. Theoretical and practical applications of our modeling approach for understanding variation in empathy are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
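
    The race-model comparison referred to above is conventionally based on Miller's inequality; stated generically (not reproduced from this record), with A, V, and AV denoting auditory, visual, and audiovisual targets:

      P(RT \le t \mid AV) \;\le\; P(RT \le t \mid A) + P(RT \le t \mid V) \quad \text{for all } t

    and the amount of violation at time t can be quantified as

      \Delta(t) = P(RT \le t \mid AV) - \big[\,P(RT \le t \mid A) + P(RT \le t \mid V)\,\big],

    with positive Δ(t) taken as evidence of integration beyond statistical facilitation.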

  1. TOPEX/POSEIDON orbit maintenance maneuver design

    NASA Technical Reports Server (NTRS)

    Bhat, R. S.; Frauenholz, R. B.; Cannell, Patrick E.

    1990-01-01

    The Ocean Topography Experiment (TOPEX/POSEIDON) mission orbit requirements are outlined, as well as its control and maneuver spacing requirements including longitude and time targeting. A ground-track prediction model dealing with geopotential, luni-solar gravity, and atmospheric-drag perturbations is considered. Targeting with all modeled perturbations is discussed, and such ground-track prediction errors as initial semimajor axis, orbit-determination, maneuver-execution, and atmospheric-density modeling errors are assessed. A longitude targeting strategy for two extreme situations is investigated employing all modeled perturbations and prediction errors. It is concluded that atmospheric-drag modeling errors are the prevailing ground-track prediction error source early in the mission during high solar flux, and that low solar-flux levels expected late in the experiment stipulate smaller maneuver magnitudes.

  2. Extending the Duluth Model to Workplace Bullying: A Modification and Adaptation of the Workplace Power-Control Wheel.

    PubMed

    Scott, Hannah S

    2018-03-01

    Workplace bullying (WB) is an increasingly prevalent topic in the nursing literature. Recently, a new concept has been introduced into WB research to explain the motivations of WB instigators using elements of the Power-Control Wheel (PCW). Initially, this wheel was designed to assist intimate partner violence (IPV) targets/victims identify patterns of abuse and intervene with male batterers/instigators. Research examining IPV and victims/survivors of WB demonstrate that targets often share common abusive experiences, including intimidation, coercion and threats, isolation, and economic and emotional abuse. This article demonstrates clear support for the Duluth Model and its application to WB target experiences. Applications of this model to identify WB and assist individuals to identify and describe experiences of abusive work environments are discussed.

  3. The relationship study between image features and detection probability based on psychology experiments

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. Obtaining detection probability in practice, however, consumes a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perceptual quantity through psychology experiments, a probability model has been established, as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single-feature similarity degree and perceptual quantity was established on psychological principles, and psychophysical experiments on target interpretation were designed, involving about five hundred interpreters and two hundred images. To reduce correlation among the image features, a large set of synthetic images was generated, each differing from its background in only a single feature: brightness, chromaticity, texture, or shape. By analyzing and fitting the large body of experimental data, the model parameters were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perceptual quantity and target detection probability was obtained. Verified against a large number of target interpretations in practice, the model yields target detection probability quickly and objectively.
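
    Purely as an illustration of the model structure described above, the sketch below maps four target/background feature similarity degrees to a detection probability; the weights and the logistic link are placeholder assumptions, not the fitted quantities from these experiments.

      import numpy as np

      def detection_probability(similarities, weights=(0.4, 0.2, 0.2, 0.2),
                                slope=6.0, midpoint=0.5):
          """Illustrative mapping from four feature similarity degrees (brightness,
          chromaticity, texture, shape; each in [0, 1], 1 = target identical to
          background) to a detection probability."""
          s = float(np.dot(weights, similarities))   # combined similarity to the background
          contrast = 1.0 - s                         # higher contrast -> easier detection
          return 1.0 / (1.0 + np.exp(-slope * (contrast - midpoint)))

      # A well-camouflaged target: high similarity on every feature, low detectability
      print(detection_probability([0.8, 0.6, 0.7, 0.9]))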

  4. Modeling criterion shifts and target checking in prospective memory monitoring.

    PubMed

    Horn, Sebastian S; Bayen, Ute J

    2015-01-01

    Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the reasons for this cost effect. This study uses diffusion model analysis to decompose monitoring processes in the PM paradigm. Across 4 experiments, performing a PM task increased latencies in an ongoing lexical decision task. A large portion of this effect was explained by consistent increases in boundary separation; additional increases in nondecision time emerged in a nonfocal PM task and explained variance in PM performance (Experiment 1), likely reflecting a target-checking strategy before and after the ongoing decision (Experiment 2). However, we found that possible target-checking strategies may depend on task characteristics. That is, instructional emphasis on the importance of ongoing decisions (Experiment 3) or the use of focal targets (Experiment 4) eliminated the contribution of nondecision time to the cost of PM, but left participants in a mode of increased cautiousness. The modeling thus sheds new light on the cost effect seen in many PM studies and suggests that people approach ongoing activities more cautiously when they need to remember an intended action. PsycINFO Database Record (c) 2015 APA, all rights reserved.
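
    For readers unfamiliar with the parameters being estimated above, a bare-bones random-walk simulation of the diffusion model is sketched below; the parameter values are arbitrary and only illustrate how boundary separation and nondecision time shape response times.

      import numpy as np

      def simulate_trial(drift=0.25, boundary=1.2, nondecision=0.35,
                         dt=0.001, noise=1.0, rng=np.random.default_rng()):
          """One diffusion-model trial: returns (response time in s, upper boundary hit?)."""
          x, t = 0.0, 0.0
          while abs(x) < boundary / 2:               # evidence accumulates between -a/2 and +a/2
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return nondecision + t, x > 0

      # Larger boundary separation (more cautious responding) lengthens mean response time
      rng = np.random.default_rng(1)
      for a in (1.0, 1.6):
          rts = [simulate_trial(boundary=a, rng=rng)[0] for _ in range(2000)]
          print(f"boundary separation {a}: mean RT = {np.mean(rts):.3f} s")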

  5. Pushing typists back on the learning curve: contributions of multiple linguistic units in the acquisition of typing skill.

    PubMed

    Yamaguchi, Motonori; Logan, Gordon D

    2014-11-01

    The present study investigated the way people acquire and control skilled performance in the context of typewriting. Typing skill was degraded by changing the location of a key (target key) while retaining the locations of other keys to disable an association between the letter and the key. We conducted 4 experiments: Experiment 1 demonstrated that disabling a letter-key association affected not only the execution of the target keystroke but also the planning of other keystrokes for words involving the target key. In Experiments 2-4, typists practiced with a new target location and then transferred to a condition in which they typed the practiced words with the original key location (Experiment 2) or typed new words with the practiced key location (Experiments 3 and 4). Experiment 2 showed that the newly acquired letter-key association interfered with the execution of the original keystroke but not planning. Experiments 3 and 4 demonstrated that acquisition of the new letter-key association depended on multiple levels of linguistic units. Experiment 4 demonstrated that acquisition of the new association depended on sequences both before and after the target keystroke. We discuss implications of the results for 2 prominent approaches to modeling sequential behavior: hierarchical control and recurrent network models. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  6. Modelling debris and shrapnel generation in inertial confinement fusion experiments

    DOE PAGES

    Eder, D. C.; Fisher, A. C.; Koniges, A. E.; ...

    2013-10-24

    Modelling and mitigation of damage are crucial for safe and economical operation of high-power laser facilities. Experiments at the National Ignition Facility use a variety of targets with a range of laser energies spanning more than two orders of magnitude (~14 kJ to ~1.9 MJ). Low-energy inertial confinement fusion experiments are used to study early-time x-ray load symmetry on the capsule, shock timing, and other physics issues. For these experiments, a significant portion of the target is not completely vaporized and late-time (hundreds of ns) simulations are required to study the generation of debris and shrapnel from these targets. Damage to optics and diagnostics from shrapnel is a major concern for low-energy experiments. Here, we provide the first full-target simulations of entire cryogenic targets, including the Al thermal mechanical package and Si cooling rings. We use a 3D multi-physics multi-material hydrodynamics code, ALE-AMR, for these late-time simulations. The mass, velocity, and spatial distribution of shrapnel are calculated for three experiments with laser energies ranging from 14 to 250 kJ. We calculate damage risk to optics and diagnostics for these three experiments. For the lowest energy re-emit experiment, we provide a detailed analysis of the effects of shrapnel impacts on optics and diagnostics and compare with observations of damage sites.

  7. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.

  8. HARP targets pion production cross section and yield measurements: Implications for MiniBooNE neutrino flux

    NASA Astrophysics Data System (ADS)

    Wickremasinghe, Don Athula Abeyarathna

    The prediction of the muon neutrino flux from a 71.0 cm long beryllium target for the MiniBooNE experiment is based on a measured pion production cross section which was taken from a short beryllium target (2.0 cm thick - 5% nuclear interaction length) in the Hadron Production (HARP) experiment at CERN. To verify the extrapolation to our longer target, HARP also measured the pion production from 20.0 cm and 40.0 cm beryllium targets. The measured production yields on targets of 50% and 100% nuclear interaction lengths in the kinematic range of momentum from 0.75 GeV/c to 6.5 GeV/c and the range of angle from 30 mrad to 210 mrad are presented along with an update of the short target cross sections. The best fitted extended Sanford-Wang (SW) model parameterization for the updated short beryllium target positive pion production cross section is presented. Yield measurements for all three targets are also compared with those from the Monte Carlo predictions in the MiniBooNE experiment for different SW parameterizations. The comparison of muon neutrino flux predictions for the updated SW model is presented.
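
    For reference, the Sanford-Wang parameterization referred to throughout is conventionally written as below (the usual HARP/MiniBooNE form; the fitted coefficients from this analysis are not reproduced here):

      \frac{d^{2}\sigma}{dp\,d\Omega}(p,\theta) =
        c_{1}\,p^{c_{2}}\left(1 - \frac{p}{p_{beam} - c_{9}}\right)
        \exp\!\left[-\,\frac{c_{3}\,p^{c_{4}}}{p_{beam}^{\,c_{5}}}
                    \;-\; c_{6}\,\theta\,\bigl(p - c_{7}\,p_{beam}\cos^{c_{8}}\theta\bigr)\right]

    where p and θ are the pion momentum and angle in the laboratory frame, p_beam is the incident proton momentum, and c_1 through c_9 are the fitted coefficients.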

  9. Multiple lesion track structure model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.

    1992-01-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.

  10. Active machine learning-driven experimentation to determine compound effects on protein patterns

    PubMed Central

    Naik, Armaghan W; Kangas, Joshua D; Sullivan, Devin P; Murphy, Robert F

    2016-01-01

    High throughput screening determines the effects of many conditions on a given biological target. Currently, to estimate the effects of those conditions on other targets requires either strong modeling assumptions (e.g. similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance. DOI: http://dx.doi.org/10.7554/eLife.10047.001 PMID:26840049

  11. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

    Target feature extraction and identification is an important and complicated step in target interpretation: it directly affects the interpreter's psychosensorial response to the target infrared image and ultimately determines target viability. Using statistical decision theory and psychological principles, and by designing four psychophysical experiments, an interpretation model for infrared targets is established. The model obtains target detection probability by calculating the similarity degree of four features between the target region and the background region marked on the infrared image. Verified against a large number of target interpretations in practice, the model can effectively simulate the target interpretation and detection process and yield objective interpretation results, providing technical support for target extraction, identification, and decision-making.

  12. Efficient Modeling and Active Learning Discovery of Biological Responses

    PubMed Central

    Naik, Armaghan W.; Kangas, Joshua D.; Langmead, Christopher J.; Murphy, Robert F.

    2013-01-01

    High throughput and high content screening involve determination of the effect of many compounds on a given target. As currently practiced, screening for each new target typically makes little use of information from screens of prior targets. Further, choices of compounds to advance to drug development are made without significant screening against off-target effects. The overall drug development process could be made more effective, as well as less expensive and time consuming, if potential effects of all compounds on all possible targets could be considered, yet the cost of such full experimentation would be prohibitive. In this paper, we describe a potential solution: probabilistic models that can be used to predict results for unmeasured combinations, and active learning algorithms for efficiently selecting which experiments to perform in order to build those models and determining when to stop. Using simulated and experimental data, we show that our approaches can produce powerful predictive models without exhaustive experimentation and can learn them much faster than by selecting experiments at random. PMID:24358322

  13. Experimental impact cratering provides ground truth data for understanding planetary-scale collision processes

    NASA Astrophysics Data System (ADS)

    Poelchau, Michael H.; Deutsch, Alex; Kenkmann, Thomas

    2013-04-01

    Impact cratering is generally accepted as one of the primary processes that shape planetary surfaces in the solar system. While post-impact analysis of craters by remote sensing or field work gives many insights into this process, impact cratering experiments have several advantages for impact research: 1) excavation and ejection processes can be directly observed, 2) physical parameters of the experiment are defined and can be varied, and 3) cratered target material can be analyzed post-impact in an unaltered, uneroded state. The main goal of the MEMIN project is to comprehensively quantify impact processes by conducting a stringently controlled experimental impact cratering campaign on the meso-scale with a multidisciplinary analytical approach. As a unique feature we use two-stage light gas guns capable of producing impact craters in the decimeter size-range in solid rocks that, in turn, allow detailed spatial analysis of petrophysical, structural, and geochemical changes in target rocks and ejecta. In total, we have carried out 24 experiments at the facilities of the Fraunhofer EMI, Freiburg - Germany. Steel, aluminum, and iron meteorite projectiles ranging in diameter from 2.5 to 12 mm were accelerated to velocities ranging from 2.5 to 7.8 km/s. Targets were solid rocks, namely sandstone, quartzite and tuff that were either dry or saturated with water. In the experimental setup, high speed framing cameras monitored the impact process, ultrasound sensors were attached to the target to record the passage of the shock wave, and special particle catchers were positioned opposite of the target surface to capture the ejected target and projectile material. In addition to the cratering experiments, planar shock recovery experiments were performed on the target material, and numerical models of the cratering process were developed. The experiments resulted in craters with diameters up to 40 cm, which is unique in laboratory cratering research. Target porosity exponentially reduces crater volumes and cratering efficiency relative to non-porous rocks, and also yields less steep ejecta angles. Microstructural analysis of the subsurface shows a zone of pervasive grain crushing and pore space reduction. This is in good agreement with new mesoscale numerical models, which are able to quantify localized shock pressure behavior in the target's pore space. Planar shock recovery experiments confirm these local pressure excursions, based on microanalysis of shock metamorphic features in quartz. Saturation of porous target rocks with water counteracts many of the effects of porosity. Post-impact analysis of projectile remnants shows that during mixing of projectile and target melts, the Fe of the projectile is preferentially partitioned into target melt to a greater degree than Ni and Co. We plan to continue evaluating the experimental results in combination with numerical models. These models help to quantify and evaluate cratering processes, while experimental data serve as benchmarks to validate the improved numerical models, thus helping to "bridge the gap" between experiments and nature. The results confirm and expand current crater scaling laws, and make an application to craters on planetary surfaces possible.

  14. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  15. Their pain, our pleasure: stereotype content and schadenfreude.

    PubMed

    Cikara, Mina; Fiske, Susan T

    2013-09-01

    People often fail to empathize with others, and sometimes even experience schadenfreude-pleasure at others' misfortunes. One potent predictor of schadenfreude is envy, which, according to the stereotype content model, is elicited by high-status, competitive targets. Here we review our recent research program investigating the relationships among stereotypes, envy, schadenfreude, and harm. Experiment 1 demonstrates that stereotypes are sufficient to influence affective responses to targets' misfortunes; participants not only report feeling less negative when misfortunes befall high-status, competitive targets as compared to other targets, they also smile more (assessed with facial EMG). Experiment 2 replicates the self-report findings from Experiment 1 and assesses behavioral tendencies toward envied targets; participants are more willing to endorse harming high-status, competitive targets as compared to other targets. Experiment 3 turns off the schadenfreude response by manipulating status and competition-relevant information regarding envied targets. Finally, Experiment 4 investigates affective and neural markers of intergroup envy and schadenfreude in the context of a long-standing sports rivalry and the extent to which neurophysiological correlates of schadenfreude are related to self-reported likelihood of harming rival team fans. We conclude with implications and future directions. © 2013 New York Academy of Sciences.

  16. Their pain, our pleasure: stereotype content and schadenfreude

    PubMed Central

    Cikara, Mina; Fiske, Susan T.

    2015-01-01

    People often fail to empathize with others, and sometimes even experience schadenfreude – pleasure at others' misfortunes. One potent predictor of schadenfreude is envy, which, according to the stereotype content model, is elicited by high-status, competitive targets. Here we review our recent research program investigating the relationships among stereotypes, envy, schadenfreude, and harm. Experiment 1 demonstrates that stereotypes are sufficient to influence affective responses to targets' misfortunes; participants not only report feeling less negative when misfortunes befall high-status, competitive targets as compared to other targets, they also smile more (assessed with facial EMG). Experiment 2 replicates the self-report findings from Experiment 1 and assesses behavioral tendencies toward envied targets; participants are more willing to endorse harming high-status, competitive targets as compared to other targets. Experiment 3 turns off the schadenfreude response by manipulating status and competition-relevant information regarding envied targets. Finally, Experiment 4 investigates affective and neural markers of intergroup envy and schadenfreude in the context of a long-standing sports rivalry and the extent to which neurophysiological correlates of schadenfreude are related to self-reported likelihood of harming rival team fans. We conclude with implications and future directions. PMID:25708079

  17. Evaluation of an imputed pitch velocity model of the auditory tau effect.

    PubMed

    Henry, Molly J; McAuley, J Devin; Zaleha, Marta

    2009-08-01

    This article extends an imputed pitch velocity model of the auditory kappa effect proposed by Henry and McAuley (2009a) to the auditory tau effect. Two experiments were conducted using an AXB design in which listeners judged the relative pitch of a middle target tone (X) in ascending and descending three-tone sequences. In Experiment 1, sequences were isochronous, establishing constant fast, medium, and slow velocity conditions. No systematic distortions in perceived target pitch were observed, and thresholds were similar across velocity conditions. Experiment 2 introduced to-be-ignored variations in target timing. Variations in target timing that deviated from constant velocity conditions introduced systematic distortions in perceived target pitch, indicative of a robust auditory tau effect. Consistent with an auditory motion hypothesis, the magnitude of the tau effect was larger at faster velocities. In addition, the tau effect was generally stronger for descending sequences than for ascending sequences. Combined with previous work on the auditory kappa effect, the imputed velocity model and associated auditory motion hypothesis provide a unified quantitative account of both auditory tau and kappa effects. In broader terms, these findings add support to the view that pitch and time relations in auditory patterns are fundamentally interdependent.

  18. HARP targets pion production cross section and yield measurements. Implications for MiniBooNE neutrino flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickremasinghe, Don Athula Abeyarathna

    2015-07-01

    The prediction of the muon neutrino flux from a 71.0 cm long beryllium target for the MiniBooNE experiment is based on a measured pion production cross section taken from a short beryllium target (2.0 cm thick, 5% of a nuclear interaction length) in the Hadron Production (HARP) experiment at CERN. To verify the extrapolation to our longer target, HARP also measured the pion production from 20.0 cm and 40.0 cm beryllium targets. The measured production yields, d²N_{π±}(p, θ)/dp dΩ, on targets of 50% and 100% nuclear interaction lengths, in the kinematic range of momentum from 0.75 GeV/c to 6.5 GeV/c and of angle from 30 mrad to 210 mrad, are presented along with an update of the short-target cross sections. The best-fit extended Sanford-Wang (SW) parameterization for the updated short beryllium target π+ production cross section is presented. Yield measurements for all three targets are also compared with Monte Carlo predictions in the MiniBooNE experiment for different SW parameterizations. A comparison of ν_μ flux predictions for the updated SW model is presented.
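
    For reference, the Sanford-Wang parameterization referred to here is conventionally written in the form below; this standard expression and the parameter labels c1-c9 are taken from the general HARP/MiniBooNE flux literature rather than from this record:

      \[
      \frac{d^{2}\sigma}{dp\,d\Omega}(p,\theta) \;=\; c_{1}\, p^{c_{2}} \left(1 - \frac{p}{p_{B} - c_{9}}\right)
      \exp\!\left[-c_{3}\,\frac{p^{c_{4}}}{p_{B}^{c_{5}}} \;-\; c_{6}\,\theta\,\bigl(p - c_{7}\, p_{B}\cos^{c_{8}}\theta\bigr)\right],
      \]

    where p and θ are the momentum and angle of the produced pion and p_B is the beam momentum.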

  19. Limitations of contrast enhancement for infrared target identification

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2009-05-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content. Automatic contrast enhancement techniques do not always achieve this improvement. In some cases, the contrast can increase to a level of target saturation. This paper assesses the range-performance effects of contrast enhancement for target identification as a function of image saturation. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing contrast enhancement processed images at various levels of saturation. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of specific feature saturation or enhancement. The measured results follow the predicted performance based on the target task difficulty metric used in NVThermIP for the non-saturated cases. The saturated images reduce the information contained in the target and performance suffers. The model treats the contrast of the target as uniform over spatial frequency. As the contrast is enhanced, the model assumes that the contrast is enhanced uniformly over the spatial frequencies. After saturation, the spatial cues that differentiate one tank from another are located in a limited band of spatial frequencies. A frequency dependent treatment of target contrast is needed to predict performance of over-processed images.

  20. Spatial covert attention increases contrast sensitivity across the CSF: support for signal enhancement

    NASA Technical Reports Server (NTRS)

    Carrasco, M.; Penpeci-Talgar, C.; Eckstein, M.

    2000-01-01

    This study is the first to report the benefits of spatial covert attention on contrast sensitivity in a wide range of spatial frequencies when a target alone was presented in the absence of a local post-mask. We used a peripheral precue (a small circle indicating the target location) to explore the effects of covert spatial attention on contrast sensitivity as assessed by orientation discrimination (Experiments 1-4), detection (Experiments 2 and 3) and localization (Experiment 3) tasks. In all four experiments the target (a Gabor patch ranging in spatial frequency from 0.5 to 10 cpd) was presented alone in one of eight possible locations equidistant from fixation. Contrast sensitivity was consistently higher for peripherally- than for neutrally-cued trials, even though we eliminated variables (distracters, global masks, local masks, and location uncertainty) that are known to contribute to an external noise reduction explanation of attention. When observers were presented with vertical and horizontal Gabor patches an external noise reduction signal detection model accounted for the cueing benefit in a discrimination task (Experiment 1). However, such a model could not account for this benefit when location uncertainty was reduced, either by: (a) Increasing overall performance level (Experiment 2); (b) increasing stimulus contrast to enable fine discriminations of slightly tilted suprathreshold stimuli (Experiment 3); and (c) presenting a local post-mask (Experiment 4). Given that attentional benefits occurred under conditions that exclude all variables predicted by the external noise reduction model, these results support the signal enhancement model of attention.

  1. Multi-beam effects on backscatter and its saturation in experiments with conditions relevant to ignition

    DOE PAGES

    Kirkwood, R. K.; Michel, P.; London, R.; ...

    2011-05-26

    To optimize the coupling to indirect drive targets in the National Ignition Campaign (NIC) at the National Ignition Facility, a model of stimulated scattering produced by multiple laser beams is used. The model has shown that scatter of the 351 nm beams can be significantly enhanced over single beam predictions in ignition relevant targets by the interaction of the multiple crossing beams with a millimeter scale length, 2.5 keV, 0.02-0.05 × critical density plasma. The model uses a suite of simulation capabilities and its key aspects are benchmarked with experiments at smaller laser facilities. The model has also influenced the design of the initial targets used for NIC by showing that both the stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS) can be reduced by the reduction of the plasma density in the beam intersection volume that is caused by an increase in the diameter of the laser entrance hole (LEH). In this model, a linear wave response leads to a small gain exponent produced by each crossing quad of beams (≲1 per quad), which amplifies the scattering that originates in the target interior where the individual beams are separated and crosses many or all other beams near the LEH as it exits the target. As a result, all 23 crossing quads of beams produce a total gain exponent of several or greater for seeds of light with wavelengths in the range that is expected for scattering from the interior (480 to 580 nm for SRS). This means that in the absence of wave saturation, the overall multi-beam scatter will be significantly larger than the expectations for single beams. The potential for non-linear saturation of the Langmuir waves amplifying SRS light is also analyzed with a two dimensional, vectorized, particle in cell code (2D VPIC) that is benchmarked by amplification experiments in a plasma with normalized parameters similar to ignition targets. The physics of cumulative scattering by multiple crossing beams that simultaneously amplify the same SBS light wave is further demonstrated in experiments that benchmark the linear models for the ion waves amplifying SBS. Here, the expectation from this model and its experimental benchmarks is shown to be consistent with observations of stimulated Raman scatter in the first series of energetic experiments with ignition targets, confirming the importance of the multi-beam scattering model for optimizing coupling.
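
    The cumulative effect described here (a small linear gain per quad, with exponents adding over many crossings) can be illustrated with a two-line estimate; the per-quad value below is an assumed round number consistent with the quoted "≲1 per quad", not a measured gain.

      import math

      # Illustrative only: cumulative small-signal amplification of a scattered seed that crosses
      # many quads, each contributing a modest linear gain exponent (per-quad value assumed).
      per_quad_gain = [0.3] * 23                 # gain exponents G_i for each of the 23 crossing quads
      total_exponent = sum(per_quad_gain)        # in linear theory the exponents add
      amplification = math.exp(total_exponent)   # seed intensity is multiplied by exp(sum G_i)
      print(f"total gain exponent = {total_exponent:.1f}, amplification = {amplification:.0f}x")
      # 23 quads x 0.3 each -> exponent 6.9, i.e. roughly 1000x, far above any single-beam estimate.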

  2. Modeling pressure rise in gas targets

    NASA Astrophysics Data System (ADS)

    Jahangiri, P.; Lapi, S. E.; Publicover, J.; Buckley, K.; Martinez, D. M.; Ruth, T. J.; Hoehr, C.

    2017-05-01

    The purpose of this work is to introduce a universal mathematical model to explain gas target behaviour on the steady-state time scale. To this end, an analytical model is proposed to study the pressure rise in the targets used to produce medical isotopes on low-energy cyclotrons. The model is developed based on the assumption that during irradiation the system reaches steady state. The model is verified by various experiments performed at different beam currents, gas types, and initial pressures at the 13 MeV cyclotron at TRIUMF. Excellent agreement is achieved.
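
    The abstract does not reproduce the model itself. As a minimal illustration of the steady-state idea only, a closed, fixed-volume ideal-gas target rises in pressure in proportion to its density-weighted mean temperature under beam heating; all numbers below are assumptions, not values from the paper.

      # Minimal steady-state illustration (assumption: closed, fixed-volume ideal-gas target).
      P0 = 30.0          # initial fill pressure (bar), illustrative
      T0 = 293.0         # fill temperature (K)
      T_mean = 650.0     # assumed density-weighted mean gas temperature under beam (K)
      P_beam = P0 * T_mean / T0   # ideal-gas law at constant volume and fixed gas inventory
      print(f"illustrative pressure under irradiation: {P_beam:.0f} bar")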

  3. Comparison of Geant4 multiple Coulomb scattering models with theory for radiotherapy protons

    NASA Astrophysics Data System (ADS)

    Makarova, Anastasia; Gottschalk, Bernard; Sauerwein, Wolfgang

    2017-08-01

    Usually, Monte Carlo models are validated against experimental data. However, models of multiple Coulomb scattering (MCS) in the Gaussian approximation are exceptional in that we have theories which are probably more accurate than the experiments which have, so far, been done to test them. In problems directly sensitive to the distribution of angles leaving the target, the relevant theory is the Molière/Fano/Hanson variant of Molière theory (Gottschalk et al 1993 Nucl. Instrum. Methods Phys. Res. B 74 467-90). For transverse spreading of the beam in the target itself, the theory of Preston and Koehler (Gottschalk (2012 arXiv:1204.4470)) holds. Therefore, in this paper we compare Geant4 simulations, using the Urban and Wentzel models of MCS, with theory rather than experiment, revealing trends which would otherwise be obscured by experimental scatter. For medium-energy (radiotherapy) protons, and low-Z (water-like) target materials, Wentzel appears to be better than Urban in simulating the distribution of outgoing angles. For beam spreading in the target itself, the two models are essentially equal.
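
    For readers who want the Gaussian-approximation scattering angle in closed form, the Highland parameterization (as given in the PDG review) is a convenient stand-in for the more accurate Molière/Fano/Hanson treatment cited above; the example kinematics below are assumed for illustration, not taken from the paper.

      import math

      # Highland formula for the RMS multiple-Coulomb-scattering angle (Gaussian approximation).
      def highland_theta0(p_MeV_c, beta, z, x_over_X0):
          """RMS scattering angle (radians) for a particle of charge z, momentum p (MeV/c),
          speed beta, traversing x/X0 radiation lengths of material."""
          return (13.6 / (beta * p_MeV_c)) * abs(z) * math.sqrt(x_over_X0) * \
                 (1.0 + 0.038 * math.log(x_over_X0))

      # Illustrative radiotherapy-like case: ~160 MeV proton (p ~ 570 MeV/c, beta ~ 0.52)
      # in ~3 cm of water (x/X0 ~ 0.083); gives roughly 12 mrad.
      print(highland_theta0(p_MeV_c=570.0, beta=0.52, z=1, x_over_X0=0.083))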

  4. Comparison of Geant4 multiple Coulomb scattering models with theory for radiotherapy protons.

    PubMed

    Makarova, Anastasia; Gottschalk, Bernard; Sauerwein, Wolfgang

    2017-07-06

    Usually, Monte Carlo models are validated against experimental data. However, models of multiple Coulomb scattering (MCS) in the Gaussian approximation are exceptional in that we have theories which are probably more accurate than the experiments which have, so far, been done to test them. In problems directly sensitive to the distribution of angles leaving the target, the relevant theory is the Molière/Fano/Hanson variant of Molière theory (Gottschalk et al 1993 Nucl. Instrum. Methods Phys. Res. B 74 467-90). For transverse spreading of the beam in the target itself, the theory of Preston and Koehler (Gottschalk (2012 arXiv:1204.4470)) holds. Therefore, in this paper we compare Geant4 simulations, using the Urban and Wentzel models of MCS, with theory rather than experiment, revealing trends which would otherwise be obscured by experimental scatter. For medium-energy (radiotherapy) protons, and low-Z (water-like) target materials, Wentzel appears to be better than Urban in simulating the distribution of outgoing angles. For beam spreading in the target itself, the two models are essentially equal.

  5. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
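
    The quoted rates imply the following back-of-the-envelope cost estimate; the size of the full Kepler target list is an assumption used only to complete the arithmetic.

      # Rough cost check of the "shallow" FLTI experiment, using the rates quoted in the abstract.
      inj_per_core_hour = 16          # injections generated per Pleiades core per hour (abstract)
      inj_per_star = 2000             # injections required per target star (abstract)
      core_hours_per_star = inj_per_star / inj_per_core_hour       # = 125 core-hours per star

      n_kepler_targets = 200_000      # assumption: approximate size of the Kepler target list
      n_stars = int(0.16 * n_kepler_targets)                        # 16% of targets (abstract)
      total_core_hours = n_stars * core_hours_per_star
      wall_clock_hours = 200                                        # quoted runtime
      cores_needed = total_core_hours / wall_clock_hours
      # ~125 core-hours per star, ~4,000,000 core-hours total, ~20,000 cores for 200 hours
      print(core_hours_per_star, total_core_hours, round(cores_needed))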

  6. Identification of ground targets from airborne platforms

    NASA Astrophysics Data System (ADS)

    Doe, Josh; Boettcher, Evelyn; Miller, Brian

    2009-05-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly EO/IR systems are being used on manned and un-manned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation division has been tasked to compare target identification performance between ground-to-ground and air-to-ground platforms for both IR and visible spectra for a set of wheeled utility vehicles. To measure performance, several forced choice experiments were designed and administered and the results analyzed. This paper describes these experiments and reports the results as well as the NVTherm model calibration factors derived for the infrared imagery.

  7. Polyurethane Foam Impact Experiments and Simulations

    NASA Astrophysics Data System (ADS)

    Kipp, M. E.; Chhabildas, L. C.; Reinhart, W. D.; Wong, M. K.

    1999-06-01

    Uniaxial strain impact experiments with a rigid polyurethane foam of nominal density 0.22 g/cc are reported. A 6 mm thick foam impactor is mounted on the face of a projectile and impacts a thin (1 mm) target plate of aluminum or copper, on which the rear free surface velocity history is acquired with a VISAR. Impact velocities ranged from 300 to 1500 m/s. The velocity record monitors the initial shock from the foam transmitted through the target, followed by a reverberation within the target plate as the wave interacts with the compressed foam at the impact interface and the free recording surface. These one-dimensional uniaxial strain impact experiments were modeled using a traditional p-alpha porous material model for the distended polyurethane, which generally captured the motion imparted to the target by the foam. Some of the high frequency aspects of the data, reflecting the heterogeneous nature of the foam, can be recovered with computations of fully 3-dimensional explicit representations of this porous material.

  8. Investigating the flow of information during speaking: the impact of morpho-phonological, associative, and categorical picture distractors on picture naming

    PubMed Central

    Bölte, Jens; Böhl, Andrea; Dobel, Christian; Zwitserlood, Pienie

    2015-01-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl–garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitute evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels. PMID:26528209

  9. Investigating the flow of information during speaking: the impact of morpho-phonological, associative, and categorical picture distractors on picture naming.

    PubMed

    Bölte, Jens; Böhl, Andrea; Dobel, Christian; Zwitserlood, Pienie

    2015-01-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl-garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitute evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels.

  10. Comparison of fragments created by low- and hyper-velocity impacts

    NASA Astrophysics Data System (ADS)

    Hanada, T.; Liou, J.-C.

    This paper summarizes two new satellite impact experiments. The objective of the experiments was to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low-velocity of 1.5 km/s using a 40-g aluminum alloy sphere. The second experiment was performed at a hyper-velocity of 4.4 km/s using a 4-g aluminum alloy sphere. The target satellites were 15 cm × 15 cm × 15 cm in size and 800 g in mass. The ratios of impact energy to target mass for the two experiments were approximately the same. The target satellites were completely fragmented in both experiments, although there were some differences in the characteristics of the fragments. The projectile of the low-velocity impact experiment was partially fragmented while the projectile of the hyper-velocity impact experiment was completely fragmented beyond recognition. To date, approximately 1500 fragments from each impact experiment have been collected for detailed analysis. Each piece has been weighed, measured, and analyzed based on the analytic method used in the NASA Standard Breakup Model (2000 revision). These fragments account for about 95% of the target mass for both impact experiments. Preliminary analysis results will be presented in this paper.
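
    A quick check of the statement that the ratios of impact energy to target mass were approximately the same, using only the masses and velocities quoted above:

      # Energy-to-target-mass ratio for the two shots, from the numbers in the abstract.
      def emr(projectile_mass_kg, velocity_m_s, target_mass_kg):
          """Projectile kinetic energy divided by target mass, in J/g."""
          return 0.5 * projectile_mass_kg * velocity_m_s**2 / (target_mass_kg * 1000.0)

      print(emr(0.040, 1500.0, 0.800))   # low-velocity shot:   ~56 J/g
      print(emr(0.004, 4400.0, 0.800))   # hyper-velocity shot: ~48 J/g -> approximately the same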

  11. Docking and scoring protein complexes: CAPRI 3rd Edition.

    PubMed

    Lensink, Marc F; Méndez, Raúl; Wodak, Shoshana J

    2007-12-01

    The performance of methods for predicting protein-protein interactions at the atomic scale is assessed by evaluating blind predictions performed during 2005-2007 as part of Rounds 6-12 of the community-wide experiment on Critical Assessment of PRedicted Interactions (CAPRI). These Rounds also included a new scoring experiment, where a larger set of models contributed by the predictors was made available to groups developing scoring functions. These groups scored the uploaded set and submitted their own best models for assessment. The structures of nine protein complexes including one homodimer were used as targets. These targets represent biologically relevant interactions involved in gene expression, signal transduction, RNA, or protein processing and membrane maintenance. For all the targets except one, predictions started from the experimentally determined structures of the free (unbound) components or from models derived by homology, making it mandatory for docking methods to model the conformational changes that often accompany association. In total, 63 groups and eight automatic servers, a substantial increase from previous years, submitted docking predictions, of which 1994 were evaluated here. Fifteen groups submitted 305 models for five targets in the scoring experiment. Assessment of the predictions reveals that 31 different groups produced models of acceptable and medium accuracy-but only one high accuracy submission-for all the targets, except the homodimer. In the latter, none of the docking procedures reproduced the large conformational adjustment required for correct assembly, underscoring yet again that handling protein flexibility remains a major challenge. In the scoring experiment, a large fraction of the groups attained the set goal of singling out the correct association modes from incorrect solutions in the limited ensembles of contributed models. But in general they seemed unable to identify the best models, indicating that current scoring methods are probably not sensitive enough. With the increased focus on protein assemblies, in particular by structural genomics efforts, the growing community of CAPRI predictors is engaged more actively than ever in the development of better scoring functions and means of modeling conformational flexibility, which hold promise for much progress in the future. (c) 2007 Wiley-Liss, Inc.

  12. Focused attention improves working memory: implications for flexible-resource and discrete-capacity models.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Lin, Hsuan-Yu; Oberauer, Klaus

    2014-10-01

    Performance in working memory (WM) tasks depends on the capacity for storing objects and on the allocation of attention to these objects. Here, we explored how capacity models need to be augmented to account for the benefit of focusing attention on the target of recall. Participants encoded six colored disks (Experiment 1) or a set of one to eight colored disks (Experiment 2) and were cued to recall the color of a target on a color wheel. In the no-delay condition, the recall-cue was presented after a 1,000-ms retention interval, and participants could report the retrieved color immediately. In the delay condition, the recall-cue was presented at the same time as in the no-delay condition, but the opportunity to report the color was delayed. During this delay, participants could focus attention exclusively on the target. Responses deviated less from the target's color in the delay than in the no-delay condition. Mixture modeling assigned this benefit to a reduction in guessing (Experiments 1 and 2) and transposition errors (Experiment 2). We tested several computational models implementing flexible or discrete capacity allocation, aiming to explain both the effect of set size, reflecting the limited capacity of WM, and the effect of delay, reflecting the role of attention to WM representations. Both models fit the data better when a spatially graded source of transposition error is added to their assumptions. The benefits of focusing attention could be explained by allocating to this object a higher proportion of the capacity to represent color.
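
    The mixture modeling referred to here is commonly formulated as a target-centred von Mises response distribution plus uniform guessing and swap (transposition) components. The sketch below follows that standard formulation; it is not necessarily the exact model the authors fit.

      import numpy as np
      from scipy.stats import vonmises

      def mixture_pdf(err, kappa, p_guess, p_swap, nontarget_errs):
          """Recall-error density: target-centred von Mises + uniform guessing + swaps toward
          non-target colors (standard mixture form; not necessarily the authors' exact model)."""
          p_target = 1.0 - p_guess - p_swap
          dens = p_target * vonmises.pdf(err, kappa)          # report centred on the target color
          dens += p_guess / (2 * np.pi)                       # uniform guessing
          if len(nontarget_errs):                             # transpositions to non-target colors
              dens += p_swap * np.mean(
                  [vonmises.pdf(err - nt, kappa) for nt in nontarget_errs], axis=0)
          return dens

      errs = np.deg2rad([0, 45, 90])                          # example response errors (radians)
      print(mixture_pdf(errs, kappa=8.0, p_guess=0.1, p_swap=0.1,
                        nontarget_errs=np.deg2rad([120, -100])))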

  13. A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention

    PubMed Central

    Strauss, Soeren; Heinke, Dietmar

    2012-01-01

    The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827

  14. Capture of shrinking targets with realistic shrink patterns.

    PubMed

    Hoffmann, Errol R; Chan, Alan H S; Dizmen, Coskun

    2013-01-01

    Previous research [Hoffmann, E. R. 2011. "Capture of Shrinking Targets." Ergonomics 54 (6): 519-530] reported experiments for capture of shrinking targets where the target decreased in size at a uniform rate. This work extended this research to targets having a shrink-size versus time pattern like that of an aircraft receding from an observer. In Experiment 1, the time to capture the target in this case was well correlated with Fitts' index of difficulty, measured at the time of capture of the target, a result that is in agreement with the 'balanced' model of Johnson and Hart [Johnson, W. W., and Hart, S. G. 1987. "Step Tracking Shrinking Targets." Proceedings of the human factors society 31st annual meeting, New York City, October 1987, 248-252]. Experiment 2 measured the probability of target capture for varying initial target sizes and target shrink time constants, defined as the time for the target to shrink to half its initial size. Data for the shrink time constant at 50% probability of capture were related to initial target size, but this did not greatly affect target capture, as the rate of target shrinking decreased rapidly with time.
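
    As an illustration of evaluating Fitts' index of difficulty at the moment of capture, the sketch below assumes a simple receding-object shrink law (apparent width falling hyperbolically with time); both the shrink law and the numbers are assumptions for illustration, not the experiment's stimulus parameters.

      import math

      def width_receding(w0, t, t_half):
          """Assumed apparent width of an object receding at constant speed: w0 / (1 + t/t_half),
          i.e. the width halves after t_half seconds."""
          return w0 / (1.0 + t / t_half)

      def fitts_id(distance, width):
          """Fitts' index of difficulty, ID = log2(2D / W), in bits."""
          return math.log2(2.0 * distance / width)

      D, w0, t_half = 200.0, 40.0, 1.5     # movement amplitude, initial width (pixels), shrink constant (s)
      for t_capture in (0.3, 0.6, 0.9):
          w = width_receding(w0, t_capture, t_half)
          print(f"t = {t_capture:.1f} s: width = {w:.1f} px, ID = {fitts_id(D, w):.2f} bits")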

  15. Action-perception dissociation in response to target acceleration.

    PubMed

    Dubrowski, Adam; Carnahan, Heather

    2002-05-01

    The purpose of this study was to investigate whether information about the acceleration characteristics of a moving target can be used for both action and perception. Also of interest was whether prior movement experience altered perceptual judgements. Participants manually intercepted targets moving with various acceleration, velocity and movement time characteristics. They also made perceptual judgements about the acceleration characteristics of these targets either with or without prior manual interception experience. Results showed that while aiming kinematics were sensitive to the acceleration characteristics of the target, participants were only able to perceptually discriminate the velocity characteristics of target motion, even after performing interceptive actions to the same targets. These results are discussed in terms of a two channel (action-perception) model of visuomotor control.

  16. Acoustic Scattering from Munitions in the Underwater Environment: Measurements and Modeling

    NASA Astrophysics Data System (ADS)

    Williams, K.; Kargl, S. G.; Espana, A.

    2017-12-01

    Acoustical scattering from elastic targets has been a subject of research for several decades. However, the introduction of those targets into the ocean environment brings new complexities to quantitative prediction of that scattering. The goal of our work has been to retain as much of the target physics as possible while also handling the propagation to and from the target in the multi-path ocean environment. Testing of the resulting predictions has been carried out via ocean experiments in which munitions are deployed on and within the sediment. We will present the overall philosophy used in the modeling and then compare model results to measurements. A 60 cm long, 30 cm diameter aluminum cylinder will be used as a canonical example and then a sample of results for a variety of munitions will be shown. Finally, we will discuss the use of both the models and measurements in assessing the ability of sonar to discriminate munitions from other man-made targets. The difficulty of this challenge will be made apparent via results from a recent experiment in which both munitions and man-made "clutter" were deployed on a rippled sand interface.

  17. Kinetic analysis of the effects of target structure on siRNA efficiency

    NASA Astrophysics Data System (ADS)

    Chen, Jiawen; Zhang, Wenbing

    2012-12-01

    RNAi efficiency for target cleavage and protein expression is related to the target structure. Considering the RNA-induced silencing complex (RISC) as a multiple-turnover enzyme, we investigated the effect of target mRNA structure on siRNA efficiency with a kinetic analysis. A 4-step model was used to describe the target cleavage kinetics: hybridization nucleation at an accessible target site, RISC-mRNA hybrid elongation accompanied by melting of the mRNA target structure, target cleavage, and enzyme reactivation. In this model, terms accounting for target accessibility, stability, and the seed and nucleation site effects are all included. The results are in good agreement with those of experiments that have reported differing conclusions about the effects of structure on siRNA efficiency. They show that siRNA efficiency is influenced jointly by target accessibility, stability, and seed effects. To study off-target effects, a simple model of one siRNA binding to two mRNA targets was designed. Using this model, the possibility of diminishing off-target effects through the siRNA concentration was discussed.

  18. Illusory conjunctions of pitch and duration in unfamiliar tone sequences.

    PubMed

    Thompson, W F; Hall, M D; Pressing, J

    2001-02-01

    In 3 experiments, the authors examined short-term memory for pitch and duration in unfamiliar tone sequences. Participants were presented a target sequence consisting of 2 tones (Experiment 1) or 7 tones (Experiments 2 and 3) and then a probe tone. Participants indicated whether the probe tone matched 1 of the target tones in both pitch and duration. Error rates were relatively low if the probe tone matched 1 of the target tones or if it differed from target tones in pitch, duration, or both. Error rates were remarkably high, however, if the probe tone combined the pitch of 1 target tone with the duration of a different target tone. The results suggest that illusory conjunctions of these dimensions frequently occur. A mathematical model is presented that accounts for the relative contribution of pitch errors, duration errors, and illusory conjunctions of pitch and duration.

  19. Identification of the feedforward component in manual control with predictable target signals.

    PubMed

    Drop, Frank M; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus M; Mulder, Max

    2013-12-01

    In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was done in which the relative strength of the target and disturbance signals was systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior can be modeled best by the combined feedforward and feedback model.
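
    The combined feedforward and feedback controller described here is commonly written as

      \[
      u(t) \;=\; H_{\mathrm{ff}}(s)\, f_{t}(t) \;+\; H_{\mathrm{fb}}(s)\, e(t), \qquad e(t) = f_{t}(t) - y(t),
      \]

    with the ideal feedforward path approximating the inverse of the controlled-element dynamics, H_ff(s) ≈ H_c(s)^{-1}, which is the behavior the ARX analysis identified on the target signal. The symbols used here (u for the control output, f_t for the target, y for the system output, H_c for the controlled element) follow the manual-control literature and are not quoted from the paper.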

  20. The roles of shared vs. distinctive conceptual features in lexical access

    PubMed Central

    Vieth, Harrison E.; McMahon, Katie L.; de Zubicaray, Greig I.

    2014-01-01

    Contemporary models of spoken word production assume conceptual feature sharing determines the speed with which objects are named in categorically-related contexts. However, statistical models of concept representation have also identified a role for feature distinctiveness, i.e., features that identify a single concept and serve to distinguish it quickly from other similar concepts. In three experiments we investigated whether distinctive features might explain reports of counter-intuitive semantic facilitation effects in the picture word interference (PWI) paradigm. In Experiment 1, categorically-related distractors matched in terms of semantic similarity ratings (e.g., zebra and pony) and manipulated with respect to feature distinctiveness (e.g., a zebra has stripes unlike other equine species) elicited interference effects of comparable magnitude. Experiments 2 and 3 investigated the role of feature distinctiveness with respect to reports of facilitated naming with part-whole distractor-target relations (e.g., a hump is a distinguishing part of a CAMEL, whereas knee is not, vs. an unrelated part such as plug). Related part distractors did not influence target picture naming latencies significantly when the part denoted by the related distractor was not visible in the target picture (whether distinctive or not; Experiment 2). When the part denoted by the related distractor was visible in the target picture, non-distinctive part distractors slowed target naming significantly at SOA of −150 ms (Experiment 3). Thus, our results show that semantic interference does occur for part-whole distractor-target relations in PWI, but only when distractors denote features shared with the target and other category exemplars. We discuss the implications of these results for some recently developed, novel accounts of lexical access in spoken word production. PMID:25278914

  1. [Model and analysis of spectropolarimetric BRDF of painted target based on GA-LM method].

    PubMed

    Chen, Chao; Zhao, Yong-Qiang; Luo, Li; Pan, Quan; Cheng, Yong-Mei; Wang, Kai

    2010-03-01

    Microfacet-based models were used to describe the spectropolarimetric BRDF (bidirectional reflectance distribution function) with experimental data. The spectropolarimetric BRDF values of targets were measured by comparison to a standard whiteboard, which was treated as Lambertian with a uniform reflectance of up to 98% at arbitrary viewing angles. The relationships between the measured spectropolarimetric BRDF values and the viewing angles, as well as wavelengths in the range 400-720 nm, were then analyzed in detail. The initial values required by the LM optimization method were difficult to obtain and strongly affected the results. Therefore, an optimization approach combining a genetic algorithm (GA) and Levenberg-Marquardt (LM) was used to retrieve the parameters of the nonlinear models, with the initial values obtained from the GA. Simulated experiments were used to test the efficiency of the adopted optimization method and confirmed that it performs well and retrieves the nonlinear model parameters efficiently. The correctness of the models was validated with real outdoor sampled data. The parameter retrieved from the DoP model is the refractive index of the measured targets. The refractive index of targets painted the same color but made of different materials was also obtained. It is concluded that the refractive indices of these two targets are very close, and that the slight difference can be attributed to differences in the condition of the painted surfaces rather than to the target materials.
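
    A sketch of the two-stage fit described here, with scipy's differential evolution standing in for the authors' genetic algorithm and a placeholder three-parameter function standing in for the microfacet polarimetric BRDF model (neither is taken from the paper):

      import numpy as np
      from scipy.optimize import differential_evolution, least_squares

      def model(angles, params):
          a, b, c = params                     # placeholder 3-parameter nonlinear model
          return a * np.exp(-b * angles) + c

      def residuals(params, angles, measured):
          return model(angles, params) - measured

      # Synthetic "measurements" for the sketch.
      angles = np.linspace(0.1, 1.2, 30)
      measured = model(angles, (0.8, 2.5, 0.05)) + 0.01 * np.random.randn(angles.size)

      # Stage 1: global evolutionary search supplies the initial guess.
      bounds = [(0, 2), (0, 10), (-1, 1)]
      coarse = differential_evolution(lambda p: np.sum(residuals(p, angles, measured) ** 2), bounds)

      # Stage 2: Levenberg-Marquardt refines from that initial guess.
      fine = least_squares(residuals, x0=coarse.x, args=(angles, measured), method="lm")
      print(coarse.x, fine.x)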

  2. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

    Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), calls for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser is dependent on their slowing and stopping in the dense material and their need to draw a return current. These effects will heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target will begin to play a role as the pressure generated from isochoric heating begins to take effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle in cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, elucidate on the advantages of a robustly integrated code suite compared to a single code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.

  3. Display size effects in visual search: analyses of reaction time distributions as mixtures.

    PubMed

    Reynolds, Ann; Miller, Jeff

    2009-05-01

    In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
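
    The generic shape of such a likelihood-ratio test, fitting a single distribution against a two-component mixture in which only a proportion of trials is shifted, can be sketched as below. This illustrates the approach, not the authors' exact statistic, and the chi-square reference distribution is only approximate at a mixture boundary.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def nll_single(theta, x):
          mu, sig = theta
          return -np.sum(norm.logpdf(x, mu, abs(sig)))

      def nll_mixture(theta, x):
          mu, sig, shift, p = theta
          p = min(max(p, 1e-6), 1 - 1e-6)
          dens = (1 - p) * norm.pdf(x, mu, abs(sig)) + p * norm.pdf(x, mu + shift, abs(sig))
          return -np.sum(np.log(dens))

      # Synthetic log-RTs: most trials unaffected, a proportion slowed by a constant factor.
      rng = np.random.default_rng(0)
      x = np.log(np.concatenate([rng.lognormal(6.3, 0.2, 300),
                                 rng.lognormal(6.3, 0.2, 100) * 1.4]))

      fit0 = minimize(nll_single, x0=[6.3, 0.2], args=(x,))
      fit1 = minimize(nll_mixture, x0=[6.3, 0.2, 0.3, 0.25], args=(x,))
      lr = 2 * (fit0.fun - fit1.fun)           # likelihood-ratio statistic for mixture vs. single
      print(f"likelihood ratio statistic = {lr:.1f}")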

  4. An Experiment Quantifying The Effect Of Clutter On Target Detection

    NASA Astrophysics Data System (ADS)

    Weathersby, Marshall R.; Schmieder, David E.

    1985-01-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal-to-clutter ratios (SCRs). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
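
    The clutter definition quoted above translates directly into a few lines of code; the synthetic background and the mean-versus-RMS choice noted in the comment are the only assumptions added here.

      import numpy as np

      # Scene clutter and signal-to-clutter ratio following the definition in the abstract:
      # split the scene into contiguous cells about the size of the target, take the standard
      # deviation of each cell, and average over the scene.  (A closely related published metric
      # uses the RMS of the cell standard deviations; swap np.mean for an RMS if preferred.)
      def clutter(scene, cell):
          rows, cols = scene.shape
          stds = [scene[r:r + cell, c:c + cell].std()
                  for r in range(0, rows - cell + 1, cell)
                  for c in range(0, cols - cell + 1, cell)]
          return float(np.mean(stds))

      def scr(target_contrast, scene, cell):
          return target_contrast / clutter(scene, cell)

      rng = np.random.default_rng(1)
      scene = rng.normal(100.0, 8.0, (256, 256))       # synthetic background, illustrative only
      print(scr(target_contrast=24.0, scene=scene, cell=16))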

  5. Leading, but not trailing, primes influence temporal order perception: further evidence for an attentional account of perceptual latency priming.

    PubMed

    Scharlau, Ingrid

    2002-11-01

    Presenting a masked prime leading a target influences the perceived onset of the masking target (perceptual latency priming; Scharlau & Neumann, in press). This priming effect is explained by the asynchronous updating model (Neumann, 1982; Scharlau & Neumann, in press): The prime initiates attentional allocation toward its location, which renders a trailing target at the same place consciously available earlier. In three experiments, this perceptual latency priming by leading primes was examined jointly with the effects of trailing primes in order to compare the explanation of the asynchronous updating model with the onset-averaging and the P-center hypotheses. Experiment 1 showed that an attended, as well as an unattended, prime leads to perceptual latency priming. In addition, a large effect of trailing primes on the onset of a target was found. As Experiment 2 demonstrated, this effect is quite robust, although smaller than that of a leading prime. In Experiment 3, masked primes were used. Under these conditions, no influence of trailing primes could be found, whereas perceptual latency priming persisted. Thus, a nonattentional explanation for the effect of trailing primes seems likely.

  6. When Sufficiently Processed, Semantically Related Distractor Pictures Hamper Picture Naming.

    PubMed

    Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M; Jescheniak, Jörg D

    2016-11-01

    Prominent speech production models view lexical access as a competitive process. According to these models, a semantically related distractor picture should interfere with target picture naming more strongly than an unrelated one. However, several studies failed to obtain such an effect. Here, we demonstrate that semantic interference is obtained, when the distractor picture is sufficiently processed. Participants named one of two pictures presented in close temporal succession, with color cueing the target. Experiment 1 induced the prediction that the target appears first. When this prediction was violated (distractor first), semantic interference was observed. Experiment 2 ruled out that the time available for distractor processing was the driving force. These results show that semantically related distractor pictures interfere with the naming response when they are sufficiently processed. The data thus provide further support for models viewing lexical access as a competitive process.

  7. Recent CCQE results from MINERvA

    NASA Astrophysics Data System (ADS)

    Ghosh, Anushree; Minerva Collaboration

    2017-01-01

    The MINERvA detector, situated at Fermilab, is designed to make precision cross section measurements for neutrino scattering processes on various nuclei. I will present the two most recent results from the MINERvA charged-current quasi-elastic (CCQE) studies. The event samples for both analyses have a CCQE-like final-state topology and contain contributions from quasi-elastic and inelastic processes in which pions are absorbed in the nucleus. One of the analyses presents the MINERvA experiment's first double-differential scattering cross sections for antineutrinos on the hydrocarbon target in the few-GeV range relevant to experiments such as DUNE and NOvA. We compare to predictions from different model generators and are able to draw first conclusions about these models. The other analysis is the CCQE-like analysis for neutrinos on the nuclear targets of carbon, iron, and lead. The ratios of the differential cross sections on these targets to the differential cross section on the hydrocarbon target are examined to study nuclear effects.

  8. Oblique impacts into low impedance layers

    NASA Astrophysics Data System (ADS)

    Stickle, A. M.; Schultz, P. H.

    2009-12-01

    Planetary impacts occur indiscriminately, in all locations and materials. Varied geologic settings can have significant effects on the impact process, including the coupling between the projectile and target, the final damage patterns and modes of deformation that occur. For example, marine impact craters are not identical to impacts directly into bedrock or into sedimentary materials, though many of the same fundamental processes occur. It is therefore important, especially when considering terrestrial impacts, to understand how a low impedance sedimentary layer over bedrock affects the deformation process during and after a hypervelocity impact. As a first step, detailed comparisons between impacts and hydrocode models were performed. Oblique impact experiments into polymethylmethacrylate (PMMA) targets with low impedance layers were performed at the NASA Ames Vertical Gun Range and compared to experiments with targets without low impedance layers, as well as to hydrocode models under identical conditions. Impact velocities ranged from 5 km/s to 5.6 km/s, with trajectories from 30 degrees to 90 degrees above the horizontal. High-speed imaging provided documentation of the sequence and location of failure due to impact, which was compared to theoretical models. Plasticine and ice were used to construct the low impedance layers. The combination of experiments and models reveals the modes of failure due to a hypervelocity impact. How such failure is manifested at large scales can present a challenge for hydrocodes. CTH models tend to overestimate the amount of damage occurring within the targets and have difficulties perfectly reproducing morphologies; nevertheless, they provide significant and useful information about the failure modes and style within the material. CTH models corresponding to the experiments allow interpretation of the underlying processes involved as well as provide a benchmark for the experimental analysis. First, the transparency of PMMA allows a clear view of failure patterns within the target, providing a 3D picture of the final damage, as well as damage formation and propagation. Second, PMMA has mechanical properties similar to those of brittle rocks in the upper crust, making it an appropriate material for comparison to geologic materials. An impact into a PMMA target with a one-projectile-diameter thick plasticine layer causes damage distinct from an impact into a PMMA target without a low impedance layer. The extent of the final damage is much less in the target with the low impedance layer and begins to form at later times, there is little to no crater visible on the surface, and the formation and propagation of the damage is completely different, creating distinct subsurface damage patterns. Three-dimensional CTH hydrocode models show that the pressure history of material around and underneath the impact point is also different when a low impedance layer is present, leading to the variations in damage forming within the targets.

  9. The effect of spatial organization of targets and distractors on the capacity to selectively memorize objects in visual short-term memory.

    PubMed

    Abbes, Aymen Ben; Gavault, Emmanuelle; Ripoll, Thierry

    2014-01-01

    We conducted a series of experiments to explore how the spatial configuration of objects influences the selection and the processing of these objects in a visual short-term memory task. We designed a new experiment in which participants had to memorize 4 targets presented among 4 distractors. Targets were cued during the presentation of distractor objects. Their locations varied according to 4 spatial configurations. From the first to the last configuration, the distance between targets' locations was progressively increased. The results revealed a high capacity to select and memorize targets embedded among distractors even when targets were extremely distant from each other. This capacity is discussed in relation to the unitary conception of attention, models of split attention, and the competitive interaction model. Finally, we propose that the spatial dispersion of objects has different effects on attentional allocation and processing stages. Thus, when targets are extremely distant from each other, attentional allocation becomes more difficult while processing becomes easier. This finding implies that these 2 aspects of attention need to be more clearly distinguished in future research.

  10. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation on the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against human visual search behavior, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing eye-tracking data to identify targets.

  11. Enhancing the Effectiveness of Carbon Dioxide Flooding by Managing Asphaltene Precipitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deo, Milind D.

    2002-02-21

    This project was undertaken to understand fundamental aspects of carbon dioxide (CO2) induced asphaltene precipitation. Oil and asphaltene samples from the Rangely field in Colorado were used for most of the project. The project consisted of pure component and high-pressure, thermodynamic experiments, thermodynamic modeling, kinetic experiments and modeling, targeted corefloods and compositional modeling.

  12. Drug accumulation by means of noninvasive magnetic drug delivery system

    NASA Astrophysics Data System (ADS)

    Chuzawa, M.; Mishima, F.; Akiyama, Y.; Nishijima, S.

    2011-11-01

    Medication is one of the most general treatment methods, but drugs diffuse into normal tissues other than the target site via the blood circulation. Therefore, side effects of medication, particularly for drugs with strong effects such as anti-cancer agents, are a serious issue. A drug delivery system (DDS), which accumulates the drug locally in the human body, is one technique for addressing these side effects. A magnetic drug delivery system (MDDS) is an active DDS that uses magnetic force. The objective of this study is to accumulate ferromagnetic drugs noninvasively in deep parts of the body by using MDDS. It is necessary to generate a high magnetic field and field gradient at the target site to reduce side effects in healthy tissues. A biomimetic model was constructed, consisting of multiple model organs connected by a branching blood vessel model. The arrangement of the magnetic field was examined to accumulate ferromagnetic drug particles in the target model organ using a superconducting bulk magnet, which can generate high magnetic fields. The magnet arrangement was designed to generate a high and stable magnetic field at the target model organ, and an accumulation experiment with ferromagnetic particles was conducted. In this study, rotating the HTS bulk magnet around the axis of the blood vessels, centered on the target site, was proposed, and a model experiment with magnet rotation was conducted. As a result, accumulation of the ferromagnetic particles in the deep target model organ was confirmed.

  13. Monte-Carlo Geant4 numerical simulation of experiments at 247-MeV proton microscope

    NASA Astrophysics Data System (ADS)

    Kantsyrev, A. V.; Skoblyakov, A. V.; Bogdanov, A. V.; Golubev, A. A.; Shilkin, N. S.; Yuriev, D. S.; Mintsev, V. B.

    2018-01-01

    A radiographic facility for the investigation of fast dynamic processes with target areal densities up to 5 g/cm² is under development on the basis of the high-current proton linear accelerator at the Institute for Nuclear Research (Troitsk, Russia). A virtual model of the proton microscope developed in the Geant4 software toolkit is presented in the article. A full-scale Monte Carlo numerical simulation of static radiographic experiments at a proton beam energy of 247 MeV was performed. The results of simulating proton radiography experiments with a static model of shock-compressed xenon are presented. The results of visualizing static copper and polymethyl methacrylate step-wedge targets are also described.

  14. Testing light dark matter coannihilation with fixed-target experiments

    DOE PAGES

    Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan; ...

    2017-09-01

    In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.

  15. Testing light dark matter coannihilation with fixed-target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan

    In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.

  16. Testing light dark matter coannihilation with fixed-target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan

    In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and down-scattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.

  17. Performance of a Liner-on-Target Injector for Staged Z-Pinch Experiments

    NASA Astrophysics Data System (ADS)

    Conti, F.; Valenzuela, J. C.; Narkis, J.; Krasheninnikov, I.; Beg, F.; Wessel, F. J.; Ruskov, E.; Rahman, H. U.; McGee, E.

    2016-10-01

    We present the design and characterization of a compact liner-on-target injector, used in the Staged Z-pinch experiments conducted on the UNR-NTF Zebra Facility. Previous experiments and analysis indicate that high-Z gas liners produce a uniform and efficient implosion on a low-Z target plasma. The liner gas shell is produced by an annular solenoid valve and a converging-diverging nozzle designed to achieve a collimated, supersonic, Mach-5 flow. The on-axis target is produced by a coaxial plasma gun, where a high voltage pulse is applied to ionize neutral gas and accelerate the plasma by the J-> × B-> force. Measurements of the liner and target dynamics, resolved by interferometry in space and time, fast imaging, and collection of the emitted light, are presented. The results are compared to the predictions from Computational Fluid Dynamics and MHD simulations that model the injector. Optimization of the design parameters, for upcoming Staged Z-pinch experiments, will be discussed. Advanced Research Projects Agency - Energy, DE-AR0000569.

  18. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model is often employed to build the background model in background-subtraction methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model: according to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability, and it continues to perform well even in special cases such as large gray-scale changes.
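    A minimal sketch of this general approach is shown below using OpenCV's MOG2 subtractor, which likewise adapts the number of Gaussian components per pixel, followed by a morphological clean-up step; the file name and thresholds are placeholders, and this is not the authors' implementation.

    ```python
    # Adaptive mixture-of-Gaussians background subtraction with shadow removal
    # and morphological opening, as a generic stand-in for the method described.
    import cv2

    cap = cv2.VideoCapture("traffic.avi")          # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)             # 255 = foreground, 127 = shadow
        mask[mask == 127] = 0                      # drop detected shadows
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckles
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        targets = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    cap.release()
    ```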

  19. Modeling protein complexes with BiGGER.

    PubMed

    Krippahl, Ludwig; Moura, José J; Palma, P Nuno

    2003-07-01

    This article describes the method and results of our participation in the Critical Assessment of PRediction of Interactions (CAPRI) experiment, using the protein docking program BiGGER (Bimolecular complex Generation with Global Evaluation and Ranking) (Palma et al., Proteins 2000;39:372-384). Of five target complexes (CAPRI targets 2, 4, 5, 6, and 7), only one was successfully predicted (target 6), but BiGGER generated reasonable models for targets 4, 5, and 7, which could have been identified if additional biochemical information had been available. Copyright 2003 Wiley-Liss, Inc.

  20. Risk, Need, and Responsivity (RNR): It All Depends

    ERIC Educational Resources Information Center

    Taxman, Faye S.; Thanner, Meridith

    2006-01-01

    Target populations have always been a thorny issue for correctional programs. In this experiment on seamless treatment for probationers at two sites, offenders were randomly assigned either to the seamless model (drug treatment incorporated into probation supervision) or to a traditional model of referral to services in the community. The experiment blocked on…

  1. Modeling Hohlraum-Based Laser Plasma Instability Experiments

    NASA Astrophysics Data System (ADS)

    Meezan, N. B.

    2005-10-01

    Laser fusion targets must control laser-plasma instabilities (LPI) in order to perform as designed. We present analyses of recent hohlraum LPI experiments from the Omega laser facility. The targets, gold hohlraums filled with gas or SiO2 foam, are preheated by several 3ω beams before an interaction beam (2ω or 3ω) is fired along the hohlraum axis. The experiments are simulated in 2-D and 3-D using the code HYDRA. The choice of electron thermal conduction model in HYDRA strongly affects the simulated plasma conditions. This work is part of a larger effort to systematically explore the usefulness of linear gain as a design tool for fusion targets. We find that the measured Raman and Brillouin backscatter scale monotonically with the peak linear gain calculated for the target; however, linear gain is not sufficient to explain all trends in the data. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-ENG-48.

  2. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.

  3. An infrastructure for accurate characterization of single-event transients in digital circuits☆

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694

  4. 3-D Modeling of Planar Target-Mount Perturbation Experiments on OMEGA

    NASA Astrophysics Data System (ADS)

    Collins, T. J. B.; Marshall, F. J.; Marozas, J. A.; Bonino, M. J.; Forties, R.; Goncharov, V. N.; Igumenshchev, I. V.; McKenty, P. W.; Smalyuk, V. A.

    2008-11-01

    OMEGA cryogenic targets are suspended in the target chamber using four spider silks attached to a C-shaped mount. The spider silks are typically composed of two entwined protein strands comparable to 1 μm in diameter. The silks and mount refract the incident laser light and cast shadows on the target surface. Experiments to measure the effects of the silks on target illumination have been performed in planar geometry using silks suspended parallel to a 20-μm-thick laser-driven target. The evolution of the surface perturbations introduced by the silks was measured using x-ray backlighting. The results of these experiments will be compared to simulations performed with DRACO, employing three-dimensional (3-D) planar hydrodynamics and a new 3-D refractive ray-trace package written specifically for this geometry. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.

  5. An investigation of phonology and orthography in spoken-word recognition.

    PubMed

    Slowiaczek, Louisa M; Soltano, Emily G; Wieting, Shani J; Bishop, Karyn L

    2003-02-01

    The possible influence of initial phonological and/or orthographic information on spoken-word processing was examined in six experiments modelled after and extending the work of Jakimik, Cole, and Rudnicky (1985). Following Jakimik et al., Experiment 1 used polysyllabic primes with monosyllabic targets (e.g., BUCKLE-BUCK; MYSTERY-MISS). Experiments 2, 3, and 4 used polysyllabic primes and polysyllabic targets whose initial syllables shared phonological information (e.g., NUISANCE-NOODLE), orthographic information (e.g., RATIO-RATIFY), both (e.g., FUNNEL-FUNNY), or were unrelated (e.g., SERMON-NOODLE). Participants engaged in a lexical decision (Experiments 1, 3, and 4) or a shadowing (Experiment 2) task with a single-trial (Experiments 2 and 3) or subsequent-trial (Experiments 1 and 4) priming procedure. Experiment 5 tested primes and targets that varied in the number of shared graphemes while holding shared phonemes constant at one. Experiment 6 used the procedures of Experiment 2 but a low proportion of related trials. Results revealed that response times were facilitated for prime-target pairs that shared initial phonological and orthographic information. These results were confirmed under conditions in which strategic processing was greatly reduced, suggesting that phonological and orthographic information is automatically activated during spoken-word processing.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporates initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limited fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.
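    As a rough illustration of the kind of quantity such thermal safety analyses bound, the snippet below evaluates the classical steady-state conduction result for the centerline temperature of a cylindrical pellet with uniform volumetric heating; all numbers are placeholders, not values from the report.

    ```python
    # Centerline temperature of a cylindrical pellet with uniform volumetric heat
    # generation and steady radial conduction: T_c = T_s + q''' R^2 / (4 k).
    q_vol = 5.0e8      # W/m^3, assumed volumetric heat generation in the pellet
    R = 2.5e-3         # m, assumed pellet radius
    k = 15.0           # W/(m K), assumed effective pellet conductivity
    T_surface = 400.0  # K, pellet surface temperature set by cladding/coolant

    T_center = T_surface + q_vol * R**2 / (4.0 * k)
    print(f"estimated centerline temperature: {T_center:.0f} K")
    ```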

  7. Competitive hybridization models

    NASA Astrophysics Data System (ADS)

    Cherepinsky, Vera; Hashmi, Ghazala; Mishra, Bud

    2010-11-01

    Microarray technology, in its simplest form, allows one to gather abundance data for target DNA molecules, associated with genomes or gene-expressions, and relies on hybridizing the target to many short probe oligonucleotides arrayed on a surface. While for such multiplexed reactions conditions are optimized to make the most of each individual probe-target interaction, subsequent analysis of these experiments is based on the implicit assumption that a given experiment yields the same result regardless of whether it was conducted in isolation or in parallel with many others. It has been discussed in the literature that this assumption is frequently false, and its validity depends on the types of probes and their interactions with each other. We present a detailed physical model of hybridization as a means of understanding probe interactions in a multiplexed reaction. Ultimately, the model can be derived from a system of ordinary differential equations (ODEs) describing kinetic mass action, with conservation-of-mass equations completing the system. We examine pairwise probe interactions in detail and present a model of “competition” between the probes for the target—especially when the target is effectively in short supply. These effects are shown to be predictable from the affinity constants for each of the four probe sequences involved, namely, the match and mismatch sequences for both probes. These affinity constants are calculated from thermodynamic parameters such as the free energy of hybridization, which are in turn computed according to the nearest neighbor (NN) model for each probe and target sequence. Simulations based on the competitive hybridization model explain the observed variability in the signal of a given probe when measured in parallel with different groupings of other probes or individually. The results of the simulations can be used for experiment design and pooling strategies, based on which probes have been shown to have a strong effect on each other’s signal in the in silico experiment. These results are aimed at better design of multiplexed reactions on arrays used in genotyping (e.g., HLA typing, SNP, or CNV detection, etc.) and mutation analysis (e.g., cystic fibrosis, cancer, autism, etc.).
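    The sketch below illustrates the kind of mass-action ODE system described, with two probes competing for a single target in short supply; rate constants and concentrations are invented for the example and are not the paper's values.

    ```python
    # Two surface probes compete for one limiting target under mass-action kinetics;
    # conservation of mass fixes the free species at every time step.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_on = np.array([1e5, 1e5])     # association rates (1/(M*s)) for probes 1, 2
    k_off = np.array([1e-3, 1e-2])  # dissociation rates (1/s); probe 1 binds tighter
    P_tot = np.array([1e-9, 1e-9])  # total probe concentrations (M)
    T_tot = 5e-10                   # total target concentration (M), limiting

    def rhs(t, c):
        c1, c2 = c                                   # bound duplex concentrations
        T_free = T_tot - c1 - c2                     # conservation of target
        P_free = P_tot - c                           # conservation of each probe
        return k_on * P_free * T_free - k_off * c

    sol = solve_ivp(rhs, (0.0, 3600.0), [0.0, 0.0])
    c1_eq, c2_eq = sol.y[:, -1]
    print(f"bound fraction, probe 1: {c1_eq / P_tot[0]:.2f}")
    print(f"bound fraction, probe 2: {c2_eq / P_tot[1]:.2f}")
    # Rerunning with only one probe present gives that probe a higher bound
    # fraction, which is the "competition" effect the model is meant to capture.
    ```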

  8. Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.

    PubMed

    Zhang, Wen; Chen, Yanlin; Li, Dingfang

    2017-11-25

    Interactions between drugs and target proteins provide important information for drug discovery, yet experiments have so far identified only a small number of drug-target interactions. The development of computational methods for drug-target interaction prediction is therefore an urgent task of both theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature space by considering how each data point is reconstructed from its neighbors. We then take these similarities as the manifold of drugs and assume the manifold is unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets using the drug-drug linear neighborhood similarity and the known drug-target interactions. Experiments show that LPLNI can use only the known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, incorporating chemical structures into the model (LPLNI-II) produces further improvements, outperforming other state-of-the-art methods. The usefulness of the proposed method is demonstrated by cross-validation and a case study.
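    A compact sketch of the two steps named above is given below: linear-neighborhood reconstruction weights computed per drug, followed by iterative label propagation; array shapes, the neighborhood size, and the propagation constant are arbitrary choices for illustration.

    ```python
    # (1) Reconstruct each drug from its k nearest neighbors to get similarity
    # weights W; (2) propagate known interaction labels Y through W.
    import numpy as np

    def linear_neighborhood_similarity(X, k=5, reg=1e-6):
        """X: (n_drugs, n_features). Returns a row-stochastic similarity matrix W."""
        n = X.shape[0]
        W = np.zeros((n, n))
        dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        for i in range(n):
            nbrs = np.argsort(dists[i])[1:k + 1]          # skip the point itself
            Z = X[nbrs] - X[i]                            # neighbors in local coordinates
            G = Z @ Z.T + reg * np.eye(k)                 # regularized local Gram matrix
            w = np.linalg.solve(G, np.ones(k))            # reconstruction weights
            W[i, nbrs] = w / w.sum()
        return W

    def label_propagation(W, Y, alpha=0.9, n_iter=100):
        """Y: (n_drugs, n_targets) binary known interactions; returns score matrix F."""
        F = Y.astype(float).copy()
        for _ in range(n_iter):
            F = alpha * W @ F + (1 - alpha) * Y           # iterative propagation
        return F

    # Purely illustrative example with random data:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))                         # drug feature vectors
    Y = (rng.random((50, 30)) < 0.05).astype(float)       # sparse known interactions
    scores = label_propagation(linear_neighborhood_similarity(X), Y)
    ```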

  9. Implosion of multilayered cylindrical targets driven by intense heavy ion beams.

    PubMed

    Piriz, A R; Portugues, R F; Tahir, N A; Hoffmann, D H H

    2002-11-01

    An analytical model for the implosion of a multilayered cylindrical target driven by an intense heavy ion beam has been developed. The target is composed of a cylinder of frozen hydrogen or deuterium, which is enclosed in a thick shell of solid lead. This target has been designed for future high-energy-density matter experiments to be carried out at the Gesellschaft für Schwerionenforschung, Darmstadt. The model describes the implosion dynamics including the motion of the incident shock and the first reflected shock and allows for calculation of the physical conditions of the hydrogen at stagnation. The model predicts that the conditions of the compressed hydrogen are not sensitive to significant variations in target and beam parameters. These predictions are confirmed by one-dimensional numerical simulations and thus allow for a robust target design.

  10. "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain": Correction.

    PubMed

    2018-04-01

    Reports an error in "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain" by C. Daryl Cameron, Victoria L. Spring and Andrew R. Todd ( Emotion , 2017[Apr], Vol 17[3], 395-411). In this article, there was an error in the calculation of some of the effect sizes. The w effect size was manually computed incorrectly. The incorrect number of total observations was used, which affected the final effect size estimates. This computing error does not change any of the results or interpretations about model fit based on the G² statistic, or about significant differences across conditions in process parameters. Therefore, it does not change any of the hypothesis tests or conclusions. The w statistics for overall model fit should be .02 instead of .04 in Study 1, .01 instead of .02 in Study 2, .01 instead of .03 for the OIT in Study 3 (model fit for the PIT remains the same: .00), and .02 instead of .03 in Study 4. The corrected tables can be seen here: http://osf.io/qebku at the Open Science Framework site for the article. (The following abstract of the original article appeared in record 2017-01641-001.) Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. imagine-other) perspective-taking uniquely increased Unintentional Empathy. In Experiment 3, Intentional and Unintentional Empathy were stronger toward targets with typical (vs. atypical) pain outcomes, suggesting that outcome information matters and that effects on the PIT are not reducible to affective priming. Typicality of pain outcomes more weakly affected task performance when target stimuli were merely categorized rather than judged for painfulness, suggesting that effects on the latter are not reducible to semantic priming. In Experiment 4, Unintentional Empathy was stronger for participants who engaged in costly donation to cancer charities, but this parameter was also high for those who donated to an objectively worse but socially more popular charity, suggesting that overly high empathy may facilitate maladaptive altruism. Theoretical and practical applications of our modeling approach for understanding variation in empathy are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Shielded Heavy-Ion Environment Linear Detector (SHIELD): an experiment for the Radiation and Technology Demonstration (RTD) Mission.

    PubMed

    Shavers, M R; Cucinotta, F A; Miller, J; Zeitlin, C; Heilbronn, L; Wilson, J W; Singleterry, R C

    2001-01-01

    Radiological assessment of the many cosmic ion species of widely distributed energies requires the use of theoretical transport models to accurately describe diverse physical processes related to nuclear reactions in spacecraft structures, planetary atmospheres and surfaces, and tissues. Heavy-ion transport models that were designed to characterize shielded radiation fields have been validated through comparison with data from thick-target irradiation experiments at particle accelerators. With the RTD Mission comes a unique opportunity to validate existing radiation transport models and guide the development of tools for shield design. For the first time, transport properties will be measured in free space to characterize the shielding effectiveness of materials that are likely to be aboard interplanetary space missions. Target materials composed of aluminum, advanced composite spacecraft structure and other shielding materials, helium (a propellant) and tissue equivalent matrices will be evaluated. Large solid state detectors will provide kinetic energy and charge identification for incident heavy-ions and for secondary ions created in the target material. Transport calculations using the HZETRN model suggest that 8 g/cm2 thick targets would be adequate to evaluate the shielding effectiveness during solar minimum activity conditions for a period of 30 days or more.

  12. Integrated framework for developing search and discrimination metrics

    NASA Astrophysics Data System (ADS)

    Copeland, Anthony C.; Trivedi, Mohan M.

    1997-06-01

    This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.

  13. An analytical approach of thermodynamic behavior in a gas target system on a medical cyclotron.

    PubMed

    Jahangiri, Pouyan; Zacchia, Nicholas A; Buckley, Ken; Bénard, François; Schaffer, Paul; Martinez, D Mark; Hoehr, Cornelia

    2016-01-01

    An analytical model has been developed to study the thermo-mechanical behavior of gas targets used to produce medical isotopes, assuming that the system reaches steady-state. It is based on an integral analysis of the mass and energy balance of the gas-target system, the ideal gas law, and the deformation of the foil. The heat transfer coefficients for different target bodies and gases have been calculated. Excellent agreement is observed between experiments performed at TRIUMF's 13 MeV cyclotron and the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
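    A back-of-envelope version of such a steady-state balance is sketched below: deposited beam power is equated to convective loss to the target body, and the constant-volume ideal gas law then gives the operating pressure; all parameter values are placeholders rather than the paper's.

    ```python
    # Steady-state gas-target energy balance and ideal-gas pressure rise, as a
    # generic illustration only (not the paper's actual model).
    P_beam = 500.0          # W, beam power deposited in the gas
    h = 250.0               # W/(m^2 K), assumed effective heat transfer coefficient
    A = 4.0e-3              # m^2, internal wall area of the target chamber
    T_wall = 300.0          # K, cooled target-body temperature
    P0, T0 = 2.0e6, 300.0   # Pa, K: fill pressure and temperature before irradiation

    # Steady state: power in = convective loss to the walls.
    T_gas = T_wall + P_beam / (h * A)

    # Constant-volume ideal gas: pressure scales with absolute temperature.
    P_gas = P0 * T_gas / T0
    print(f"mean gas temperature ~ {T_gas:.0f} K, operating pressure ~ {P_gas/1e6:.2f} MPa")
    ```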

  14. Modelling the effects of the radiation reaction force on the interaction of thin foils with ultra-intense laser fields

    NASA Astrophysics Data System (ADS)

    Duff, M. J.; Capdessus, R.; Del Sorbo, D.; Ridgers, C. P.; King, M.; McKenna, P.

    2018-06-01

    The effects of the radiation reaction (RR) force on thin foils undergoing radiation pressure acceleration (RPA) are investigated. Using QED-particle-in-cell simulations, the influence of the RR force on the collective electron dynamics within the target can be examined. The magnitude of the RR force is found to be strongly dependent on the target thickness, leading to effects which can be observed on a macroscopic scale, such as changes to the distribution of the emitted radiation and the target dynamics. This suggests that such parameters may be controlled in experiments at multi-PW laser facilities. In addition, the effects of the RR force are characterized in terms of an average radiation emission angle. We present an analytical model which, for the first time, describes the effect of the RR force on the collective electron dynamics within the ‘light-sail’ regime of RPA. The predictions of this model can be tested in future experiments with ultra-high intensity lasers interacting with solid targets.

  15. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
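    The toy example below illustrates the basic effect on a small linear system: observing, at each step, the state component with the largest forecast variance versus a randomly chosen component; it is not the LETKF setup used in the paper.

    ```python
    # Linear Kalman filter on a random-walk state; compare targeted vs. random
    # placement of a single observation per step.
    import numpy as np

    rng = np.random.default_rng(1)
    n, steps = 8, 200
    Q, R = 0.05 * np.eye(n), 0.1        # process noise covariance, obs variance

    def run(targeted):
        x_true = np.zeros(n)
        x_est, P = np.zeros(n), np.eye(n)
        err = []
        for _ in range(steps):
            # Forecast: identity dynamics plus process noise.
            x_true = x_true + rng.multivariate_normal(np.zeros(n), Q)
            P = P + Q
            # Choose which single component to observe.
            idx = np.argmax(np.diag(P)) if targeted else rng.integers(n)
            H = np.zeros((1, n)); H[0, idx] = 1.0
            y = x_true[idx] + rng.normal(scale=np.sqrt(R))
            # Kalman update.
            S = H @ P @ H.T + R
            K = P @ H.T / S
            x_est = x_est + (K * (y - x_est[idx])).ravel()
            P = (np.eye(n) - K @ H) @ P
            err.append(np.mean((x_est - x_true) ** 2))
        return np.mean(err)

    print("mean-square error, targeted:", run(True))
    print("mean-square error, random:  ", run(False))
    ```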

  16. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  17. Can representational trajectory reveal the nature of an internal model of gravity?

    PubMed

    De Sá Teixeira, Nuno; Hecht, Heiko

    2014-05-01

    The memory for the vanishing location of a horizontally moving target is usually displaced forward in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, this downward displacement has been shown to increase with time (representational trajectory). However, the degree to which different kinematic events change the temporal profile of these displacements remains to be determined. The present article attempts to fill this gap. In the first experiment, we replicate the finding that representational momentum for downward-moving targets is bigger than for upward motions, showing, moreover, that it increases rapidly during the first 300 ms, stabilizing afterward. This temporal profile, but not the increased error for descending targets, is shown to be disrupted when eye movements are not allowed. In the second experiment, we show that the downward drift with time emerges even for static targets. Finally, in the third experiment, we report an increased error for upward-moving targets, as compared with downward movements, when the display is compatible with a downward ego-motion by including vection cues. Thus, the errors in the direction of gravity are compatible with the perceived event and do not merely reflect a retinotopic bias. Overall, these results provide further evidence for an internal model of gravity in the visual representational system.

  18. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

    We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
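    The sketch below shows the general idea of scoring image patches by their distance from an SVM decision boundary and reshaping the scores into a coarse priority map; the features, patch size, and training data are assumptions for illustration, not TAM's actual implementation.

    ```python
    # Score every image patch with the signed distance from a linear SVM decision
    # boundary and treat the resulting map as evidence for where to fixate next.
    import numpy as np
    from skimage.feature import hog
    from skimage.util import view_as_windows
    from sklearn.svm import LinearSVC

    def patch_features(img, size=64, step=32):
        wins = view_as_windows(img, (size, size), step=step)
        rows, cols = wins.shape[:2]
        feats = [hog(wins[r, c], pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                 for r in range(rows) for c in range(cols)]
        return np.array(feats), (rows, cols)

    def priority_map(img, clf, size=64, step=32):
        feats, shape = patch_features(img, size, step)
        scores = clf.decision_function(feats)          # signed distance to boundary
        return scores.reshape(shape)                   # coarse evidence map

    # Training on labelled target / non-target patch features (assumed available):
    # X_pos, X_neg = ..., ...
    # clf = LinearSVC().fit(np.vstack([X_pos, X_neg]),
    #                       np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))])
    # The next fixation could then be drawn from the peak (or a softmax) of
    # priority_map(search_image, clf).
    ```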

  19. Why are there eccentricity effects in visual search? Visual and attentional hypotheses.

    PubMed

    Wolfe, J M; O'Neill, P; Bennett, S C

    1998-01-01

    In standard visual search experiments, observers search for a target item among distracting items. The locations of target items are generally random within the display and ignored as a factor in data analysis. Previous work has shown that targets presented near fixation are, in fact, found more efficiently than are targets presented at more peripheral locations. This paper proposes that the primary cause of this "eccentricity effect" (Carrasco, Evert, Chang, & Katz, 1995) is an attentional bias that allocates attention preferentially to central items. The first four experiments dealt with the possibility that visual, and not attentional, factors underlie the eccentricity effect. They showed that the eccentricity effect cannot be accounted for by the peripheral reduction in visual sensitivity, peripheral crowding, or cortical magnification. Experiment 5 tested the attention allocation model and also showed that RT x set size effects can be independent of eccentricity effects. Experiment 6 showed that the effective set size in a search task depends, in part, on the eccentricity of the target because observers search from fixation outward.

  20. Modeling of Dense Plasma Effects in Short-Pulse Laser Experiments

    NASA Astrophysics Data System (ADS)

    Walton, Timothy; Golovkin, Igor; Macfarlane, Joseph; Prism Computational Sciences, Madison, WI Team

    2016-10-01

    Warm and Hot Dense Matter produced in short-pulse laser experiments can be studied with new high resolving power x-ray spectrometers. Data interpretation implies accurate modeling of the early-time heating dynamics and the radiation conditions that are generated. Producing synthetic spectra requires a model that describes the major physical processes that occur inside the target, including the hot-electron generation and relaxation phases and the effect of target heating. An important issue concerns the sensitivity of the predicted K-line shifts to the continuum lowering model that is used. We will present a set of PrismSPECT spectroscopic simulations using various continuum lowering models: Hummer/Mihalas, Stewart-Pyatt, and Ecker-Kroll and discuss their effect on the formation of K-shell features. We will also discuss recently implemented models for dense plasma shifts for H-like, He-like and neutral systems.

  1. Docking and scoring protein interactions: CAPRI 2009.

    PubMed

    Lensink, Marc F; Wodak, Shoshana J

    2010-11-15

    Protein docking algorithms are assessed by evaluating blind predictions performed during 2007-2009 in Rounds 13-19 of the community-wide experiment on critical assessment of predicted interactions (CAPRI). We evaluated the ability of these algorithms to sample docking poses and to single out specific association modes in 14 targets, representing 11 distinct protein complexes. These complexes play important biological roles in RNA maturation, G-protein signal processing, and enzyme inhibition and function. One target involved protein-RNA interactions not previously considered in CAPRI, several others were hetero-oligomers, or featured multiple interfaces between the same protein pair. For most targets, predictions started from the experimentally determined structures of the free (unbound) components, or from models built from known structures of related or similar proteins. To succeed they therefore needed to account for conformational changes and model inaccuracies. In total, 64 groups and 12 web-servers submitted docking predictions of which 4420 were evaluated. Overall our assessment reveals that 67% of the groups, more than ever before, produced acceptable models or better for at least one target, with many groups submitting multiple high- and medium-accuracy models for two to six targets. Forty-one groups including four web-servers participated in the scoring experiment with 1296 evaluated models. Scoring predictions also show signs of progress evidenced from the large proportion of correct models submitted. But singling out the best models remains a challenge, which also adversely affects the ability to correctly rank docking models. With the increased interest in translating abstract protein interaction networks into realistic models of protein assemblies, the growing CAPRI community is actively developing more efficient and reliable docking and scoring methods for everyone to use. © 2010 Wiley-Liss, Inc.

  2. Primate social attention: Species differences and effects of individual experience in humans, great apes, and macaques.

    PubMed

    Kano, Fumihiro; Shepherd, Stephen V; Hirata, Satoshi; Call, Josep

    2018-01-01

    When viewing social scenes, humans and nonhuman primates focus on particular features, such as the models' eyes, mouth, and action targets. Previous studies reported that such viewing patterns vary significantly across individuals in humans, and also across closely-related primate species. However, the nature of these individual and species differences remains unclear, particularly among nonhuman primates. In large samples of human and nonhuman primates, we examined species differences and the effects of experience on patterns of gaze toward social movies. Experiment 1 examined the species differences across rhesus macaques, nonhuman apes (bonobos, chimpanzees, and orangutans), and humans while they viewed movies of various animals' species-typical behaviors. We found that each species had distinct viewing patterns of the models' faces, eyes, mouths, and action targets. Experiment 2 tested the effect of individuals' experience on chimpanzee and human viewing patterns. We presented movies depicting natural behaviors of chimpanzees to three groups of chimpanzees (individuals from a zoo, a sanctuary, and a research institute) differing in their early social and physical experiences. We also presented the same movies to human adults and children differing in their expertise with chimpanzees (experts vs. novices) or movie-viewing generally (adults vs. preschoolers). Individuals varied within each species in their patterns of gaze toward models' faces, eyes, mouths, and action targets depending on their unique individual experiences. We thus found that the viewing patterns for social stimuli are both individual- and species-specific in these closely-related primates. Such individual/species-specificities are likely related to both individual experience and species-typical temperament, suggesting that primate individuals acquire their unique attentional biases through both ontogeny and evolution. Such unique attentional biases may help them learn efficiently about their particular social environments.

  3. An interference model of visual working memory.

    PubMed

    Oberauer, Klaus; Lin, Hsuan-Yu

    2017-01-01

    The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing-but not retro-cueing-the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Algorithm research on infrared imaging target extraction based on GAC model

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Fan, Youchen; Wang, Yanqing

    2016-10-01

    Good target detection and tracking techniques are significantly meaningful for increasing infrared target detection distance and enhancing resolution capability. For the target detection problem in infrared imaging, the basic principles of the level set method and the GAC (geodesic active contour) model are first analyzed in detail. Second, a "convergent force" is added to build an improved GAC model, addressing the defect that the standard GAC model stagnates outside deep concave regions and cannot reach deep concave edges. Finally, a self-adaptive detection method combining the Sobel operator with the GAC model is put forward, exploiting the fact that the approximate position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. To verify the effectiveness of the model, two groups of experiments were carried out on images with different levels of noise, and a comparative analysis was conducted with the LBF and LIF models. The experimental results show that the target can be locked well by the LIF and LBF algorithms under slight noise, with segmentation accuracy above 0.8. Under strong noise, however, the target and noise cannot be distinguished by the GAC, LIF, and LBF algorithms, so many non-target parts are extracted during the iterative process and segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the accurate target position, with segmentation accuracy above 0.8.
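    A sketch of this kind of Sobel-plus-GAC combination is given below, using scikit-image's morphological geodesic active contour as a stand-in for the authors' level-set implementation; thresholds, iteration counts, and the balloon setting are arbitrary.

    ```python
    # Rough localization via the Sobel response, then geodesic-active-contour
    # refinement of the target boundary.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def detect_target(ir_image):
        # Rough localization: strongest Sobel edges, dilated into a seed blob.
        edges = sobel(ir_image)
        seed = edges > np.percentile(edges, 99)
        init = ndi.binary_dilation(seed, iterations=5)

        # Edge-stopping function: small near strong gradients, ~1 in flat regions.
        gimage = inverse_gaussian_gradient(ir_image)

        # GAC evolution; a negative balloon force lets the contour shrink onto the edge.
        mask = morphological_geodesic_active_contour(gimage, 200, init_level_set=init,
                                                     smoothing=1, balloon=-1)
        return mask.astype(bool)
    ```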

  5. Grants4Targets - an innovative approach to translate ideas from basic research into novel drugs.

    PubMed

    Lessl, Monika; Schoepe, Stefanie; Sommer, Anette; Schneider, Martin; Asadullah, Khusru

    2011-04-01

    Collaborations between industry and academia are steadily gaining importance. To combine the two sides' expertise, Bayer Healthcare has set up a novel open-innovation approach called Grants4Targets. Ideas on novel drug targets can easily be submitted to http://www.grants4targets.com. After a review process, grants are provided to perform focused experiments that further validate the proposed targets. In addition to financial support, specific know-how on target validation and drug discovery is provided: experienced scientists are nominated as project partners and, depending on the project, tools or specific models are made available. Around 280 applications have been received and 41 projects granted. In our experience, this type of bridging fund, combined with joint efforts, provides a valuable tool for fostering drug discovery collaborations. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. The role of experience in location estimation: Target distributions shift location memory biases.

    PubMed

    Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P

    2010-04-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. Copyright 2009 Elsevier B.V. All rights reserved.

  7. National Centers for Environmental Prediction

    Science.gov Websites


  8. More "C" Please: Commentary on Arch and Craske's (2011) "Addressing Relapse in Cognitive Behavioral Therapy for Panic Disorder"

    ERIC Educational Resources Information Center

    Nadler, Wayne P.

    2012-01-01

    Comments are offered to clarify the learning model proposed by Arch and Craske (2011) based on extensive clinical experience with the CBT model for treating panic disorder developed by Barlow and Craske (1990). Suggestions are made regarding treatment targets and several cases are offered as examples of how choice of treatment target can make a…

  9. Overcoming default categorical bias in spatial memory.

    PubMed

    Sampaio, Cristina; Wang, Ranxiao Frances

    2010-12-01

    In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.

  10. When suppressing one stereotype leads to rebound of another: on the procedural nature of stereotype rebound.

    PubMed

    Geeraert, Nicolas

    2013-09-01

    A known consequence of stereotype suppression is post-suppressional rebound (PSR), an ironic activation of the suppressed stereotype. This is typically explained as an unintended by-product from a dual-process model of mental control. Relying on this model, stereotype rebound is believed to be conceptual. Alternative accounts predict PSR to be featural or procedural. According to the latter account, stereotype rebound would not be limited to the suppressed social category, but could occur for a target from any social category. The occurrence of procedural stereotype rebound was examined across five experiments. Suppression of one particular stereotype consistently led to rebound for social targets belonging to the same or a different stereotype in an essay-writing task (Experiments 1-3) and led to facilitation in recognition of stereotype-consistent words (Experiment 4). Finally, stereotype suppression was shown to impact on assessments of stereotype use but not on heuristic thinking (Experiment 5).

  11. Additive and interactive effects in semantic priming: Isolating lexical and decision processes in the lexical decision task.

    PubMed

    Yap, Melvin J; Balota, David A; Tan, Sarah E

    2013-01-01

    The present study sheds light on the interplay between lexical and decision processes in the lexical decision task by exploring the effects of lexical decision difficulty on semantic priming effects. In 2 experiments, we increased lexical decision difficulty by either using transposed letter wordlike nonword distracters (e.g., JUGDE; Experiment 1) or by visually degrading targets (Experiment 2). Although target latencies were considerably slowed by both difficulty manipulations, stimulus quality-but not nonword type-moderated priming effects, consistent with recent work by Lupker and Pexman (2010). To characterize these results in a more fine-grained manner, data were also analyzed at the level of response time (RT) distributions, using a combination of ex-Gaussian, quantile, and diffusion model analyses. The results indicate that for clear targets, priming was reflected by distributional shifting of comparable magnitude across different nonword types. In contrast, priming of degraded targets was reflected by shifting and an increase in the tail of the distribution. We discuss how these findings, along with others, can be accommodated by an embellished multistage activation model that incorporates retrospective prime retrieval and decision-based mechanisms.
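    The snippet below illustrates the ex-Gaussian summary used in such distributional analyses: a pure shift of the RT distribution appears in the mu parameter, whereas a growing slow tail appears in tau; the data are simulated and the condition means are invented, not the study's results.

    ```python
    # Fit ex-Gaussian parameters (mu, sigma, tau) to simulated response times,
    # using scipy's exponnorm distribution, whose shape parameter K = tau / sigma.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate_rts(mu, sigma, tau, n=400):
        return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

    def exgauss_params(rts):
        K, loc, scale = stats.exponnorm.fit(rts)
        return {"mu": loc, "sigma": scale, "tau": K * scale}

    related_clear   = simulate_rts(550, 50, 120)   # priming as a pure shift
    unrelated_clear = simulate_rts(580, 50, 120)
    related_degr    = simulate_rts(650, 50, 140)   # priming as shift plus tail change
    unrelated_degr  = simulate_rts(680, 50, 200)

    for label, rts in [("related/clear", related_clear), ("unrelated/clear", unrelated_clear),
                       ("related/degraded", related_degr), ("unrelated/degraded", unrelated_degr)]:
        p = exgauss_params(rts)
        print(f"{label:20s} mu={p['mu']:.0f}  sigma={p['sigma']:.0f}  tau={p['tau']:.0f}")
    ```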

  12. Are All Interventions Created Equal? A Multi-Threat Approach to Tailoring Stereotype Threat Interventions

    PubMed Central

    Shapiro, Jenessa R.; Williams, Amy M.; Hambarchyan, Mariam

    2013-01-01

    To date, stereotype threat interventions have been considered interchangeable. Across 4 experiments, the present research demonstrates that stereotype threat interventions need to be tailored to the specific form of experienced stereotype threat to be effective. The Multi-Threat Framework (Shapiro & Neuberg, 2007) distinguishes between group-as-target stereotype threats—concerns that a stereotype-relevant performance will reflect poorly on the abilities of one’s group—and self-as-target stereotype threats—concerns that a stereotype-relevant performance will reflect poorly on one’s own abilities. The present experiments explored Black college students’ performance on diagnostic intelligence tests (Experiments 1 and 3) and women’s interest (Experiment 2) and performance (Experiment 4) in science, technology, engineering, and math (STEM). Across the 4 experiments, participants were randomly assigned to experience either a group-as-target or self-as-target stereotype threat. Experiments 1 and 2 revealed that role model interventions were successful at protecting only against group-as-target stereotype threats, and Experiments 3 and 4 revealed that self-affirmation interventions were successful at protecting only against self-as-target stereotype threats. The present research provides an experimental test of the Multi-Threat Framework across different negatively stereotyped groups (Black students, female students), different negatively stereotyped domains (general intelligence, STEM), and different outcomes (test performance, career interest). This research suggests that interventions should address the range of possible stereotype threats to effectively protect individuals against these threats. Through an appreciation of the distinct forms of stereotype threats and the ways in which interventions work to reduce them, this research aims to facilitate a more complete understanding of stereotype threat. PMID:23088232

  13. Are all interventions created equal? A multi-threat approach to tailoring stereotype threat interventions.

    PubMed

    Shapiro, Jenessa R; Williams, Amy M; Hambarchyan, Mariam

    2013-02-01

    To date, stereotype threat interventions have been considered interchangeable. Across 4 experiments, the present research demonstrates that stereotype threat interventions need to be tailored to the specific form of experienced stereotype threat to be effective. The Multi-Threat Framework (Shapiro & Neuberg, 2007) distinguishes between group-as-target stereotype threats-concerns that a stereotype-relevant performance will reflect poorly on the abilities of one's group-and self-as-target stereotype threats-concerns that a stereotype-relevant performance will reflect poorly on one's own abilities. The present experiments explored Black college students' performance on diagnostic intelligence tests (Experiments 1 and 3) and women's interest (Experiment 2) and performance (Experiment 4) in science, technology, engineering, and math (STEM). Across the 4 experiments, participants were randomly assigned to experience either a group-as-target or self-as-target stereotype threat. Experiments 1 and 2 revealed that role model interventions were successful at protecting only against group-as-target stereotype threats, and Experiments 3 and 4 revealed that self-affirmation interventions were successful at protecting only against self-as-target stereotype threats. The present research provides an experimental test of the Multi-Threat Framework across different negatively stereotyped groups (Black students, female students), different negatively stereotyped domains (general intelligence, STEM), and different outcomes (test performance, career interest). This research suggests that interventions should address the range of possible stereotype threats to effectively protect individuals against these threats. Through an appreciation of the distinct forms of stereotype threats and the ways in which interventions work to reduce them, this research aims to facilitate a more complete understanding of stereotype threat. (c) 2013 APA, all rights reserved.

  14. Experience of targeting subsidies on insecticide-treated nets: what do we know and what are the knowledge gaps?

    PubMed

    Worrall, Eve; Hill, Jenny; Webster, Jayne; Mortimer, Julia

    2005-01-01

    Widespread coverage of vulnerable populations with insecticide-treated nets (ITNs) constitutes an important component of the Roll Back Malaria (RBM) strategy to control malaria. The Abuja Targets call for 60% coverage of children under 5 years of age and pregnant women by 2005; but current coverage in Africa is unacceptably low. The RBM 'Strategic Framework for Coordinated National Action in Scaling-up Insecticide-Treated Netting Programmes in Africa' promotes coordinated national action and advocates sustained public provision of targeted subsidies to maximise public health benefits, alongside support and stimulation of the private sector. Several countries have already planned or initiated targeted subsidy schemes either on a pilot scale or on a national scale, and have valuable experience which can inform future interventions. The WHO RBM 'Workshop on mapping models for delivering ITNs through targeted subsidies' held in Zambia in 2003 provided an opportunity to share and document these country experiences. This paper brings together experiences presented at the workshop with other information on experiences of targeting subsidies on ITNs, net treatment kits and retreatment services (ITN products) in order to describe alternative approaches, highlight their similarities and differences, outline lessons learnt, and identify gaps in knowledge. We find that while there is a growing body of knowledge on different approaches to targeting ITN subsidies, there are significant gaps in knowledge in crucial areas. Key questions regarding how best to target, how much it will cost and what outcomes (levels of coverage) to expect remain unanswered. High quality, well-funded monitoring and evaluation of alternative approaches to targeting ITN subsidies is vital to develop a knowledge base so that countries can design and implement effective strategies to target ITN subsidies.

  15. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions.

    PubMed

    Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco

    2004-04-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior on the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.

  16. Multivariate methods for evaluating the efficiency of electrodialytic removal of heavy metals from polluted harbour sediments.

    PubMed

    Pedersen, Kristine Bondo; Kirkelund, Gunvor M; Ottosen, Lisbeth M; Jensen, Pernille E; Lejon, Tore

    2015-01-01

    Chemometrics was used to develop a multivariate model based on 46 previously reported electrodialytic remediation experiments (EDR) of five different harbour sediments. The model predicted final concentrations of Cd, Cu, Pb and Zn as a function of current density, remediation time, stirring rate, dry/wet sediment, cell set-up as well as sediment properties. Evaluation of the model showed that remediation time and current density had the highest comparative influence on the clean-up levels. Individual models for each heavy metal showed variance in the variable importance, indicating that the targeted heavy metals were bound to different sediment fractions. Based on the results, a PLS model was used to design five new EDR experiments of a sixth sediment to achieve specified clean-up levels of Cu and Pb. The removal efficiencies were up to 82% for Cu and 87% for Pb and the targeted clean-up levels were met in four out of five experiments. The clean-up levels were better than predicted by the model, which could hence be used for predicting an approximate remediation strategy; the modelling power will however improve with more data included. Copyright © 2014 Elsevier B.V. All rights reserved.
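
    A minimal sketch of the kind of PLS model described above, using synthetic stand-ins for the process variables and final metal concentrations (the variable names and data are illustrative assumptions, not the authors' data set):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      n = 46  # number of reported EDR experiments
      # Columns: current density, remediation time, stirring rate, dry/wet flag, cell set-up, sediment property
      X = rng.normal(size=(n, 6))
      # Responses: final Cd, Cu, Pb and Zn concentrations (synthetic)
      Y = X @ rng.normal(size=(6, 4)) + 0.1 * rng.normal(size=(n, 4))

      pls = PLSRegression(n_components=3).fit(X, Y)
      print("training R^2:", pls.score(X, Y))
      # Variable importance can be inspected through the X weights of each latent component
      print(pls.x_weights_)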

  17. The effect of spatial organization of targets and distractors on the capacity to selectively memorize objects in visual short-term memory

    PubMed Central

    Abbes, Aymen Ben; Gavault, Emmanuelle; Ripoll, Thierry

    2014-01-01

    We conducted a series of experiments to explore how the spatial configuration of objects influences the selection and the processing of these objects in a visual short-term memory task. We designed a new experiment in which participants had to memorize 4 targets presented among 4 distractors. Targets were cued during the presentation of distractor objects. Their locations varied according to 4 spatial configurations. From the first to the last configuration, the distance between targets’ locations was progressively increased. The results revealed a high capacity to select and memorize targets embedded among distractors even when targets were extremely distant from each other. This capacity is discussed in relation to the unitary conception of attention, models of split attention, and the competitive interaction model. Finally, we propose that the spatial dispersion of objects has different effects on attentional allocation and processing stages. Thus, when targets are extremely distant from each other, attentional allocation becomes more difficult while processing becomes easier. This finding implies that these 2 aspects of attention need to be more clearly distinguished in future research. PMID:25339978

  18. Evanescent acoustic waves: Production and scattering by resonant targets

    NASA Astrophysics Data System (ADS)

    Osterhoudt, Curtis F.

    Small targets with acoustic resonances which may be excited by incident acoustic planewaves are shown to possess high-Q modes ("organ-pipe" modes) which may be suitable for ocean-based calibration and ranging purposes. The modes are modeled using a double point-source model; this, along with acoustic reciprocity and inversion symmetry, is shown to adequately model the backscattering form functions of the modes at low frequencies. The backscattering form-functions are extended to apply to any bistatic acoustic experiment using the targets when the target response is dominated by the modes in question. An interface between two fluids which each approximate an unbounded half-space has been produced in the laboratory. The fluids have different sound speeds. When sound is incident on this interface at beyond the critical angle from within the first fluid, the second fluid is made to evince a region dominated by evanescent acoustic energy. Such a system is shown to be a possible laboratory-based proxy for a flat sediment bottom in the ocean, or sloped (unrippled) bottom in littoral environments. The evanescent sound field is characterized and shown to have complicated features despite the simplicity of its production. Notable among these features is the presence of dips in the soundfield amplitude, or "quasi-nulls". These are proposed to be extremely important when considering the return from ocean-based experiments. The soundfield features are also shown to be accurately predicted and characterized by wavenumber-integration software. The targets which exhibit organ-pipe modes in the free-field are shown to also be excited by the evanescent waves, and may be used as soundfield probes when the target returns are well characterized. Alternatively, if the soundfield is well-known, the target parameters may be extracted from back- or bistatic-scattering experiments in evanescent fields. It is shown that the spatial decay rate as measured by a probe directly in the evanescent field is half that as measured by backscattering experiments on horizontal and vertical cylinders driven at the fundamental mode, and it is demonstrated that this is explained by the principle of acoustic reciprocity.

  19. Fluid mechanics aspects of magnetic drug targeting.

    PubMed

    Odenbach, Stefan

    2015-10-01

    Experiments and numerical simulations using a flow phantom for magnetic drug targeting have been undertaken. The flow phantom is a half y-branched tube configuration where the main tube represents an artery from which a tumour-supplying artery, which is simulated by the side branch of the flow phantom, branches off. In the experiments a quantification of the amount of magnetic particles targeted towards the branch by a magnetic field applied via a permanent magnet is achieved by impedance measurement using sensor coils. Measuring the targeting efficiency, i.e. the relative amount of particles targeted to the side branch, for different field configurations one obtains targeting maps which combine the targeting efficiency with the magnetic force densities in characteristic points in the flow phantom. It could be shown that targeting efficiency depends strongly on the magnetic field configuration. A corresponding numerical model has been set up, which allows the simulation of targeting efficiency for variable field configuration. With this simulation good agreement of targeting efficiency with experimental data has been found. Thus, the basis has been laid for future calculations of optimal field configurations in clinical applications of magnetic drug targeting. Moreover, the numerical model allows the variation of additional parameters of the drug targeting process and thus an estimation of the influence, e.g. of the fluid properties on the targeting efficiency. Corresponding calculations have shown that the non-Newtonian behaviour of the fluid will significantly influence the targeting process, an aspect which has to be taken into account, especially recalling the fact that the viscosity of magnetic suspensions depends strongly on the magnetic field strength and the mechanical load.

  20. Planetary and Primitive Object Strength Measurements and Sampling Apparatus

    NASA Technical Reports Server (NTRS)

    Ahrens, Thomas J.

    1997-01-01

    We present experimental data and a model for the low-velocity (subsonic, 0 - 1000 m/s) penetration of brittle materials by both solid and hollow (i.e., coring) penetrators. The experiments show that penetration is proportional to momentum/frontal area of the penetrator. Because of the buildup of a cap in front of blunt penetrators, the presence or absence of a streamlined or sharp front end usually has a negligible effect for impact into targets with strength. The model accurately predicts the dependence of penetration depth on the various parameters of the target-penetrator system, as well as the qualitative condition of the target material ingested by a corer. In particular, penetration depth is approximately inversely proportional to the static bearing strength of the target. The bulk density of the target material has only a small effect on penetration, whereas friction can be significant, especially at higher impact velocities, for consolidated materials. This trend is reversed for impacts into unconsolidated materials. The present results suggest that the depth of penetration is a good measure of the strength, but not the density, of a consolidated target. Both experiments and model results show that, if passage through the mouth of a coring penetrator requires initially porous target material to be compressed to less than 26% porosity, the sample collected by the corer will be highly fragmented. If the final porosity remains above 26%, then most materials, except cohesionless materials, such as dry sand, will be collected as a compressed slug of material.
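
    Reading the stated scaling directly into code gives a simple estimator (the proportionality constant k is a placeholder, not a value fitted from these experiments):

      import math

      def penetration_depth(mass_kg, velocity_ms, diameter_m, bearing_strength_pa, k=1.0):
          """Depth ~ (momentum / frontal area) / static bearing strength, per the scaling above."""
          frontal_area = math.pi * (diameter_m / 2.0) ** 2
          return k * (mass_kg * velocity_ms / frontal_area) / bearing_strength_pa

      # Example: 0.1 kg, 2 cm diameter penetrator at 300 m/s into a 10 MPa target
      print(penetration_depth(0.1, 300.0, 0.02, 10e6))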

  1. Experimental design and data analysis of Ago-RIP-Seq experiments for the identification of microRNA targets.

    PubMed

    Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger

    2017-03-31

    The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for the genome-wide miRNA target identification have been developed in recent years; however, they have several limitations including the dependence on low-confidence prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms taking into account the experimental design and the inherent noise of such experiments are largely lacking. Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. Statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them by a real data simulation study using plasmode data sets and evaluate the suitability of the approaches to detect true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches like linear regression models on (appropriately) transformed read count data are preferable. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
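
    The two analysis families compared above can be sketched as follows; the read counts and group labels are synthetic assumptions, not Ago-RIP-Seq data:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      group = np.repeat([0, 1], 6)                     # control vs. IP libraries
      counts = rng.negative_binomial(n=5, p=0.3, size=12) + group * 20

      X = sm.add_constant(group)

      # Negative-binomial GLM fitted to the raw read counts
      nb_fit = sm.GLM(counts.astype(float), X, family=sm.families.NegativeBinomial()).fit()

      # Simple linear model on log2-transformed counts (the kind of approach favoured above)
      lin_fit = sm.OLS(np.log2(counts + 1.0), X).fit()

      print(nb_fit.params, lin_fit.params)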

  2. Computational design of short pulse laser driven iron opacity experiments

    DOE PAGES

    Martin, M. E.; London, R. A.; Goluoglu, S.; ...

    2017-02-23

    Here, the resolution of current disagreements between solar parameters calculated from models and observations would benefit from the experimental validation of theoretical opacity models. Iron's complex ionic structure and large contribution to the opacity in the radiative zone of the sun make iron a good candidate for validation. Short pulse lasers can be used to heat buried layer targets to plasma conditions comparable to the radiative zone of the sun, and the frequency dependent opacity can be inferred from the target's measured x-ray emission. Target and laser parameters must be optimized to reach specific plasma conditions and meet x-ray emission requirements. The HYDRA radiation hydrodynamics code is used to investigate the effects of modifying laser irradiance and target dimensions on the plasma conditions, x-ray emission, and inferred opacity of iron and iron-magnesium buried layer targets. It was determined that plasma conditions are dominantly controlled by the laser energy and the tamper thickness. The accuracy of the inferred opacity is sensitive to tamper emission and optical depth effects. Experiments at conditions relevant to the radiative zone of the sun would investigate the validity of opacity theories important to resolving disagreements between solar parameters calculated from models and observations.

  3. Computational design of short pulse laser driven iron opacity experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, M. E.; London, R. A.; Goluoglu, S.

    Here, the resolution of current disagreements between solar parameters calculated from models and observations would benefit from the experimental validation of theoretical opacity models. Iron's complex ionic structure and large contribution to the opacity in the radiative zone of the sun make iron a good candidate for validation. Short pulse lasers can be used to heat buried layer targets to plasma conditions comparable to the radiative zone of the sun, and the frequency dependent opacity can be inferred from the target's measured x-ray emission. Target and laser parameters must be optimized to reach specific plasma conditions and meet x-ray emission requirements. The HYDRA radiation hydrodynamics code is used to investigate the effects of modifying laser irradiance and target dimensions on the plasma conditions, x-ray emission, and inferred opacity of iron and iron-magnesium buried layer targets. It was determined that plasma conditions are dominantly controlled by the laser energy and the tamper thickness. The accuracy of the inferred opacity is sensitive to tamper emission and optical depth effects. Experiments at conditions relevant to the radiative zone of the sun would investigate the validity of opacity theories important to resolving disagreements between solar parameters calculated from models and observations.

  4. Reconstruction algorithms based on l1-norm and l2-norm for two imaging models of fluorescence molecular tomography: a comparative study.

    PubMed

    Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie

    2013-05-01

    Fluorescence molecular tomography (FMT) is an important imaging technique of optical imaging. The major challenge of the reconstruction method for FMT is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. A comparative study between the reconstruction algorithms based on l1-norm and l2-norm for two imaging models of FMT is presented. The first imaging model is adopted by most researchers, where the fluorescent target is of small size to mimic small tissue with fluorescent substance, as demonstrated by the early detection of a tumor. The second model is the reconstruction of distribution of the fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
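
    The l1/l2 comparison can be illustrated on a toy underdetermined inverse problem; the random forward matrix below merely stands in for the FMT system matrix:

      import numpy as np
      from sklearn.linear_model import Lasso, Ridge

      rng = np.random.default_rng(3)
      n_meas, n_vox = 80, 400                # fewer measurements than unknowns
      A = rng.normal(size=(n_meas, n_vox))

      x_true = np.zeros(n_vox)
      x_true[[50, 51, 52]] = 1.0             # small, localized fluorescent target (favours l1)
      y = A @ x_true + 0.01 * rng.normal(size=n_meas)

      x_l1 = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_
      x_l2 = Ridge(alpha=1.0).fit(A, y).coef_

      print("l1 non-zeros:", int(np.sum(np.abs(x_l1) > 1e-3)))
      print("l2 non-zeros:", int(np.sum(np.abs(x_l2) > 1e-3)))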

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goncharov, V. N.; Skupsky, S.; Boehly, T. R.

    Irradiation nonuniformities in direct-drive (DD) inertial confinement fusion experiments generate, or "imprint," surface modulations that degrade the symmetry of the implosion and reduce the target performance. To gain physical insight, an analytical model of imprint is developed. The model takes into account the hydrodynamic flow, the dynamics of the conduction zone, and the mass ablation. The important parameters are found to be the time scale for plasma atmosphere formation and the ablation velocity. The model is validated by comparisons to detailed two-dimensional (2D) hydrocode simulations. The results of the model and simulations are in good agreement with a series of planar-foil imprint experiments performed on the OMEGA laser system [T.R. Boehly, D.L. Brown, R.S. Craxton et al., Opt. Commun. 133, 495 (1997)]. Direct-drive National Ignition Facility's [J.A. Paisner, J.D. Boyes, S.A. Kumpan, W.H. Lowdermilk, and M.S. Sorem, Laser Focus World 30, 75 (1994)] cryogenic targets are shown to have gains larger than 10 when the rms laser-irradiation nonuniformity is reduced by 2D smoothing by spectral dispersion (SSD) used in the current DD target designs.

  6. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle filtering based tracking process. This scheme relies on exploitation of motion segmentation. More precisely, we propagate hypotheses from the particle filter to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support, and this model is used in the correction step of the tracking process. The importance sampling scheme and the target-model update strategy improve the performance of particle filtering in complex occlusion situations compared with a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
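
    A toy bootstrap particle filter in which part of the prediction is re-seeded around a motion-segmented blob gives the flavour of the scheme; the dynamics, likelihood, and blob position are assumptions, and a full implementation would also correct the importance weights for the modified proposal:

      import numpy as np

      rng = np.random.default_rng(4)
      n_p = 500
      particles = rng.normal([50.0, 50.0], 10.0, size=(n_p, 2))   # (x, y) hypotheses
      weights = np.full(n_p, 1.0 / n_p)

      def step(particles, weights, observation, blob_center, mix=0.3):
          n = len(particles)
          # Prediction: random walk for most particles, a fraction re-seeded on the blob
          proposed = particles + rng.normal(0.0, 2.0, size=particles.shape)
          n_seed = int(mix * n)
          proposed[:n_seed] = blob_center + rng.normal(0.0, 2.0, size=(n_seed, 2))
          # Correction: Gaussian likelihood of the appearance/position observation
          dist2 = np.sum((proposed - observation) ** 2, axis=1)
          w = weights * np.exp(-0.5 * dist2 / 4.0 ** 2)
          w /= w.sum()
          idx = rng.choice(n, size=n, p=w)                        # resampling
          return proposed[idx], np.full(n, 1.0 / n)

      particles, weights = step(particles, weights,
                                observation=np.array([60.0, 55.0]),
                                blob_center=np.array([58.0, 54.0]))
      print("state estimate:", particles.mean(axis=0))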

  7. Ultrasonic brain therapy: First trans-skull in vivo experiments on sheep using adaptive focusing

    NASA Astrophysics Data System (ADS)

    Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Michael; Fink, Mathias; Boch, Anne-Laure; Kujas, Michèle

    2004-05-01

    A high-power prototype dedicated to trans-skull therapy has been tested in vivo on 20 sheep. The array is made of 200 high-power transducers working at a 1-MHz central frequency and is able to reach 260 bars at focus in water. An echographic array connected to a Philips HDI 1000 system has been inserted in the therapeutic array in order to perform real-time monitoring of the treatment. A complete craniotomy has been performed on half of the treated animals in order to get a reference model. On the other animals, a minimally invasive surgery has been performed thanks to a time-reversal experiment: a hydrophone was inserted at the target inside the brain through a 1-mm2 craniotomy. A time-reversal experiment was then conducted through the skull bone with the therapeutic array to treat the targeted point. For all the animals a specified region around the target was treated using electronic beam steering. Animals were finally divided into three groups and sacrificed, respectively, 0, 1, and 2 weeks after treatment. Finally, histological examination confirmed tissue damage. These in vivo experiments highlight the strong potential of high-power time-reversal technology.

  8. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-09-09

    Aluminum and ceramic-faced aluminum targets impacted by the .30cal AP M2 projectile were modeled using SPH elements. Model validation runs were conducted based on the DoP (depth-of-penetration) experiments described in the referenced report, and the effect of material properties on DoP was examined. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments.

  9. First PIC simulations modeling the interaction of ultra-intense lasers with sub-micron, liquid crystal targets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Poole, Patrick; Willis, Christopher; Andereck, David; Schumacher, Douglass

    2014-10-01

    We recently introduced liquid crystal films as on-demand, variable thickness (50-5000 nanometers), low cost targets for intense laser experiments. Here we present the first particle-in-cell (PIC) simulations of short pulse laser excitation of liquid crystal targets treating Scarlet (OSU) class lasers using the PIC code LSP. In order to accurately model the target evolution, a low starting temperature and field ionization model are employed. This is essential as large starting temperatures, often used to achieve large Debye lengths, lead to expansion of the target causing significant reduction of the target density before the laser pulse can interact. We also present an investigation of the modification of laser pulses by very thin targets. This work was supported by the DARPA PULSE program through a grant from ARMDEC, by the US Department of Energy under Contract No. DE-NA0001976, and allocations of computing time from the Ohio Supercomputing Center.

  10. Impacts into quartz sand: Crater formation, shock metamorphism, and ejecta distribution in laboratory experiments and numerical models

    NASA Astrophysics Data System (ADS)

    Wünnemann, Kai; Zhu, Meng-Hua; Stöffler, Dieter

    2016-10-01

    We investigated the ejection mechanics by a complementary approach of cratering experiments, including the microscopic analysis of material sampled from these experiments, and 2-D numerical modeling of vertical impacts. The study is based on cratering experiments in quartz sand targets performed at the NASA Ames Vertical Gun Range. In these experiments, the preimpact location in the target and the final position of ejecta was determined by using color-coded sand and a catcher system for the ejecta. The results were compared with numerical simulations of the cratering and ejection process to validate the iSALE shock physics code. In turn the models provide further details on the ejection velocities and angles. We quantify the general assumption that ejecta thickness decreases with distance according to a power-law and that the relative proportion of shocked material in the ejecta increase with distance. We distinguish three types of shock metamorphic particles (1) melt particles, (2) shock lithified aggregates, and (3) shock-comminuted grains. The agreement between experiment and model was excellent, which provides confidence that the models can predict ejection angles, velocities, and the degree of shock loading of material expelled from a crater accurately if impact parameters such as impact velocity, impactor size, and gravity are varied beyond the experimental limitations. This study is relevant for a quantitative assessment of impact gardening on planetary surfaces and the evolution of regolith layers on atmosphereless bodies.

  11. On the accuracy and reliability of predictions by control-system theory.

    PubMed

    Bourbon, W T; Copeland, K E; Dyer, V R; Harman, W K; Mosley, B L

    1990-12-01

    In three experiments we used control-system theory (CST) to predict the results of tracking tasks on which people held a handle to keep a cursor even with a target on a computer screen. 10 people completed a total of 104 replications of the task. In each experiment, there were two conditions: in one, only the handle affected the position of the cursor; in the other, a random disturbance also affected the cursor. From a person's performance during Condition 1, we derived constants used in the CST model to predict the results of Condition 2. In two experiments, predictions occurred a few minutes before Condition 2; in one experiment, the delay was 1 yr. During a 1-min. experimental run, the positions of handle and cursor, produced by the person, were each sampled 1800 times, once every 1/30 sec. During a modeling run, the model predicted the positions of the handle and target for each of the 1800 intervals sampled in the experimental run. In 104 replications, the mean correlation between predicted and actual positions of the handle was .996; SD = .002.
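
    The CST tracking model referred to above is, at its core, a single integrating control loop; a minimal simulation (gain and disturbance values are assumptions) looks like this:

      import numpy as np

      rng = np.random.default_rng(5)
      dt, n = 1.0 / 30.0, 1800                             # 1-min run sampled 1800 times
      target = np.zeros(n)                                 # stationary target
      disturbance = np.cumsum(rng.normal(0.0, 0.05, n))    # slowly drifting disturbance

      gain = 8.0                                           # constant derived from Condition 1 data
      handle = np.zeros(n)
      cursor = np.zeros(n)
      for t in range(1, n):
          cursor[t - 1] = handle[t - 1] + disturbance[t - 1]
          error = target[t - 1] - cursor[t - 1]
          handle[t] = handle[t - 1] + gain * error * dt    # integrating controller

      # In the study, handle positions predicted this way correlated ~.996 with observed positions.
      print(handle[-5:])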

  12. Searching for a dark photon with DarkLight

    NASA Astrophysics Data System (ADS)

    Corliss, R.; DarkLight Collaboration

    2017-09-01

    Despite compelling astrophysical evidence for the existence of dark matter in the universe, we have yet to positively identify it in any terrestrial experiment. If such matter is indeed particulate in nature, it may have a new interaction as well, carried by a dark counterpart to the photon. The DarkLight experiment proposes to search for such a beyond-the-standard-model dark photon through complete reconstruction of the final states of electron-proton collisions. In order to accomplish this, the experiment requires a moderate-density target and a very high intensity, low energy electron beam. I describe DarkLight's approach and focus on the implications this has for the design of the experiment, which centers on the use of an internal gas target in Jefferson Lab's Low Energy Recirculating Facility. I also discuss upcoming beam tests, where we will place our target and solenoidal magnet in the beam for the first time.

  13. Detection performance in clutter with variable resolution

    NASA Astrophysics Data System (ADS)

    Schmieder, D. E.; Weathersby, M. R.

    1983-07-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal-to-clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
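
    The clutter statistic described above (cell-wise standard deviations pooled over the scene, with cells about the size of the target) can be computed directly; the image and cell size here are placeholders:

      import numpy as np

      rng = np.random.default_rng(6)
      scene = rng.normal(size=(512, 512))
      cell = 32                                     # cell edge length comparable to the target size

      stds = [scene[i:i + cell, j:j + cell].std()
              for i in range(0, scene.shape[0], cell)
              for j in range(0, scene.shape[1], cell)]
      clutter = np.sqrt(np.mean(np.square(stds)))   # pooled (RMS) cell standard deviation

      target_contrast = 5.0                         # assumed target-to-background contrast
      print("SCR =", target_contrast / clutter)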

  14. Investigating the empirical support for therapeutic targets proposed by the temporal experience of pleasure model in schizophrenia: A systematic review.

    PubMed

    Edwards, Clementine J; Cella, Matteo; Tarrier, Nicholas; Wykes, Til

    2015-10-01

    Anhedonia and amotivation are substantial predictors of poor functional outcomes in people with schizophrenia and often present a formidable barrier to returning to work or building relationships. The Temporal Experience of Pleasure Model proposes constructs which should be considered therapeutic targets for these symptoms in schizophrenia e.g. anticipatory pleasure, memory, executive functions, motivation and behaviours related to the activity. Recent reviews have highlighted the need for a clear evidence base to drive the development of targeted interventions. To review systematically the empirical evidence for each TEP model component and propose evidence-based therapeutic targets for anhedonia and amotivation in schizophrenia. Following PRISMA guidelines, PubMed and PsycInfo were searched using the terms "schizophrenia" and "anhedonia". Studies were included if they measured anhedonia and participants had a diagnosis of schizophrenia. The methodology, measures and main findings from each study were extracted and critically summarised for each TEP model construct. 80 independent studies were reviewed and executive functions, emotional memory and the translation of motivation into actions are highlighted as key deficits with a strong evidence base in people with schizophrenia. However, there are many relationships that are unclear because the empirical work is limited by over-general tasks and measures. Promising methods for research which have more ecological validity include experience sampling and behavioural tasks assessing motivation. Specific adaptations to Cognitive Remediation Therapy, Cognitive Behavioural Therapy and the utilisation of mobile technology to enhance representations and emotional memory are recommended for future development. Copyright © 2015. Published by Elsevier B.V.

  15. Supervised target detection in hyperspectral images using one-class Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah

    2016-05-01

    A novel hyperspectral target detection technique based on the Fukunaga-Koontz transform (FKT) is presented. FKT offers significant properties for feature selection and ordering. However, it can only be used to solve multi-pattern classification problems. Target detection may be considered as a two-class classification problem, i.e., target versus background clutter. Nevertheless, background clutter typically contains many different types of materials; for this reason, target detection techniques differ from classification methods in how the clutter is modeled. To avoid modeling the background clutter, we developed a one-class FKT (OC-FKT) for target detection. The statistical properties of the target training samples are used to define a tunnel-like boundary of the target class. Non-target samples are then created synthetically so as to lie outside this boundary, so a limited number of target samples is sufficient for training the FKT. Hyperspectral image experiments confirm that the proposed OC-FKT technique provides an effective means for target detection.
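
    The two-class transform underlying the one-class variant can be sketched with synthetic spectra standing in for target and synthetically generated non-target samples:

      import numpy as np

      rng = np.random.default_rng(7)
      d = 20                                         # number of spectral bands
      target = rng.normal(0.0, 1.0, size=(100, d)) + 2.0
      nontarget = rng.normal(0.0, 1.0, size=(100, d))

      S1 = np.cov(target, rowvar=False)
      S2 = np.cov(nontarget, rowvar=False)

      # Whitening operator P such that P (S1 + S2) P^T = I
      evals, evecs = np.linalg.eigh(S1 + S2)
      P = np.diag(evals ** -0.5) @ evecs.T

      # In the whitened space the class covariances share eigenvectors and their
      # eigenvalues sum to one; large eigenvalues of S1w indicate target-dominant directions.
      S1w = P @ S1 @ P.T
      lam, V = np.linalg.eigh(S1w)
      fkt_basis = V.T @ P                            # rows ordered by increasing target energy
      print("target-dominant eigenvalues:", lam[-3:])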

  16. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted-tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, and having a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially and have a movement time component dependent on the difficulty of the individual component of the task. In some cases, the task time may be controlled by Fitts' task difficulty, and in others, it may be dominated by Drury's task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted-tracking was developed.
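
    A sequential-component movement-time model of the kind proposed above can be written down by assuming the common forms MT_Fitts = a + b*log2(2A/W) for the terminal aiming phase and MT_Drury = c + d*(A/W) for the tracking phase; the coefficients below are placeholders, not the paper's fitted values:

      import math

      def movement_time(track_length, path_width, target_distance, target_width,
                        a=0.10, b=0.15, c=0.05, d=0.02):
          mt_track = c + d * (track_length / path_width)                     # Drury-type component
          mt_aim = a + b * math.log2(2.0 * target_distance / target_width)   # Fitts-type component
          return mt_track + mt_aim

      print(movement_time(track_length=200, path_width=10, target_distance=50, target_width=5))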

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru

    The main parameters of target compression, and the trends as the irradiation conditions are changed, are determined by analyzing the published results of experiments at the megajoule National Ignition Facility (NIF) on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in spherical geometry. A possible version of the “failure of ignition” of an indirect-irradiation target under the NIF conditions is attributed to radiation transfer. Applying the one-dimensional model to analyze the National Ignition Campaign (NIC) experiments allows identifying conditions corresponding to the future ignition regime and distinguishing them from conditions under which ignition does not occur.

  18. Observer age and the social transmission of attractiveness in humans: Younger women are more influenced by the choices of popular others than older women.

    PubMed

    Little, Anthony C; Caldwell, Christine A; Jones, Benedict C; DeBruine, Lisa M

    2015-08-01

    Being paired with an attractive partner increases perceptual judgements of attractiveness in humans. We tested experimentally for prestige bias, whereby individuals follow the choices of prestigious others. Women rated the attractiveness of photographs of target males which were paired with either popular or less popular model female partners. We found that pairing a photo of a man with a woman presented as his partner positively influenced the attractiveness of the man when the woman was presented as more popular (Experiment 1). Further, this effect was stronger in younger participants compared to older participants (Experiment 1). Reversing the target and model such that women were asked to rate women paired with popular and less popular men revealed no effect of model popularity and this effect was unrelated to participant age (Experiment 2). An additional experiment confirmed that participant age and not stimulus age primarily influenced the tendency to follow others' preferences in Experiment 1 (Experiment 3). We also confirmed that our manipulations of popularity lead to variation in rated prestige (Experiment 4). These results suggest a sophisticated model-based bias in social learning whereby individuals are most influenced by the choices of those who have high popularity/prestige. Furthermore, older individuals moderate their use of such social information and so this form of social learning appears strongest in younger women. © 2014 The British Psychological Society.

  19. Evolution of egg target size: an analysis of selection on correlated characters.

    PubMed

    Podolsky, R D

    2001-12-01

    In broadcast-spawning marine organisms, chronic sperm limitation should select for traits that improve chances of sperm-egg contact. One mechanism may involve increasing the size of the physical or chemical target for sperm. However, models of fertilization kinetics predict that increasing egg size can reduce net zygote production due to an associated decline in fecundity. An alternate method for increasing physical target size is through addition of energetically inexpensive external structures, such as the jelly coats typical of eggs in species from several phyla. In selection experiments on eggs of the echinoid Dendraster excentricus, in which sperm was used as the agent of selection, eggs with larger overall targets were favored in fertilization. Actual shifts in target size following selection matched quantitative predictions of a model that assumed fertilization was proportional to target size. Jelly volume and ovum volume, two characters that contribute to target size, were correlated both within and among females. A cross-sectional analysis of selection partitioned the independent effects of these characters on fertilization success and showed that they experience similar direct selection pressures. Coupled with data on relative organic costs of the two materials, these results suggest that, under conditions where fertilization is limited by egg target size, selection should favor investment in low-cost accessory structures and may have a relatively weak effect on the evolution of ovum size.

  20. Over Target Baseline: Lessons Learned from the NASA SLS Booster Element

    NASA Technical Reports Server (NTRS)

    Carroll, Truman J.

    2016-01-01

    Goal of the presentation is to teach, and then model, the steps necessary to implement an Over Target Baseline (OTB). More than a policy and procedure session, participants will learn from recent first hand experience the challenges and benefits that come from successfully executing an OTB.

  1. EMC3-EIRENE modelling of toroidally-localized divertor gas injection experiments on Alcator C-Mod

    DOE PAGES

    Lore, Jeremy D.; Reinke, M. L.; LaBombard, Brian; ...

    2014-09-30

    Experiments on Alcator C-Mod with toroidally and poloidally localized divertor nitrogen injection have been modeled using the three-dimensional edge transport code EMC3-EIRENE to elucidate the mechanisms driving measured toroidal asymmetries. In these experiments five toroidally distributed gas injectors in the private flux region were sequentially activated in separate discharges resulting in clear evidence of toroidal asymmetries in radiated power and nitrogen line emission as well as a ~50% toroidal modulation in electron pressure at the divertor target. The pressure modulation is qualitatively reproduced by the modelling, with the simulation yielding a toroidal asymmetry in the heat flow to the outer strike point. Finally, toroidal variation in impurity line emission is qualitatively matched in the scrape-off layer above the strike point, however kinetic corrections and cross-field drifts are likely required to quantitatively reproduce impurity behavior in the private flux region and electron temperatures and densities directly in front of the target.

  2. A Model for the Application of Target-Controlled Intravenous Infusion for a Prolonged Immersive DMT Psychedelic Experience.

    PubMed

    Gallimore, Andrew R; Strassman, Rick J

    2016-01-01

    The state of consciousness induced by N,N-dimethyltryptamine (DMT) is one of the most extraordinary of any naturally-occurring psychedelic substance. Users consistently report the complete replacement of normal subjective experience with a novel "alternate universe," often densely populated with a variety of strange objects and other highly complex visual content, including what appear to be sentient "beings." The phenomenology of the DMT state is of great interest to psychology and calls for rigorous academic enquiry. The extremely short duration of DMT effects-less than 20 min-militates against single dose administration as the ideal model for such enquiry. Using pharmacokinetic modeling and DMT blood sampling data, we demonstrate that the unique pharmacological characteristics of DMT, which also include a rapid onset and lack of acute tolerance to its subjective effects, make it amenable to administration by target-controlled intravenous infusion. This is a technology developed to maintain a stable brain concentration of anesthetic drugs during surgery. Simulations of our model demonstrate that this approach will allow research subjects to be induced into a stable and prolonged DMT experience, making it possible to carefully observe its psychological contents, and provide more extensive accounts for subsequent analyses. This model would also be valuable in performing functional neuroimaging, where subjects are required to remain under the influence of the drug for extended periods. Finally, target-controlled intravenous infusion of DMT may aid the development of unique psychotherapeutic applications of this psychedelic agent.
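
    A deliberately simplified illustration of the idea is a one-compartment infusion controller that recomputes the rate at each step to hold a target plasma concentration; the actual proposal uses a multi-compartment DMT pharmacokinetic model, and every parameter below is an assumption in arbitrary units:

      import numpy as np

      V = 30.0              # apparent volume of distribution (assumed)
      ke = 0.02             # elimination rate constant per second (assumed)
      target_conc = 100.0   # target plasma concentration (arbitrary units)
      dt, duration = 1.0, 600.0

      conc = 0.0
      for _ in np.arange(0.0, duration, dt):
          # Infusion rate chosen to cancel elimination and close the remaining gap in one step
          rate = max(0.0, V * (ke * conc + (target_conc - conc) / dt))
          conc += dt * (rate / V - ke * conc)

      print(f"plasma concentration after {duration:.0f} s: {conc:.1f}")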

  3. Research on target information optics communications transmission characteristic and performance in multi-screens testing system

    NASA Astrophysics Data System (ADS)

    Li, Hanshan

    2016-04-01

    To enhance the stability and reliability of the multi-screens testing system, this paper studies the properties and long-distance performance of the multi-screens target optical information transmission link. It sets up a discrete multi-tone modulation transmission model based on the geometric model of the laser multi-screens testing system and the principles of visible-light communication; analyzes the electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system; investigates the target information transmission performance and the transfer function of the generalized visible-light communication channel; establishes a light-intensity spatial distribution model and distribution function for the transmission link; and derives the SNR model of the communication system. Calculations and experimental analysis show that, for a given channel modulation depth, the transmission error rate increases with the transmission rate; when an appropriate transmission rate is selected, the bit error rate reaches 0.01.

  4. Infrared small target detection based on Danger Theory

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Yang, Xiao

    2009-11-01

    To address the problem that traditional methods cannot detect small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model for infrared small target detection is presented in this paper. First, by analogy with immunology, definitions are given for terms such as danger signal, antigen, APC, and antibody, and the matching rule between antigen and antibody is improved. Prior to training the detection model and detecting the targets, the IR images are processed with an adaptive smoothing filter to reduce stochastic noise. During training, the deletion, generation, crossover, and mutation rules are established after a large number of experiments in order to achieve rapid convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect targets whose local SNR is only 1.5.

  5. A Target Aware Texture Mapping for Sculpture Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zhang, F.; Huang, X.; Li, D.; Zhu, Y.

    2017-08-01

    In this paper, we propose a target-aware image-to-model registration method that uses silhouettes as the matching clues. The target sculpture in a natural environment can be detected automatically from an image with a complex background with the assistance of 3D geometric data. The silhouette can then be extracted automatically and applied to image-to-model matching. Because the user does not need to deliberately outline the target area, the time required for precise image-to-model matching is greatly reduced. To extend the method, we also improved the silhouette matching algorithm to support conditional silhouette matching. Two experiments, using a stone lion sculpture of the Ming Dynasty and a portable relic in a museum, are presented to evaluate the proposed method. The method has been extended and developed into a mature software package applied in many cultural heritage documentation projects.

  6. Rapid estimation of high-parameter auditory-filter shapes

    PubMed Central

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
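
    For orientation, one standard asymmetric auditory-filter parameterization (a rounded-exponential shape with separate lower and upper slopes) can be written as below; this is an assumption for illustration, and the five-parameter model estimated by the qAF procedure may differ in detail:

      import numpy as np

      def roex_weight(f, fc, p_lower, p_upper):
          """Filter weight at frequency f for a filter centred at fc (asymmetric roex shape)."""
          g = np.abs(f - fc) / fc                    # normalized deviation from the centre frequency
          p = np.where(f < fc, p_lower, p_upper)     # different slopes below and above the peak
          return (1.0 + p * g) * np.exp(-p * g)

      f = np.linspace(1000.0, 3000.0, 5)
      print(roex_weight(f, fc=2000.0, p_lower=25.0, p_upper=35.0))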

  7. CCM-C, Collins checks the middeck experiment

    NASA Image and Video Library

    1999-07-24

    S93-E-5016 (23 July 1999) --- Astronaut Eileen M. Collins, mission commander, checks on an experiment on Columbia's middeck during Flight Day 1 activity. The experiment is called the Cell Culture Model, Configuration C. Objectives of it are to validate cell culture models for muscle, bone and endothelial cell biochemical and functional loss induced by microgravity stress; to evaluate cytoskeleton, metabolism, membrane integrity and protease activity in target cells; and to test tissue loss pharmaceuticals for efficacy. The photo was recorded with an electronic still camera (ESC).

  8. Evaluation of an Imputed Pitch Velocity Model of the Auditory Kappa Effect

    ERIC Educational Resources Information Center

    Henry, Molly J.; McAuley, J. Devin

    2009-01-01

    Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600…

  9. Flyer Target Acceleration and Energy Transfer at its Collision with Massive Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borodziuk, S.; Kasperczuk, A.; Pisarczyk, T.

    2006-01-15

    Numerical modelling was aimed at simulating the successive events resulting from the interaction of the laser beam with single and double targets. It was performed by means of the 2D Lagrangian hydrodynamics code ATLANT-HE. This code is based on a one-fluid, two-temperature model of plasma with electron and ion heat conductivity, and has an advanced treatment of laser light propagation and absorption. The numerical modelling corresponds to the experiment, which was carried out with the use of the PALS facility. Two types of planar solid targets were used: single massive Al slabs, and double targets consisting of a 6 μm thick Al foil and an Al slab. The targets were irradiated by iodine laser pulses of two wavelengths, 1.315 and 0.438 μm, with a pulse duration of 0.4 ns and a focal spot diameter of 250 μm at a laser energy of 130 J. The numerical modelling allowed us to obtain a more detailed description of shock wave propagation and crater formation.

  10. Total variation-based method for radar coincidence imaging with model mismatch for extended target

    NASA Astrophysics Data System (ADS)

    Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang

    2017-11-01

    Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as desired; unfortunately, such precision is almost impossible due to the existence of model mismatch in practical applications. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore sought to derive the signal model of RCI with model mismatch by replacing the sparsity constraint term with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, due to the ability of TV regularization to recover a sparse signal or an image with a sparse gradient, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves good imaging performance both in suppressing noise and in adapting to model mismatch.
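
    The role of the TV term can be illustrated on a 1-D toy problem, minimizing ||Ax - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps) by gradient descent; the perturbation-matrix update that distinguishes TV-TLS is omitted here, and all values are synthetic:

      import numpy as np

      rng = np.random.default_rng(8)
      n, m = 100, 60
      A = rng.normal(size=(m, n))
      x_true = np.zeros(n)
      x_true[30:60] = 1.0                           # piecewise-constant (extended) target
      y = A @ x_true + 0.05 * rng.normal(size=m)

      lam, eps, step = 0.5, 1e-6, 1e-4
      x = np.zeros(n)
      for _ in range(5000):
          diff = np.diff(x)
          w = diff / np.sqrt(diff ** 2 + eps)       # derivative of the smoothed absolute value
          tv_grad = np.zeros(n)
          tv_grad[:-1] -= w
          tv_grad[1:] += w
          grad = 2.0 * A.T @ (A @ x - y) + lam * tv_grad
          x -= step * grad

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))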

  11. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characterized model of information perception that exploits the fly system's fast and accurate small target detection in complex, varying natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing the prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual-system method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the information acquisition, compression, and fusion mechanisms of the fly visual system, the function of the pool cell, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  12. The "common good" phenomenon: Why similarities are positive and differences are negative.

    PubMed

    Alves, Hans; Koch, Alex; Unkelbach, Christian

    2017-04-01

    Positive attributes are more prevalent than negative attributes in the social environment. From this basic assumption, 2 implications follow that have been overlooked thus far: Positive compared with negative attributes are more likely to be shared by individuals, and people's shared attributes (similarities) are more positive than their unshared attributes (differences). Consequently, similarity-based comparisons should lead to more positive evaluations than difference-based comparisons. We formalized our probabilistic reasoning in a model and tested its predictions in a simulation and 8 experiments (N = 1,181). When participants generated traits about 2 target persons, positive compared with negative traits were more likely to be shared by the targets (Experiment 1a) and by other participants' targets (Experiment 1b). Conversely, searching for targets' shared traits resulted in more positive traits than searching for unshared traits (Experiments 2, 4a, and 4b). In addition, positive traits were more accessible than negative traits among shared traits but not among unshared traits (Experiment 3). Finally, shared traits were only more positive when positive traits were indeed prevalent (Experiments 5 and 6). The current framework has a number of implications for comparison processes and provides a new interpretation of well-known evaluative asymmetries such as intergroup bias and self-superiority effects. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
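    The abstract mentions testing the probabilistic argument in a simulation; a minimal sketch of that logic is given below. The pool size and prevalence values are purely illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # assumed attribute pool: positive attributes are more prevalent than negative ones
    n_attributes = 100
    is_positive = np.arange(n_attributes) < 70          # 70% of attributes are positive
    prevalence = np.where(is_positive, 0.6, 0.2)        # positives apply to people more often

    def sample_person():
        """Each attribute independently applies to a person with its prevalence."""
        return rng.random(n_attributes) < prevalence

    shared_pos = shared_tot = unshared_pos = unshared_tot = 0
    for _ in range(5000):
        a, b = sample_person(), sample_person()
        shared, unshared = a & b, a ^ b
        shared_pos += np.sum(shared & is_positive);     shared_tot += np.sum(shared)
        unshared_pos += np.sum(unshared & is_positive); unshared_tot += np.sum(unshared)

    # shared attributes come out more positive than unshared ones, as the model predicts
    print("P(positive | shared)   =", shared_pos / shared_tot)
    print("P(positive | unshared) =", unshared_pos / unshared_tot)
    ```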

  13. The attentional blink reveals serial working memory encoding: evidence from virtual and human event-related potentials.

    PubMed

    Craston, Patrick; Wyble, Brad; Chennu, Srivas; Bowman, Howard

    2009-03-01

    Observers often miss a second target (T2) if it follows an identified first target item (T1) within half a second in rapid serial visual presentation (RSVP), a finding termed the attentional blink. If two targets are presented in immediate succession, however, accuracy is excellent (Lag 1 sparing). The resource sharing hypothesis proposes a dynamic distribution of resources over a time span of up to 600 msec during the attentional blink. In contrast, the ST(2) model argues that working memory encoding is serial during the attentional blink and that, due to joint consolidation, Lag 1 is the only case where resources are shared. Experiment 1 investigates the P3 ERP component evoked by targets in RSVP. The results suggest that, in this context, P3 amplitude is an indication of bottom-up strength rather than a measure of cognitive resource allocation. Experiment 2, employing a two-target paradigm, suggests that T1 consolidation is not affected by the presentation of T2 during the attentional blink. However, if targets are presented in immediate succession (Lag 1 sparing), they are jointly encoded into working memory. We use the ST(2) model's neural network implementation, which replicates a range of behavioral results related to the attentional blink, to generate "virtual ERPs" by summing across activation traces. We compare virtual to human ERPs and show how the results suggest a serial nature of working memory encoding as implied by the ST(2) model.

  14. Qweak Data Analysis for Target Modeling Using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Moore, Michael; Covrig, Silviu

    2015-04-01

    The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target met the design goals of <1% luminosity reduction and <5% contribution to the total asymmetry width (the Qweak target achieved 2%, or 55 ppm). State-of-the-art time-dependent CFD simulations are being developed to improve the predictions of target noise on the time scale of the electron beam helicity period. These predictions will be benchmarked against the Qweak target data. This work is an essential ingredient in future designs of very high power, low noise targets such as MOLLER (5 kW, target noise asymmetry contribution < 25 ppm) and MESA (4.5 kW).

  15. Application of constraint-based satellite mission planning model in forest fire monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Bingjun; Wang, Hongfei; Wu, Peng

    2017-10-01

    In this paper, a constraint-based satellite mission planning model is established based on the idea of constraint satisfaction. It includes targets, requests, observations, satellites, payloads and other elements, linked by constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model and produce the observation plan and the data transmission plan. Two simulation experiments were designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model provides good emergency response capability: efficient and reasonable plans can be worked out to meet users' needs in complex cases involving multiple payloads, multiple targets and variable priorities.
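    A minimal sketch of the greedy assignment idea follows. The request fields, priorities, and time-window handling are simplified assumptions rather than the paper's full constraint model.

    ```python
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Request:
        target: str
        priority: int                        # higher = more urgent (e.g. an active fire)
        windows: List[Tuple[float, float]]   # feasible observation windows (start, end), seconds
        duration: float                      # required observation time, seconds

    def greedy_plan(requests: List[Request]):
        """Schedule the highest-priority requests first, taking the earliest feasible
        slot that does not overlap with observations already planned."""
        plan, busy = [], []
        for req in sorted(requests, key=lambda r: -r.priority):
            for start, end in sorted(req.windows):
                t = start
                while t + req.duration <= end:
                    if all(t + req.duration <= s or t >= e for s, e in busy):
                        plan.append((req.target, t, t + req.duration))
                        busy.append((t, t + req.duration))
                        break
                    # jump past the earliest conflicting observation and try again
                    t = min(e for s, e in busy if not (t + req.duration <= s or t >= e))
                else:
                    continue   # no slot in this window, try the next one
                break          # scheduled, stop scanning windows for this request
        return sorted(plan, key=lambda p: p[1])

    # illustrative use: an emergency fire target competing with routine monitoring
    print(greedy_plan([
        Request("routine_patch_A", 1, [(0, 600)], 120),
        Request("australia_fire",  5, [(60, 300)], 180),
        Request("routine_patch_B", 1, [(100, 500)], 120),
    ]))
    ```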

  16. Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.

    PubMed

    Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L

    2014-08-13

    The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results. Copyright © 2014 the authors 0270-6474/14/3410844-09$15.00/0.

  17. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and grayscale fusion of the source images is performed in the curvelet domain using rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets onto the grayscale result in a suitable pseudo-color. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  18. Targeting the link between loneliness and paranoia via an interventionist-causal model framework.

    PubMed

    Gollwitzer, Anton; Wilczynska, Magdalena; Jaya, Edo S

    2018-05-01

    Targeting the antecedents of paranoia may be one potential method to reduce or prevent paranoia. For instance, targeting a potential antecedent of paranoia - loneliness - may reduce paranoia. Our first research question was whether loneliness heightens subclinical paranoia and whether negative affect may mediate this effect. Second, we wondered whether this potential effect could be targeted via two interventionist pathways in line with an interventionist-causal model approach: (1) decreasing loneliness, and (2) intervening on the potential mediator - negative affect. In Study 1 (N = 222), recollecting an experience of companionship reduced paranoia in participants high in pre-manipulation paranoia but not in participants low in pre-manipulation paranoia. Participants recollecting an experience of loneliness, on the other hand, exhibited increased paranoia, and this effect was mediated by negative affect. In Study 2 (N = 196), participants who utilized an emotion-regulation strategy, cognitive reappraisal, to regulate the negative affect associated with loneliness successfully attenuated the effect of loneliness on paranoia. Targeting the effect of loneliness on paranoia by identifying interventionist pathways may be one promising route for reducing and preventing subclinical paranoia. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Application of a post-collisional-interaction distorted-wave model for (e, 2e) of some atomic targets and methane

    NASA Astrophysics Data System (ADS)

    Chinoune, M.; Houamer, S.; Dal Cappello, C.; Galstyan, A.

    2016-10-01

    Recently Isik et al (2016 J. Phys B: At. Mol. Opt. Phys. 49 065203) performed measurements of the triple differential cross sections (TDCSs) of methane by electron impact. Their data clearly show that post-collisional interaction (PCI) effects are present in the angular distributions of ejected electrons. A model describing the ejected electron by a distorted wave and including PCI is applied for the single ionization of atomic targets and for methane. Extensive comparisons between this model and other previous models are made with available experiments.

  20. Charged Particle Identification for Prefragmentation Studies

    NASA Astrophysics Data System (ADS)

    Hu, Jonathan; MoNA Collaboration

    2017-09-01

    Projectile fragmentation uses high energy (>50 MeV/u) heavy ion beams on production targets to generate intermediate-mass projectile and target fragments at facilities like the NSCL, FRIB, GSI, GANIL and RIKEN. The resulting secondary beams can be isolated by fragment separators like the NSCL's A1900, and the secondary beam is then used on reaction targets for a variety of experiments. Predictions of beam intensities for experiment planning depend on models and data. The MoNA Collaboration performed an experiment at the NSCL in which a 48Ca primary beam was used with a 9Be target to produce a 32Mg secondary beam with energy 86 MeV/u that was incident on a second target of 9Be. By characterizing the energy distributions of final fragments of neon, sodium, and fluorine in coincidence with neutrons created both by prefragmentation processes and by reaction mechanisms, we are able to extract information about prefragmentation dynamics. The identification of charged fragments is a multi-step process crucial to this analysis. This work is supported by the National Science Foundation under Grant No. PHY-1613429.

  1. Model-independent comparison of annual modulation and total rate with direct detection experiments

    NASA Astrophysics Data System (ADS)

    Kahlhoefer, Felix; Reindl, Florian; Schäffner, Karoline; Schmidt-Hoberg, Kai; Wild, Sebastian

    2018-05-01

    The relative sensitivity of different direct detection experiments depends sensitively on the astrophysical distribution and particle physics nature of dark matter, prohibiting a model-independent comparison. The situation changes fundamentally if two experiments employ the same target material. We show that in this case one can compare measurements of an annual modulation and exclusion bounds on the total rate while making no assumptions on astrophysics and no (or only very general) assumptions on particle physics. In particular, we show that the dark matter interpretation of the DAMA/LIBRA signal can be conclusively tested with COSINUS, a future experiment employing the same target material. We find that if COSINUS excludes a dark matter scattering rate of about 0.01 kg‑1 days‑1 with an energy threshold of 1.8 keV and resolution of 0.2 keV, it will rule out all explanations of DAMA/LIBRA in terms of dark matter scattering off sodium and/or iodine.

  2. Fixed-target hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2015-08-01

    Results from fixed-target hadroproduction experiments (HARP, MIPP, NA49 and NA61/SHINE) as well as their implications for cosmic ray and neutrino physics are reviewed. HARP measurements have been used for predictions of neutrino beams in K2K and MiniBooNE/SciBooNE experiments and are also being used to improve predictions of the muon yields in EAS and of the atmospheric neutrino fluxes as well as to help in the optimization of neutrino factory and super-beam designs. Recent measurements released by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment and for interpretation of EAS data. These hadroproduction experiments provide also a large amount of input for validation and tuning of hadron production models in Monte-Carlo generators.

  3. Evaluation of the Performance of the Distributed Phased-MIMO Sonar.

    PubMed

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-11

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. A beamforming technique is used at the receiving end to enhance weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localizing a target in at-lake experiments.
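    As an illustration of this kind of detector, the sketch below implements a generic matched-subspace GLRT for a known waveform matrix with unknown complex amplitudes in white Gaussian noise. This textbook form is an assumption for illustration, not the paper's exact statistic.

    ```python
    import numpy as np

    def glrt_statistic(x, S, noise_var):
        """Energy of the data projected onto the signal subspace, normalized by the
        noise variance. Columns of S are the echoes expected from each transmitter."""
        coeffs, *_ = np.linalg.lstsq(S, x, rcond=None)   # ML amplitudes under H1
        return np.linalg.norm(S @ coeffs) ** 2 / noise_var

    # toy example: two transmit waveforms, target present with unknown amplitudes
    rng = np.random.default_rng(0)
    n = 256
    S = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
    amps = np.array([0.8 + 0.3j, -0.5 + 0.6j])
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5)
    x = S @ amps + noise
    print("GLRT statistic, target present:", glrt_statistic(x, S, noise_var=1.0))
    print("GLRT statistic, noise only:    ", glrt_statistic(noise, S, noise_var=1.0))
    ```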

  4. Evaluation of the Performance of the Distributed Phased-MIMO Sonar

    PubMed Central

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-01

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. A beamforming technique is used at the receiving end to enhance weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localizing a target in at-lake experiments. PMID:28085071

  5. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments designed to simulate asteroid collisions attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to targets considerably smaller than the modelled objects. It is therefore necessary to develop a methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained with a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  6. Top-attack modeling and automatic target detection using synthetic FLIR scenery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2004-09-01

    A series of experiments have been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include: MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect imbedded synthetic targets in scenes, and an ARL scoring algorithm, using Receiver-Operating-Characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that using these tools, a detailed, physically meaningful, target detection analysis is possible and that scenario specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competitive detection algorithms by providing well defined, and controllable target detection scenarios, as well as for the training and testing of expert human observers.

  7. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh rate of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important mandatory requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets achieved accuracies over 90.0% with five repetitions per trial. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster responses to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
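    The information transfer rates quoted above are conventionally computed with Wolpaw's formula for an N-class selection with accuracy P and trial duration T. The sketch below applies that standard formula to the reported accuracies and response times; since the authors' exact ITR definition may differ slightly, these numbers need not reproduce the reported values exactly.

    ```python
    import math

    def wolpaw_itr(n_classes: int, accuracy: float, trial_time_s: float) -> float:
        """Information transfer rate in bits/min (Wolpaw formulation)."""
        p, n = accuracy, n_classes
        bits_per_trial = (math.log2(n)
                          + p * math.log2(p)
                          + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits_per_trial * 60.0 / trial_time_s

    print(round(wolpaw_itr(4, 0.903, 3.65), 1), "bits/min for a 4-class model at 90.3% in 3.65 s")
    print(round(wolpaw_itr(6, 0.913, 6.60), 1), "bits/min for a 6-class model at 91.3% in 6.6 s")
    ```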

  8. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots

    PubMed Central

    Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh rate of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important mandatory requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets achieved accuracies over 90.0% with five repetitions per trial. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster responses to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper. PMID:26562524

  9. A backwards glance at words: Using reversed-interior masked primes to test models of visual word identification

    PubMed Central

    Lupker, Stephen J.

    2017-01-01

    The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model. PMID:29244824

  10. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

    A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence, and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics, and deviations were calculated to be ~16% and ~26% of the mean for DD and D³He fusion protons, respectively. From individual fluence images, it was found that angular frequencies of ≳50 rad⁻¹ contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nucl. Instrum. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲10% of the analytic plasma model in the example scenario.

  11. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a highly challenging task. The main challenge is to account for appearance changes of an object that is submerged in a cluttered background. An efficient appearance model that exploits both a global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value, assigning more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by computing the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps together form a representation that preserves the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model, and the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are used to generate the particles' importance weights. Experiments on various infrared sequences validate the tracking capability of the presented algorithm. Experimental results show that the algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  12. Adiabatic model and design of a translating field reversed configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Intrator, T. P.; Siemon, R. E.; Sieck, P. E.

    We apply an adiabatic evolution model to predict the behavior of a field reversed configuration (FRC) during decompression and translation, as well as during boundary compression. Semi-empirical scaling laws, which were developed and benchmarked primarily for collisionless FRCs, are expected to remain valid even for the collisional regime of the FRX-L experiment. We use this approach to outline the design implications for FRX-L, the high density translated FRC experiment at Los Alamos National Laboratory. A conical theta coil is used to accelerate the FRC to the largest practical velocity so it can enter a mirror-bounded compression region, where it must be a suitable target for a magnetized target fusion (MTF) implosion. FRX-L provides the physics basis for the integrated MTF plasma compression experiment at the Shiva-Star pulsed power facility at the Kirtland Air Force Research Laboratory, where the FRC will be compressed inside a flux-conserving cylindrical shell.

  13. Design and Analysis of a Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. The objective of a static aeroelastic experiment is to investigate the deformation and loads of an elastic structure in a flow field. Generally, a prerequisite of such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of a real aircraft are both uncertain. The form of the stiffness distribution is obtained via finite element modeling and simulation, and F141 steel and rigid foam are used to build the elastic model. The design and manufacturing process of static aeroelastic models is presented: an experimental model was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design of the elastic model. The paper describes the whole process of the static aeroelastic experiment and analyzes the experimental results. A static aeroelastic experiment technique was thereby developed and an experimental model established, targeting the swept wing of a large-aspect-ratio aircraft.

  14. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    ERIC Educational Resources Information Center

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…

  15. Optical model analyses of galactic cosmic ray fragmentation in hydrogen targets

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.

    1993-01-01

    Quantum-mechanical optical model methods for calculating cross sections for the fragmentation of galactic cosmic ray nuclei by hydrogen targets are presented. The fragmentation cross sections are calculated with an abrasion-ablation collision formalism. Elemental and isotopic cross sections are estimated and compared with measured values for neon, sulfur, and calcium ions at incident energies between 400A MeV and 910A MeV. Good agreement between theory and experiment is obtained.

  16. A Reconstruction Algorithm for Breast Cancer Imaging With Electrical Impedance Tomography in Mammography Geometry

    PubMed Central

    Kao, Tzu-Jen; Isaacson, David; Saulnier, Gary J.; Newell, Jonathan C.

    2009-01-01

    The conductivity and permittivity of breast tumors are known to differ significantly from those of normal breast tissues, and electrical impedance tomography (EIT) is being studied as a modality for breast cancer imaging to exploit these differences. At present, X-ray mammography is the primary standard imaging modality used for breast cancer screening in clinical practice, so it is desirable to study EIT in the geometry of mammography. This paper presents a forward model of a simplified mammography geometry and a reconstruction algorithm for breast tumor imaging using EIT techniques. The mammography geometry is modeled as a rectangular box with electrode arrays on the top and bottom planes. A forward model for the electrical impedance imaging problem is derived for a homogeneous conductivity distribution and is validated by experiment using a phantom tank. A reconstruction algorithm for breast tumor imaging based on a linearization approach and the proposed forward model is presented. It is found that the proposed reconstruction algorithm performs well in the phantom experiment, and that the locations of a 5-mm-cube metal target and a 6-mm-cube agar target could be recovered at a target depth of 15 mm using a 32 electrode system. PMID:17405377
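    A one-step linearized reconstruction of the kind described (a Tikhonov-regularized least-squares update around a homogeneous background) can be sketched as follows. The random Jacobian stands in for the paper's rectangular-box forward model and is purely illustrative.

    ```python
    import numpy as np

    def linearized_eit_step(J, dv, reg=1e-3):
        """One linearized EIT update: solve (J^T J + reg*I) dsigma = J^T dv.

        J  : sensitivity (Jacobian) matrix of the measured voltages with respect to
             pixel conductivities, computed from a forward model of the homogeneous medium
        dv : measured voltages minus voltages predicted for the homogeneous medium
        """
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + reg * np.eye(n), J.T @ dv)

    # toy stand-in: random Jacobian and a single "tumor" pixel perturbation
    rng = np.random.default_rng(0)
    J = rng.standard_normal((200, 64))        # 200 measurements, an 8x8 pixel grid
    dsigma_true = np.zeros(64); dsigma_true[27] = 1.0
    dv = J @ dsigma_true + 0.01 * rng.standard_normal(200)
    dsigma = linearized_eit_step(J, dv)
    print("recovered peak at pixel", int(np.argmax(dsigma)))
    ```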

  17. Spall response of annealed copper to direct explosive loading

    NASA Astrophysics Data System (ADS)

    Finnegan, S. G.; Burns, M. J.; Markland, L.; Goff, M.; Ferguson, J. W.

    2017-01-01

    Taylor wave spall experiments were conducted on annealed copper targets using direct explosive loading. The targets were mounted on the back of an explosive disc which was being used for a shock to detonation transition (SDT) test in a gas gun. This technique allows two experiments to be conducted with one piece of explosive. Explosive loading creates a high stress state within the target with a lower strain rate than an equivalent plate impact experiment, although the shock front will also have some curvature. Three shots were performed on two differently annealed batches of copper to investigate the viability of the technique and the effect of annealing on the spall response. One pair of targets was annealed at 850°C for four hours and the other target was annealed at 600°C for one hour. The free surface velocity (FSV) profiles were recorded using a Photonic Doppler Velocimetry (PDV) probe focused on the center of the target. The profiles were compared to predictions from the CREST reactive burn model. One profile recorded a significantly lower peak velocity which was attributed to the probe being located off center. Despite this, all three calculated spall strengths closely agreed and it was concluded that the technique is a viable one for loading an inert target.

  18. The proper treatment of language acquisition and change in a population setting.

    PubMed

    Niyogi, Partha; Berwick, Robert C

    2009-06-23

    Language acquisition maps linguistic experience, primary linguistic data (PLD), onto linguistic knowledge, a grammar. Classically, computational models of language acquisition assume a single target grammar and one PLD source, the central question being whether the target grammar can be acquired from the PLD. However, real-world learners confront populations with variation, i.e., multiple target grammars and PLDs. Removing this idealization has inspired a new class of population-based language acquisition models. This paper contrasts 2 such models. In the first, iterated learning (IL), each learner receives PLD from one target grammar but different learners can have different targets. In the second, social learning (SL), each learner receives PLD from possibly multiple targets, e.g., from 2 parents. We demonstrate that these 2 models have radically different evolutionary consequences. The IL model is dynamically deficient in 2 key respects. First, the IL model admits only linear dynamics and so cannot describe phase transitions, attested rapid changes in languages over time. Second, the IL model cannot properly describe the stability of languages over time. In contrast, the SL model leads to nonlinear dynamics, bifurcations, and possibly multiple equilibria and so suffices to model both the case of stable language populations, mixtures of more than 1 language, as well as rapid language change. The 2 models also make distinct, empirically testable predictions about language change. Using historical data, we show that the SL model more faithfully replicates the dynamics of the evolution of Middle English.

  19. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with the number of items in the display, but only linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
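    The linear-in-display-size, logarithmic-in-memory-size pattern can be summarized by a simple descriptive model; the functional form and the coefficients below are placeholders for illustration, not values fitted to the paper's data.

    ```python
    import math

    def hybrid_search_rt(visual_set_size: int, memory_set_size: int,
                         a: float = 400.0, b: float = 25.0, c: float = 40.0) -> float:
        """RT (ms) = a + visual_set_size * (b + c * log2(memory_set_size)):
        linear in the number of displayed items, with a per-item slope that grows
        with the log of the number of targets held in memory."""
        return a + visual_set_size * (b + c * math.log2(memory_set_size))

    for mem in (1, 2, 4, 8, 16):
        rts = [round(hybrid_search_rt(v, mem)) for v in (4, 8, 16)]
        print(f"memory set size {mem:2d}: RTs for 4/8/16 display items = {rts} ms")
    ```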

  20. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    PubMed

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.

  1. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    PubMed

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. ISMIP6: Ice Sheet Model Intercomparison Project for CMIP6

    NASA Technical Reports Server (NTRS)

    Nowicki, S.

    2015-01-01

    ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6) targets the Cryosphere in a Changing Climate and the Future Sea Level Grand Challenges of the WCRP (World Climate Research Program). The primary goal is to provide the future sea level contribution from the Greenland and Antarctic ice sheets, along with the associated uncertainty. A secondary goal is to investigate feedbacks due to dynamic ice sheet models. The experiment design uses and augments the existing CMIP6 (Coupled Model Intercomparison Project Phase 6) DECK (Diagnosis, Evaluation, and Characterization of Klima) experiments. Additional MIP (Model Intercomparison Project)-specific experiments will be designed for ISMs (Ice Sheet Models). The effort builds on the Ice2sea, SeaRISE (Sea-level Response to Ice Sheet Evolution) and COMBINE (Comprehensive Modelling of the Earth System for Better Climate Prediction and Projection) efforts.

  3. Selective binding of lectins to normal and neoplastic urothelium in rat and mouse bladder carcinogenesis models.

    PubMed

    Zupančič, Daša; Kreft, Mateja Erdani; Romih, Rok

    2014-01-01

    Bladder cancer adjuvant intravesical therapy could be optimized by more selective targeting of neoplastic tissue via specific binding of lectins to plasma membrane carbohydrates. Our aim was to establish rat and mouse models of bladder carcinogenesis to investigate in vivo and ex vivo binding of selected lectins to the luminal surface of normal and neoplastic urothelium. Male rats and mice were treated with 0.05 % N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN) in drinking water and used for ex vivo and in vivo lectin binding experiments. Urinary bladder samples were also used for paraffin embedding, scanning electron microscopy and immunofluorescence labelling of uroplakins. During carcinogenesis, the structure of the urinary bladder luminal surface changed from microridges to microvilli and ropy ridges and the expression of urothelial-specific glycoproteins uroplakins was decreased. Ex vivo and in vivo lectin binding experiments gave comparable results. Jacalin (lectin from Artocarpus integrifolia) exhibited the highest selectivity for neoplastic compared to normal urothelium of rats and mice. The binding of lectin from Amaranthus caudatus decreased in rat model and increased in mouse carcinogenesis model, indicating interspecies variations of plasma membrane glycosylation. Lectin from Datura stramonium showed higher affinity for neoplastic urothelium compared to the normal in rat and mouse model. The BBN-induced animal models of bladder carcinogenesis offer a promising approach for lectin binding experiments and further lectin-mediated targeted drug delivery research. Moreover, in vivo lectin binding experiments are comparable to ex vivo experiments, which should be considered when planning and optimizing future research.

  4. Prospects for distinguishing dark matter models using annual modulation

    DOE PAGES

    Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.

    2017-02-24

    It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra, and may only differ by the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. Lastly, we find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.

  5. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
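    A minimal Monte Carlo version of moment-based GSA (conditional binning on each parameter and comparison of conditional to unconditional moments) is sketched below. The specific index definition, the toy model, and the sample sizes are illustrative assumptions rather than the authors' exact formulation.

    ```python
    import numpy as np
    from scipy import stats

    def moment_based_gsa(params, outputs, n_bins=20):
        """For each parameter, measure how strongly conditioning on it shifts the mean,
        variance, skewness and kurtosis of the model output (mean absolute relative
        change of the conditional moment across quantile bins)."""
        moments = {"mean": np.mean, "variance": np.var,
                   "skewness": stats.skew, "kurtosis": stats.kurtosis}
        ref = {name: f(outputs) for name, f in moments.items()}
        indices = {}
        for j in range(params.shape[1]):
            edges = np.quantile(params[:, j], np.linspace(0, 1, n_bins + 1))
            bins = np.clip(np.digitize(params[:, j], edges[1:-1]), 0, n_bins - 1)
            indices[j] = {name: float(np.mean(
                [abs(f(outputs[bins == b]) - ref[name]) / (abs(ref[name]) + 1e-12)
                 for b in range(n_bins)])) for name, f in moments.items()}
        return indices

    # toy model: the output depends strongly on x0, weakly on x1, and not at all on x2
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(20000, 3))
    y = np.exp(2.0 * X[:, 0]) + 0.3 * X[:, 1]
    for j, idx in moment_based_gsa(X, y).items():
        print(f"x{j}:", {k: round(v, 3) for k, v in idx.items()})
    ```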

  6. Biophysically inspired model for functionalized nanocarrier adhesion to cell surface: roles of protein expression and mechanical factors

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, N.; Tourdot, Richard W.; Eckmann, David M.; Ayyaswamy, Portonovo S.; Muzykantov, Vladimir R.; Radhakrishnan, Ravi

    2016-06-01

    In order to achieve selective targeting of affinity-ligand coated nanoparticles to the target tissue, it is essential to understand the key mechanisms that govern their capture by the target cell. Next-generation pharmacokinetic (PK) models that systematically account for proteomic and mechanical factors can accelerate the design, validation and translation of targeted nanocarriers (NCs) in the clinic. Towards this objective, we have developed a computational model to delineate the roles played by target protein expression and mechanical factors of the target cell membrane in determining the avidity of functionalized NCs to live cells. Model results show quantitative agreement with in vivo experiments when specific and non-specific contributions to NC binding are taken into account. The specific contributions are accounted for through extensive simulations of multivalent receptor-ligand interactions, membrane mechanics and entropic factors such as membrane undulations and receptor translation. The computed NC avidity is strongly dependent on ligand density, receptor expression, bending mechanics of the target cell membrane, as well as entropic factors associated with the membrane and the receptor motion. Our computational model can predict the in vivo targeting levels of the intracellular adhesion molecule-1 (ICAM1)-coated NCs targeted to the lung, heart, kidney, liver and spleen of the mouse, when the contributions due to endothelial capture are accounted for. The effect of other cells (such as monocytes) does not improve the model predictions at steady state. We demonstrate the predictive utility of our model by predicting partitioning coefficients of functionalized NCs in mice and human tissues and report the statistical accuracy of our model predictions under different scenarios.

  7. Shock-induced damage in rocks: Application to impact cratering

    NASA Astrophysics Data System (ADS)

    Ai, Huirong

    Shock-induced damage beneath impact craters is studied in this work. Two representative terrestrial rocks, San Marcos granite and Bedford limestone, are chosen as test targets. Impacts into the rock targets with different combinations of projectile material, size, impact angle, and impact velocity are carried out at cm scale in the laboratory. Shock-induced damage and fracturing cause a large-scale reduction of the compressional wave velocity in the recovered target beneath the impact crater, and the damage is measured by mapping this velocity reduction. A cm-scale nondestructive tomography technique was developed for this purpose. The technique proves effective in mapping the damage in San Marcos granite, and the inverted velocity profile is in very good agreement with the results obtained by dicing the target and by cutting it open directly. Both compressional velocity and attenuation are measured in three orthogonal directions on cubes prepared from one granite target impacted by a lead bullet at 1200 m/s. Anisotropy is observed in both results, but attenuation appears to be a more useful parameter than acoustic velocity for studying the orientation of cracks. Our experiments indicate that the shock-induced damage is a function of impact conditions, including projectile type and size, impact velocity, and target properties. Combined with other crater phenomena such as crater diameter, depth, and ejecta, shock-induced damage can serve as an important yet not well recognized constraint on impact history. The shock-induced damage is also calculated numerically and compared with the experiments for a few representative shots. The Johnson-Holmquist (JH) strength and failure model, initially developed for ceramics, is applied to the geological materials; strength is a complicated function of pressure, strain, strain rate, and damage. The JH model, coupled with a crack softening model, is used to describe both the inelastic response of rocks in the compressive field near the impact source and the tensile failure in the far field. The model parameters are determined either from direct static measurements or by indirect numerical adjustment. The agreement between simulation and experiment is very encouraging.

  8. Comparison of hydrodynamic simulations with two-shockwave drive target experiments

    NASA Astrophysics Data System (ADS)

    Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William

    2015-11-01

    We consider hydrodynamic continuum simulations that mimic ejecta generation in two-shockwave target experiments, in which a metallic surface is loaded by two successive shock waves. The time of the second shock in the simulations is chosen to match the experimental amplitudes at the arrival of the second shock. The negative Atwood number (A --> -1) of the ejecta simulations leads to two successive phase inversions of the interface, corresponding to the passage of each shock from the heavy to the light medium. The metallic phase of the ejecta (solid or liquid) depends on the shock loading pressure in the experiment, and we find that hydrodynamic simulations quantify the liquid-phase ejecta physics with a fair degree of accuracy, since in that regime the RM instability is not suppressed by the strength effect. In particular, we find that our results for the free surface velocity, maximum ejecta velocity, and maximum ejecta areal density are in excellent agreement with their experimental counterparts, as well as with ejecta models. We also comment on the region of parameter space in which hydrodynamic simulations can be usefully compared with the target experiments.
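    For reference, with the usual Richtmyer-Meshkov convention the Atwood number for an interface between an incident medium of density rho_i and a transmitted medium of density rho_t is

    ```latex
    A = \frac{\rho_t - \rho_i}{\rho_t + \rho_i}, \qquad A \to -1 \ \text{as } \rho_t \to 0 ,
    ```

    so a shock passing from a dense metal into a tenuous gas or vacuum corresponds to the A --> -1 limit referred to above.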

  9. The TARGET project in Tuscany: the first disease management model of a regional project for the prevention of hip re-fractures in the elderly.

    PubMed

    Piscitelli, Prisco; Brandi, Maria Luisa; Nuti, Ranuccio; Rizzuti, Carla; Giorni, Loredano; Giovannini, Valtere; Metozzi, Alessia; Merlotti, Daniela

    2010-09-01

    The official inquiry on osteoporosis in Italy, promoted by the Italian Senate in 2002, concluded that proper preventive strategies should be adopted at the regional level in order to prevent osteoporotic fractures. Tuscany is the first Italian region to have promoted an official program (the TARGET project) aimed at reducing osteoporotic fractures by ensuring adequate treatment for all people aged ≥65 years who experience a hip fragility fracture. This paper provides information concerning the implementation of the TARGET project in Tuscany, on the assumption that it may represent a useful model for similar experiences to be promoted in other Italian regions and across Europe. We examined the model proposed for the regional program and, in particular, analyzed the in-hospital and post-hospitalization path of hip-fractured patients aged >65 years in Tuscany after the adoption of the TARGET project by the Tuscan healthcare system and during its ongoing start-up phase. Orthopaedic surgeons have been gradually involved in the project and are increasingly fulfilling the clinical prescriptions and recommendations provided in the project protocol. Different forms of cooperation between orthopaedic surgeons and other clinical specialists have been adopted at each hospital for the treatment of hip-fractured elderly patients. GPs' involvement needs to be fostered at both the regional and the local level. The effort of the Tuscany region to cope with hip fractures suffered by elderly people must be acknowledged as an interesting way of addressing this critical health problem. Specific preventive strategies modelled on the Tuscany TARGET project should be implemented in other Italian regions.

  10. Masked Inhibitory Priming in English: Evidence for Lexical Inhibition

    ERIC Educational Resources Information Center

    Davis, Colin J.; Lupker, Stephen J.

    2006-01-01

    Predictions derived from the interactive activation (IA) model were tested in 3 experiments using the masked priming technique in the lexical decision task. Experiment 1 showed a strong effect of prime lexicality: Classifications of target words were facilitated by orthographically related nonword primes (relative to unrelated nonword primes) but…

  11. Distinguishing bias from sensitivity effects in multialternative detection tasks.

    PubMed

    Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I

    2014-08-21

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.

  12. Distinguishing bias from sensitivity effects in multialternative detection tasks

    PubMed Central

    Sridharan, Devarajan; Steinmetz, Nicholas A.; Moore, Tirin; Knudsen, Eric I.

    2014-01-01

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. PMID:25146574

  13. A Model for the Application of Target-Controlled Intravenous Infusion for a Prolonged Immersive DMT Psychedelic Experience

    PubMed Central

    Gallimore, Andrew R.; Strassman, Rick J.

    2016-01-01

    The state of consciousness induced by N,N-dimethyltryptamine (DMT) is one of the most extraordinary of any naturally-occurring psychedelic substance. Users consistently report the complete replacement of normal subjective experience with a novel “alternate universe,” often densely populated with a variety of strange objects and other highly complex visual content, including what appear to be sentient “beings.” The phenomenology of the DMT state is of great interest to psychology and calls for rigorous academic enquiry. The extremely short duration of DMT effects—less than 20 min—militates against single dose administration as the ideal model for such enquiry. Using pharmacokinetic modeling and DMT blood sampling data, we demonstrate that the unique pharmacological characteristics of DMT, which also include a rapid onset and lack of acute tolerance to its subjective effects, make it amenable to administration by target-controlled intravenous infusion. This is a technology developed to maintain a stable brain concentration of anesthetic drugs during surgery. Simulations of our model demonstrate that this approach will allow research subjects to be induced into a stable and prolonged DMT experience, making it possible to carefully observe its psychological contents, and provide more extensive accounts for subsequent analyses. This model would also be valuable in performing functional neuroimaging, where subjects are required to remain under the influence of the drug for extended periods. Finally, target-controlled intravenous infusion of DMT may aid the development of unique psychotherapeutic applications of this psychedelic agent. PMID:27471468
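
    As an illustration of the target-controlled infusion idea described above, the minimal sketch below simulates a one-compartment pharmacokinetic model in which a loading dose is followed by a constant maintenance infusion that replaces eliminated drug. The compartment structure and all parameter values are hypothetical placeholders, not the published Gallimore-Strassman DMT model.

```python
import numpy as np

# Hypothetical one-compartment sketch of target-controlled infusion (TCI):
# give a loading dose to reach the target concentration, then infuse at the
# rate needed to replace what is eliminated.  V, k_el, and the target value
# are illustrative placeholders, not the published DMT pharmacokinetic model.
V = 30.0        # distribution volume (L)
k_el = 0.3      # elimination rate constant (1/min)
target = 60.0   # desired plasma concentration (ng/mL)

loading_dose = target * V * 1e3              # ng, fills the compartment to the target
maintenance_rate = k_el * target * V * 1e3   # ng/min, replaces eliminated drug

dt = 0.01
t = np.arange(0.0, 30.0, dt)                 # minutes
conc = np.zeros_like(t)
conc[0] = loading_dose / (V * 1e3)           # concentration right after the bolus

for i in range(1, len(t)):
    # one-compartment kinetics: dC/dt = infusion/(V in mL) - k_el * C
    dC = maintenance_rate / (V * 1e3) - k_el * conc[i - 1]
    conc[i] = conc[i - 1] + dC * dt

print(f"concentration held near {conc[-1]:.1f} ng/mL (target {target}) for {t[-1]:.0f} min")
```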

  14. Study and Design of High G Augmentation Devices for Flight Simulators

    DTIC Science & Technology

    1981-12-01

    experiments. Non-invasive blood pressure monitoring devices are discussed in a following section (4.2.4). It may be useful to conduct these experiments ...have experience in pressure suits and space suits. They also built a collapsible LBNP for Cooper and Ord (51) for their LBNP experiments. USE OF LBNP...the target illumination approaches the 42 mL level used in his dial reading experiments. Consequently, the model requires illumination level as an

  15. Extinction with multiple excitors

    PubMed Central

    McConnell, Bridget L.; Miguez, Gonzalo; Miller, Ralph R.

    2012-01-01

    Four conditioned suppression experiments with rats, using an ABC renewal design, investigated the effects of compounding the target conditioned excitor with additional, nontarget conditioned excitors during extinction. Experiment 1 showed stronger extinction, as evidenced by less renewal, when the target excitor was extinguished in compound with a second excitor, relative to when it was extinguished with associatively neutral stimuli. Critically, this deepened extinction effect was attenuated (i.e., more renewal occurred) when a third excitor was added during extinction training. This novel demonstration contradicts the predictions of associative learning models based on total error reduction, but it is explicable in terms of a counteraction effect within the framework of the extended comparator hypothesis. The attenuated deepened extinction effect was replicated in Experiments 2a and 3, which also showed that pretraining consisting of weakening the association between the two additional excitors (Experiments 2a and 2b) or weakening the association between one of the additional excitors and the unconditioned stimulus (Experiment 3) attenuated the counteraction effect, thereby resulting in a decrease in responding to the target excitor. These results suggest that more than simple total error reduction determines responding after extinction. PMID:23055103

  16. Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors.

    PubMed

    Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J

    2015-02-20

    Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The new proposed priors are able to adaptively utilize the researcher's previous experience and current accrual data to produce an estimate of the trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulations. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing smaller mean squared error, a high percentage of coverage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when accrual is off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when the accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.
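
    The sketch below illustrates the flavor of such a constant-accrual prediction: exponential waiting times between enrollments (Poisson accrual) with a conjugate Gamma prior on the accrual rate, used to simulate the posterior predictive completion time. The prior weight, enrollment numbers, and planned duration are invented for illustration and do not correspond to the accelerated or hedging priors proposed in the paper.

```python
import numpy as np

# Illustrative Bayesian constant-accrual prediction with a conjugate Gamma
# prior on the accrual rate.  Enrollment numbers, planned duration, and the
# prior weight are invented; this is not the paper's accelerated/hedging prior.
rng = np.random.default_rng(0)

n_target = 300                      # planned total enrollment
T_planned = 3.0                     # planned accrual period (years)
prior_rate = n_target / T_planned   # prior guess of the accrual rate (subjects/year)

P = 1.0                             # prior weight, in "years of hypothetical data"
a0, b0 = prior_rate * P, P          # Gamma(shape, rate) prior on the accrual rate

n_obs, t_obs = 80, 1.2              # observed: 80 subjects in the first 1.2 years

a_post, b_post = a0 + n_obs, b0 + t_obs               # conjugate posterior
rate_draws = rng.gamma(a_post, 1.0 / b_post, 10000)

remaining = n_target - n_obs
t_remaining = rng.gamma(remaining, 1.0 / rate_draws)  # posterior predictive waiting time
t_total = t_obs + t_remaining

print(f"median predicted completion time: {np.median(t_total):.2f} years")
print(f"95% interval (years): {np.percentile(t_total, [2.5, 97.5]).round(2)}")
```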

  17. Using a filtering task to measure the spatial extent of selective attention

    PubMed Central

    Palmer, John; Moore, Cathleen M.

    2009-01-01

    The spatial extent of attention was investigated by measuring sensitivity to stimuli at to-be-ignored locations. Observers detected a stimulus at a cued location (target), while ignoring otherwise identical stimuli at nearby locations (foils). Only an attentional cue distinguished target from foil. Several experiments varied the contrast and separation of targets and foils. Two theories of selection were compared: contrast gain and a version of attention switching called an all-or-none mixture model. Results included large effects of separation, rejection of the contrast gain model, and the measurement of the size and profile of the spatial extent of attention. PMID:18405935

  18. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring relies on a large number of identical cameras. To ensure full coverage of the monitoring area, this approach uses many cameras, which produces overlapping, redundant coverage, higher costs, and wasted resources. In order to reduce the monitoring cost and to address the difficult problem of finding, identifying, and tracking a low-altitude, slow-speed, small target, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiments demonstrate that the proposed method performs well.

  19. The proximate unit in Chinese handwritten character production

    PubMed Central

    Chen, Jenn-Yeu; Cherng, Rong-Ju

    2013-01-01

    In spoken word production, a proximate unit is the first phonological unit at the sublexical level that is selectable for production (O'Seaghdha et al., 2010). The present study investigated whether the proximate unit in Chinese handwritten character production is the stroke, the radical, or something in between. A written version of the form preparation task was adopted. Chinese participants learned sets of two-character words, later were cued with the first character of each word, and had to write down the second character (the target). Response times were measured from the onset of a cue character to the onset of a written response. In Experiment 1, the target characters within a block shared (homogeneous) or did not share (heterogeneous) the first stroke. In Experiment 2, the first two strokes were shared in the homogeneous blocks. Response times in the homogeneous blocks and in the heterogeneous blocks were comparable in both experiments (Experiment 1: 687 vs. 684 ms; Experiment 2: 717 vs. 716 ms). In Experiments 3 and 4, the target characters within a block shared or did not share the first radical. Response times in the homogeneous blocks were significantly faster than those in the heterogeneous blocks (Experiment 3: 685 vs. 704 ms; Experiment 4: 594 vs. 650 ms). In Experiments 5 and 6, the shared component was a Gestalt-like form that is more than a stroke, constitutes a portion of the target character, can be a stand-alone character itself, and can be a radical of another character but is not a radical of the target character (e.g., ± in , , , ; called a logographeme). Response times in the homogeneous blocks were significantly faster than those in the heterogeneous blocks (Experiment 5: 576 vs. 625 ms; Experiment 6: 586 vs. 620 ms). These results suggest a model of Chinese handwritten character production in which the stroke is not a functional unit, the radical plays the role of a morpheme, and the logographeme is the proximate unit. PMID:23950752

  20. The role of spatial attention in visual word processing

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Folk, Charles L.; Johnston, James C.

    1992-01-01

    Subjects made lexical decisions on a target letter string presented above or below fixation. In Experiments 1 and 2, target location was cued 100 ms in advance of target onset. Responses were faster on validly than on invalidly cued trials. In Experiment 3, the target was sometimes accompanied by irrelevant stimuli on the other side of fixation; in such cases, responses were slowed (a spatial filtering effect). Both cuing and filtering effects on response time were additive with effects of word frequency and lexical status (words vs. nonwords). These findings are difficult to reconcile with claims that spatial attention is less involved in processing familiar words than in unfamiliar words and nonwords. The results can be reconciled with a late-selection locus of spatial attention only with difficulty, but are easily explained by early-selection models.

  1. Planning of reach-and-grasp movements: effects of validity and type of object information

    NASA Technical Reports Server (NTRS)

    Loukopoulos, L. D.; Engelbrecht, S. F.; Berthier, N. E.

    2001-01-01

    Individuals are assumed to plan reach-and-grasp movements by using two separate processes. In 1 of the processes, extrinsic (direction, distance) object information is used in planning the movement of the arm that transports the hand to the target location (transport planning); whereas in the other, intrinsic (shape) object information is used in planning the preshaping of the hand and the grasping of the target object (manipulation planning). In 2 experiments, the authors used primes to provide information to participants (N = 5, Experiment 1; N = 6, Experiment 2) about extrinsic and intrinsic object properties. The validity of the prime information was systematically varied. The primes were succeeded by a cue, which always correctly identified the location and shape of the target object. Reaction times were recorded. Four models of transport and manipulation planning were tested. The only model that was consistent with the data was 1 in which arm transport and object manipulation planning were postulated to be independent processes that operate partially in parallel. The authors suggest that the processes involved in motor planning before execution are primarily concerned with the geometric aspects of the upcoming movement but not with the temporal details of its execution.

  2. Physics of giant electromagnetic pulse generation in short-pulse laser experiments.

    PubMed

    Poyé, A; Hulin, S; Bailly-Grandvaux, M; Dubois, J-L; Ribolzi, J; Raffestin, D; Bardon, M; Lubrano-Lavaderci, F; D'Humières, E; Santos, J J; Nicolaï, Ph; Tikhonchuk, V

    2015-04-01

    In this paper we describe the physical processes that lead to the generation of giant electromagnetic pulses (GEMPs) at powerful laser facilities. Our study is based on experimental measurements of both the charging of a solid target irradiated by an ultra-short, ultra-intense laser and the detection of the electromagnetic emission in the GHz domain. An unambiguous correlation between the neutralization current in the target holder and the electromagnetic emission shows that the source of the GEMP is the remaining positive charge inside the target after the escape of fast electrons accelerated by the ultra-intense laser. A simple model for calculating this charge in the thick target case is presented. From this model and knowing the geometry of the target holder, it becomes possible to estimate the intensity and the dominant frequencies of the GEMP at any facility.

  3. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    PubMed

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.

  4. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  5. Perception of mind and dehumanization: Human, animal, or machine?

    PubMed

    Morera, María D; Quiles, María N; Correa, Ana D; Delgado, Naira; Leyens, Jacques-Philippe

    2016-08-02

    Dehumanization is reached through several approaches, including the attribute-based model of mind perception and the metaphor-based model of dehumanization. We performed two studies to find different (de)humanized images for three targets: Professional people, Evil people, and Lowest of the low. In Study 1, we examined dimensions of mind, expecting the last two categories to be dehumanized through denial of agency (Lowest of the low) or experience (Evil people), compared with humanized targets (Professional people). Study 2 aimed to distinguish these targets using metaphors. We predicted that Evil and Lowest of the low targets would suffer mechanistic and animalistic dehumanization, respectively; our predictions were confirmed, but the metaphor-based model nuanced these results: animalistic and mechanistic dehumanization were shown as overlapping rather than independent. Evil persons were perceived as "killing machines" and "predators." Finally, Lowest of the low were not animalized but considered human beings. We discuss possible interpretations. © 2016 International Union of Psychological Science.

  6. Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models

    PubMed Central

    Chen, Yang; Shen, Kuang

    2017-01-01

    To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680

  7. [Representation of letter position in visual word recognition process].

    PubMed

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly presented probe. Probes consisted of two kanji words. The letters which formed the targets (critical letters) were always contained in the probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when the critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to the probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position found in Experiment 1, an effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  8. Integration of parallel 13C-labeling experiments and in silico pathway analysis for enhanced production of ascomycin.

    PubMed

    Qi, Haishan; Lv, Mengmeng; Song, Kejing; Wen, Jianping

    2017-05-01

    Herein, a hyper-producing strain for ascomycin was engineered based on 13C-labeling experiments and elementary flux mode analysis (EFMA). First, the metabolism of the non-model organism Streptomyces hygroscopicus var. ascomyceticus SA68 was investigated and an updated network model was reconstructed using 13C metabolic flux analysis. Based on this precise model, EFMA was further employed to predict genetic targets for higher ascomycin production. Chorismatase (FkbO) and pyruvate carboxylase (Pyc) were predicted as promising overexpression and deletion targets, respectively. The corresponding mutants TD-FkbO and TD-ΔPyc showed consistency between model predictions and experimental results. Finally, the combined genetic manipulations were performed, yielding a high-producing ascomycin engineering strain, TD-ΔPyc-FkbO, with production up to 610 mg/L, an 84.8% improvement compared with the parent strain SA68. These results demonstrate that the integration of 13C-labeling experiments and in silico pathway analysis can serve as a promising strategy to enhance the production of ascomycin, as well as other valuable products. Biotechnol. Bioeng. 2017;114: 1036-1044. © 2016 Wiley Periodicals, Inc.

  9. Polarimetric SAR Models for Oil Fields Monitoring in China Seas

    NASA Astrophysics Data System (ADS)

    Buono, A.; Nunziata, F.; Li, X.; Wei, Y.; Ding, X.

    2014-11-01

    In this study, physical-based models for polarimetric Synthetic Aperture Radar (SAR) oil fields monitoring are proposed. They all share a physical rationale relying on the different scattering mechanisms that characterize a free sea surface, an oil slick-covered sea surface, and a metallic target. In fact, sea surface scattering is well modeled by a Bragg-like behaviour, while a strong departure from Bragg scattering is in place when dealing with oil slicks and targets. Furthermore, the proposed polarimetric models aim at addressing simultaneously target and oil slick detection, providing useful extra information with respect to single-pol SAR data in order to approach oil discrimination and classification. Experiments undertaken over East and South China Sea from actual C-band RadarSAT-2 full-pol SAR data witness the soundness of the proposed rationale.

  11. Validating the random search model for two targets of different difficulty.

    PubMed

    Chan, Alan H S; Yu, Ruifeng

    2010-02-01

    A random visual search model was fitted to 1,788 search times obtained from a nonidentical double-target search task. Thirty Hong Kong Chinese (13 men, 17 women) ages 18 to 33 years (M = 23, SD = 6.8) took part in the experiment voluntarily. The overall adequacy and prediction accuracy of the model for various search time parameters (mean and median search times and response times), for both individual and pooled data, show that search strategy may reasonably be inferred from search time distributions. The results also suggested the general applicability of the random search model for describing the search behavior of a large number of participants performing the type of search used here, as well as the practical feasibility of its application for determining a stopping policy to optimize an inspection system design. Although the data generally conformed to the model, the search for the more difficult target was faster than expected. The more difficult target was usually detected after the easier target, and it is suggested that some degree of memory-guided searching may have been used for the second target. Some abnormally long search times were observed, and it is possible that these might have been due to the characteristics of visual lobes, nonoptimum interfixation distances, and inappropriate overlapping of lobes, as has been previously reported.
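
    A minimal sketch of the random-search prediction being tested: a memoryless search gives exponentially distributed search times, so the model can be checked by comparing the median implied by the fitted mean with the observed median. The search times below are synthetic placeholders, not the study's data.

```python
import numpy as np

# Illustrative check of the random (memoryless) search model, which predicts
# exponentially distributed search times, P(T <= t) = 1 - exp(-t / mu).
# The search times below are synthetic placeholders, not the study's data.
rng = np.random.default_rng(1)
search_times = rng.exponential(scale=4.5, size=500) + 0.3   # seconds

mu_hat = search_times.mean()              # fitted mean search time
median_pred = mu_hat * np.log(2.0)        # median implied by the exponential model
median_obs = np.median(search_times)

print(f"fitted mean search time: {mu_hat:.2f} s")
print(f"model-predicted median: {median_pred:.2f} s, observed median: {median_obs:.2f} s")
```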

  12. Rayleigh-Taylor spike evaporation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schappert, G. T.; Batha, S. H.; Klare, K. A.

    2001-09-01

    Laser-based experiments have shown that Rayleigh-Taylor (RT) growth in thin, perturbed copper foils leads to a phase dominated by narrow spikes between thin bubbles. These experiments were well modeled and diagnosed until this "spike" phase, but not into this spike phase. Experiments were designed, modeled, and performed on the OMEGA laser [T. R. Boehly, D. L. Brown, R. S. Craxton et al., Opt. Commun. 133, 495 (1997)] to study the late-time spike phase. To simulate the conditions and evolution of late-time RT, a copper target was fabricated consisting of a series of thin ridges (spikes in cross section) 150 μm apart on a thin flat copper backing. The target was placed on the side of a scale-1.2 hohlraum with the ridges pointing into the hohlraum, which was heated to 190 eV. Side-on radiography imaged the evolution of the ridges and flat copper backing into the typical RT bubble and spike structure, including the "mushroom-like feet" on the tips of the spikes. RAGE computer models [R. M. Baltrusaitis, M. L. Gittings, R. P. Weaver, R. F. Benjamin, and J. M. Budzinski, Phys. Fluids 8, 2471 (1996)] show the formation of the "mushrooms," as well as how the backing material converges to lengthen the spike. The computer predictions of evolving spike and bubble lengths match measurements fairly well for the thicker backing targets but not for the thinner backings.

  13. Compensatory changes in CYP expression in three different toxicology mouse models: CAR-null, Cyp3a-null, and Cyp2b9/10/13-null mice

    EPA Science Inventory

    Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, metabolism, distribution, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we ...

  14. Simulation of neutron production using MCNPX+MCUNED.

    PubMed

    Erhard, M; Sauvan, P; Nolte, R

    2014-10-01

    In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 makes it possible to model the production of neutrons by light ions down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB. In this experiment, 14.7-MeV neutrons will be produced by the reaction T(d,n)⁴He. The neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. The SHiP experiment at CERN SPS

    NASA Astrophysics Data System (ADS)

    Di Crescenzo, A.; SHiP Collaboration

    2016-01-01

    SHiP is a new general-purpose fixed-target facility whose Technical Proposal has recently been submitted to the CERN SPS Committee. In its initial phase, the 400 GeV proton beam extracted from the SPS will be dumped on a heavy target with the aim of integrating 2×10²⁰ protons on target in 5 years. A dedicated detector located downstream of the target, based on a long vacuum tank followed by a spectrometer and particle identification detectors, will allow probing a variety of models with light long-lived exotic particles and masses below a few GeV/c². The beam dump is also an ideal source of tau neutrinos, the least known particle in the Standard Model. Another dedicated detector, based on the Emulsion Cloud Chamber technology already used in the OPERA experiment, will make it possible to measure the tau neutrino deep inelastic scattering cross section for the first time. Tau neutrinos will be distinguished from tau anti-neutrinos, thus providing the first observation of the tau anti-neutrino.

  16. Martian rampart crater ejecta - Experiments and analysis of melt-water interaction

    NASA Technical Reports Server (NTRS)

    Wohletz, K. H.; Sheridan, M. F.

    1983-01-01

    The possible effects of explosive water vaporization on ejecta emplacement after impact into a wet target are described. A general model is formulated from analysis of Viking imagery of Mars and experimental vapor explosions as well as consideration of fluidized particulate transport and lobate volcanic deposits. The discussed model contends that as target water content increases, the effects of vapor expansion due to impact increasingly modify the ballistic flow field during crater excavation. This modification results in transport by gravity-driven surface flowage, and is similar to that of atmospheric drag effects on ejecta modelled by Schultz and Gault (1979).

  17. The locating ways of laying pipe manipulator

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Li, Bin; Lei, DongLiang

    2010-01-01

    The pipe-laying manipulator is a new piece of equipment for laying concrete pipe. This kind of manipulator makes the work of laying pipes mechanized and automated. We report here a new pipe-laying manipulator. The manipulator has 5 degrees of freedom and is driven by a hydraulic system. In this paper, one critical question is studied: how the manipulator is located when laying concrete pipe. During the process of laying concrete pipe, locating the manipulator is accomplished by its locating system, which consists of a photoelectric target, a laser producer, and a computer. Depending on construction conditions, one, two, or three photoelectric targets can be used: one target when the pipe interfaces are already joined but the other pipe segment deviates from the pipeline; two targets when the angle through which the manipulator rotates around the held pipe's axis is 0°; and three targets at any site. A theoretical analysis is carried out for each locating method, and mathematical models of the manipulator's motion from its original position to the goal position are derived for each one. Locating experiments were also performed. The experimental results show that the working principles and mathematical models of the different locating methods meet the requirements well; these mathematical models supply the basic control theory needed for the manipulator to lay and joint concrete pipe automatically.

  18. A cognitive approach to game usability and design: mental model development in novice real-time strategy gamers.

    PubMed

    Graham, John; Zheng, Liya; Gonzalez, Cleotilde

    2006-06-01

    We developed a technique to observe and characterize a novice real-time-strategy (RTS) player's mental model as it shifts with experience. We then tested this technique using an off-the-shelf RTS game, EA Games Generals. Norman defined mental models as, "an internal representation of a target system that provides predictive and explanatory power to the operator." In the case of RTS games, the operator is the player and the target system is expressed by the relationships within the game. We studied five novice participants in laboratory-controlled conditions playing a RTS game. They played Command and Conquer Generals for 2 h per day over the course of 5 days. A mental model analysis was generated using player dissimilarity-ratings of the game's artificial intelligence (AI) agents analyzed using multidimensional scaling (MDS) statistical methods. We hypothesized that novices would begin with an impoverished model based on the visible physical characteristics of the game system. As they gained experience and insight, their mental models would shift and accommodate the functional characteristics of the AI agents. We found that all five of the novice participants began with the predicted physical-based mental model. However, while their models did qualitatively shift with experience, they did not necessarily change to the predicted functional-based model. This research presents an opportunity for the design of games that are guided by shifts in a player's mental model as opposed to the typical progression through successive performance levels.

  19. Polar-direct-drive experiments on the National Ignition Facility

    DOE PAGES

    Hohenberger, M.; Radha, P. B.; Myatt, J. F.; ...

    2015-05-11

    To support direct-drive inertial confinement fusion experiments at the National Ignition Facility (NIF) [G. H. Miller, E. I. Moses, and C. R. Wuest, Opt. Eng. 43, 2841 (2004)] in its indirect-drive beam configuration, the polar-direct-drive (PDD) concept [S. Skupsky et al., Phys. Plasmas 11, 2763 (2004)] has been proposed. Ignition in PDD geometry requires direct-drive-specific beam smoothing, phase plates, and repointing the NIF beams toward the equator to ensure symmetric target irradiation. First experiments to study the energetics and preheat in PDD implosions at the NIF have been performed. These experiments utilize the NIF in its current configuration, including beam geometry, phase plates, and beam smoothing. Room-temperature, 2.2-mm-diam plastic shells filled with D₂ gas were imploded with total drive energies ranging from ~500 to 750 kJ with peak powers of 120 to 180 TW and peak on-target irradiances at the initial target radius from 8 × 10¹⁴ to 1.2 × 10¹⁵ W/cm². Results from these initial experiments are presented, including measurements of shell trajectory, implosion symmetry, and the level of hot-electron preheat in plastic and Si ablators. Experiments are simulated with the 2-D hydrodynamics code DRACO including a full 3-D ray-trace to model oblique beams, and models for nonlocal electron transport and cross-beam energy transport (CBET). These simulations indicate that CBET affects the shell symmetry and leads to a loss of energy imparted onto the shell, consistent with the experimental data.

  20. Resolution Enhanced Magnetic Sensing System for Wide Coverage Real Time UXO Detection

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Bregman, Yuri; Salomonski, Nizan; Zafrir, Hovav

    2012-09-01

    In this paper we present a new high-resolution automatic detection algorithm based on a wavelet transform and validate it in marine experiments. The proposed approach allows automatic detection at very low signal-to-noise ratios. The computational load is reduced, the magnetic trend is suppressed, and the probability of detection / false alarm rate can easily be controlled. Moreover, the algorithm makes it possible to distinguish between closely spaced targets. In the algorithm we use the physical dependence of the magnetic field of a magnetic dipole to define a wavelet mother function that can then detect, at improved resolution, magnetic targets modeled as dipoles and embedded in noisy surroundings. The proposed algorithm was first applied to synthesized targets and then validated in field experiments involving a marine surface-floating system for wide-coverage real-time unexploded ordnance (UXO) detection and mapping. The detection probability achieved in the marine experiment was above 90%. The horizontal radial error for most of the detected targets was only 16 m, and two baseline targets immersed about 20 m from one another could easily be distinguished.
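
    The sketch below conveys the core idea of using the along-track signature of a magnetic dipole as the detection template ("mother function") and correlating it against a noisy survey line. The dipole profile, burial depth, moment, and noise level are illustrative stand-ins, not the paper's wavelet implementation.

```python
import numpy as np

# Illustrative sketch: correlate a survey line against the along-track
# signature of a vertical magnetic dipole used as a detection template.
# The anomaly shape, depth, moment, and noise level are placeholders and
# do not reproduce the wavelet detection algorithm of the paper.

def dipole_anomaly(x, x0=0.0, depth=20.0, moment=1.0):
    # simplified total-field anomaly of a vertical dipole buried at `depth`
    r2 = (x - x0) ** 2 + depth**2
    return moment * (2.0 * depth**2 - (x - x0) ** 2) / r2**2.5

x = np.linspace(-200.0, 200.0, 2001)                 # along-track positions (m)
rng = np.random.default_rng(2)
signal = dipole_anomaly(x, x0=35.0, moment=5e6)      # target buried near x = 35 m
measured = signal + rng.normal(0.0, 0.5 * signal.max(), x.size)

template = dipole_anomaly(np.linspace(-100.0, 100.0, 1001))
template -= template.mean()
data = measured - measured.mean()

half = len(template) // 2
corr = np.array([np.dot(data[i - half:i + half + 1], template)
                 for i in range(half, len(x) - half)])
x_detect = x[half:len(x) - half][np.argmax(corr)]
print(f"detected anomaly near x = {x_detect:.1f} m (true position 35.0 m)")
```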

  1. Use of a vision model to quantify the significance of factors affecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.

  2. Geant4 models for simulation of hadron/ion nuclear interactions at moderate and low energies.

    NASA Astrophysics Data System (ADS)

    Ivantchenko, Anton; Ivanchenko, Vladimir; Quesada, Jose-Manuel; Wright, Dennis

    The Geant4 toolkit is intended for Monte Carlo simulation of particle transport in media. It was initially designed for High Energy Physics purposes such as experiments at the Large Hadron Collider (LHC) at CERN. The toolkit offers a set of models allowing effective simulation of cosmic ray interactions with different materials. For moderate- and low-energy hadron/ion interactions with nuclei there are a number of competing models: the Binary and Bertini intra-nuclear cascade models, the quantum molecular dynamics model (QMD), the INCL/ABLA cascade model, and the Chiral Invariant Phase Space Decay model (CHIPS). We report the status of these models for the recent version of Geant4 (release 9.3, December 2009). The Bertini cascade internal cross sections were upgraded. The native Geant4 precompound and deexcitation models were used in the Binary cascade and QMD. They were significantly improved, including emission of light fragments, the Fermi break-up model, the General Evaporation Model (GEM), the multi-fragmentation model, and the fission model. Comparisons between model predictions and thin-target experimental data for neutron, proton, light-ion, and isotope production are presented and discussed. The focus of these validations is on target materials important for space missions.

  3. Beyond dichotomies-(m)others' structuring and the development of toddlers' prosocial behavior across cultures.

    PubMed

    Kärtner, Joscha

    2018-04-01

    Basic elements of prosociality-(pro)social cognition, motivation, and prosocial behavior-emerge during the first and second year of life. These elements are rooted in biological predispositions and the developmental system is complemented by caregivers' structuring. By structuring, (m)others integrate toddlers' unrefined (pro)social sentiments and behavioral inclinations into coherent patterns and align toddlers' experience and behavior with the population's cultural model. These cultural models specify target states for appropriate affective, motivational and behavioral responses regarding toddlers' prosociality and these target states, in turn, inform (m)others' appraisal and guide their structuring. The experiences that toddlers make in these social interactions have important implications for how the basic elements of prosociality are refined and further develop. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Numerical study of impact erosion of multiple solid particle

    NASA Astrophysics Data System (ADS)

    Zheng, Chao; Liu, Yonghong; Chen, Cheng; Qin, Jie; Ji, Renjie; Cai, Baoping

    2017-11-01

    Material erosion caused by continuous particle impingement during hydraulic fracturing results in significant economic loss and increased production risks. The erosion process is complex and has not been clearly explained through physical experiments. To address this problem, a multiple-particle model in a 3D configuration was proposed to investigate the dynamic erosion process. This approach can significantly reduce experimental costs. The numerical model considered material damping and the elastic-plastic behavior of the target material. The effects of impact parameters on erosion characteristics, such as plastic deformation, contact time, and energy loss rate, were investigated. Based on these comprehensive studies, the dynamic erosion mechanism and the geometry evolution of the eroded crater were obtained. These findings provide a detailed description of the erosion process of the target material and insights into the material erosion caused by multiple-particle impingement.

  5. Target acquisition modeling over the exact optical path: extending the EOSTAR TDA with the TOD sensor performance model

    NASA Astrophysics Data System (ADS)

    Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.

    2017-10-01

    The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment, with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) Target Acquisition model calculates the sensor and signal-processing effects on a set of input triangle test pattern images, judges their orientation using humans or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model and EOSTAR essentially makes it possible to model Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target; the combined effects of the environment (atmosphere, background, etc.), sensor, and signal processing on the image are then calculated using EOSTAR, and finally the results are judged by humans. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges for the real target. Experiments show that combining the TOD model and the EOSTAR model is indeed possible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be done well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera as modeled in the ECOMOS project.

  6. Aerogel Algorithm for Shrapnel Penetration Experiments

    NASA Astrophysics Data System (ADS)

    Tokheim, R. E.; Erlich, D. C.; Curran, D. R.; Tobin, M.; Eder, D.

    2004-07-01

    To aid in assessing shrapnel produced by laser-irradiated targets, we have performed shrapnel collection "BB gun" experiments in aerogel and have developed a simple analytical model for deceleration of the shrapnel particles in the aerogel. The model is similar in approach to that of Anderson and Ahrens (J. Geophys. Res., 99 El, 2063-2071, Jan. 1994) and accounts for drag, aerogel compaction heating, and the velocity threshold for shrapnel ablation due to conductive heating. Model predictions are correlated with the BB gun results at impact velocities up to a few hundred m/s and with NASA data for impact velocities up to 6 km/s. The model shows promising agreement with the data and will be used to plan and interpret future experiments.
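
    A minimal sketch of the kind of deceleration calculation the abstract describes, keeping only hydrodynamic drag and a constant crush-strength term and omitting compaction heating and the ablation threshold. The particle size, aerogel density, and strength below are illustrative assumptions, not values from the model or the BB-gun data.

```python
import numpy as np

# Illustrative deceleration of a small metal particle in aerogel:
#   m dv/dt = -(0.5 * rho_aerogel * Cd * v^2 + Y) * A
# where Y is a constant crush strength.  All numbers are placeholders.
rho_p = 8900.0       # particle density, ~copper (kg/m^3)
radius = 1.0e-4      # particle radius (m)
rho_a = 100.0        # aerogel density (kg/m^3)
Cd = 1.0             # drag coefficient
Y = 1.0e6            # aerogel crush strength (Pa)

mass = rho_p * (4.0 / 3.0) * np.pi * radius**3
area = np.pi * radius**2

v, x, dt = 300.0, 0.0, 1.0e-6        # impact velocity (m/s), depth (m), step (s)
while v > 0.0:
    a = -(0.5 * rho_a * Cd * v**2 + Y) * area / mass
    v += a * dt
    x += max(v, 0.0) * dt
print(f"predicted penetration depth ~ {x * 100:.1f} cm")
```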

  7. Asteroid collisions: Target size effects and resultant velocity distributions

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.

    1993-01-01

    To study the dynamic fragmentation of rock to simulate asteroid collisions, we use a 2-D, continuum damage numerical hydrocode which models two-body impacts. This hydrocode monitors stress wave propagation and interaction within the target body, and includes a physical model for the formation and growth of cracks in rock. With this algorithm we have successfully reproduced fragment size distributions and mean ejecta speeds from laboratory impact experiments using basalt, and weak and strong mortar as target materials. Using the hydrocode, we have determined that the energy needed to fracture a body has a much stronger dependence on target size than predicted from most scaling theories. In addition, velocity distributions obtained indicate that mean ejecta speeds resulting from large-body collisions do not exceed escape velocities.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.

    It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra, and may only differ by the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. Lastly, we find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.
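
    For orientation, the time dependence referred to here is usually written as a small annual modulation of the recoil rate on top of a constant term, as in the sketch below. The rate values and the 30-day binning are illustrative and are not the simulated Generation 2 event counts used in the paper.

```python
import numpy as np

# Illustrative annually modulated recoil rate, R(t) = R0 + Rm*cos(2*pi*(t - t0)/T).
# R0, Rm, and the exposure are placeholder numbers, not the paper's simulations.
R0 = 10.0        # mean rate (events / ton / year)
Rm = 0.5         # modulation amplitude (events / ton / year)
t0 = 152.5       # phase: rate peaks in early June (day of year)
T = 365.25       # period (days)

days = np.arange(0.0, 720.0, 1.0)
rate = R0 + Rm * np.cos(2.0 * np.pi * (days - t0) / T)

# expected counts in 30-day bins for a 1-ton detector
counts = rate.reshape(-1, 30).mean(axis=1) * (30.0 / T)
print(np.round(counts, 3))
```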

  9. Latent semantic analysis cosines as a cognitive similarity measure: Evidence from priming studies.

    PubMed

    Günther, Fritz; Dudschig, Carolin; Kaup, Barbara

    2016-01-01

    In distributional semantics models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
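
    The similarity measure manipulated here is simply the cosine between word vectors; a minimal sketch follows. The vectors are random stand-ins rather than vectors from a trained LSA space, so only the computation, not the values, is meaningful.

```python
import numpy as np

# Cosine similarity between word vectors, the quantity manipulated as the
# independent variable in the priming experiments.  The vectors here are
# random stand-ins, not vectors from a trained LSA space.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(3)
dim = 300                                        # typical LSA dimensionality
prime = rng.normal(size=dim)                     # vector for a prime word
related = prime + 0.5 * rng.normal(size=dim)     # correlated "related" target
unrelated = rng.normal(size=dim)                 # independent "unrelated" target

print(f"cos(prime, related)   = {cosine(prime, related):.2f}")
print(f"cos(prime, unrelated) = {cosine(prime, unrelated):.2f}")
```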

  10. Design of an Experiment to Observe Laser-Plasma Interactions on NIKE

    NASA Astrophysics Data System (ADS)

    Phillips, L.; Weaver, J.; Manheimer, W.; Zalesak, S.; Schmitt, A.; Fyfe, D.; Afeyan, B.; Charbonneau-Lefort, M.

    2007-11-01

    Recently proposed designs (Obenschain et al., Phys. Plasmas 13, 056320 (2006)) for direct-drive ICF targets for energy applications involve high implosion velocities combined with higher laser irradiances. The use of high irradiances increases the likelihood of deleterious laser-plasma instabilities (LPI) that may lead, for example, to the generation of fast electrons. The proposed use of a 248 nm KrF laser to drive these targets is expected to minimize LPI; this is being studied in experiments at NRL's NIKE facility. We used a modification of the FAST code that models laser pulses with arbitrary spatial and temporal profiles to assist in designing these experiments. The goal is to design targets and pulse shapes that create plasma conditions producing sufficient growth of LPI to be observable on NIKE. For example, heating a cryogenic DT target with a brief pulse and allowing it to expand freely before it interacts with a second, high-intensity pulse produces long scale lengths at low electron temperatures and leads to a predicted growth of 20 e-foldings in the two-plasmon amplitude.

  11. Implied dynamics biases the visual perception of velocity.

    PubMed

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand on the anecdotal report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of the shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back and forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiments 2 and 3) or rectilinear (Experiments 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold their gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiments 1 and 6. The bias disappeared when the stimuli were incompatible with both the pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  12. The Visual System's Intrinsic Bias and Knowledge of Size Mediate Perceived Size and Location in the Dark

    ERIC Educational Resources Information Center

    Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng

    2013-01-01

    Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…

  13. Immunological Targeting of Tumor Initiating Prostate Cancer Cells

    DTIC Science & Technology

    2014-10-01

    clinically using well-accepted immuno-competent animal models. 2) Keywords: Prostate Cancer, Lymphocyte, Vaccine, Antibody 3) Overall Project Summary...castrate animals. Task 1: Identify and verify antigenic targets from Castrate Resistant Luminal Epithelial Cells (CRLEC) (months 1-16... animals per group will be processed to derive sufficient RNA for microarray analysis; the experiment will be repeated x 3. Microarray analysis will

  14. The Nucleon Axial Form Factor and Staggered Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Aaron Scott

    The study of neutrino oscillation physics is a major research goal of the worldwide particle physics program over the upcoming decade. Many new experiments are being built to study the properties of neutrinos and to answer questions about the phenomenon of neutrino oscillation. These experiments need precise theoretical cross sections in order to access fundamental neutrino properties. Neutrino oscillation experiments often use large atomic nuclei as scattering targets, which are challenging for theorists to model. Nuclear models rely on free-nucleon amplitudes as inputs. These amplitudes are constrained by scattering experiments with large nuclear targets that rely on the very same nuclear models. The work in this dissertation is the first step of a new initiative to isolate and compute elementary amplitudes with theoretical calculations to support the neutrino oscillation experimental program. Here, the effort focuses on computing the axial form factor, which is the largest contributor to systematic error in the primary signal measurement process for neutrino oscillation studies, quasielastic scattering. Two approaches are taken. First, neutrino scattering data on a deuterium target are reanalyzed with a model-independent parametrization of the axial form factor to quantify the present uncertainty in the free-nucleon amplitudes. The uncertainties on the free-nucleon cross section are found to be underestimated by about an order of magnitude compared to the ubiquitous dipole model parametrization. The second approach uses lattice QCD to perform a first-principles computation of the nucleon axial form factor. The Highly Improved Staggered Quark (HISQ) action is employed for both valence and sea quarks. The results presented in this dissertation are computed at physical pion mass for one lattice spacing. This work presents a computation of the axial form factor at zero momentum transfer, and forms the basis for a computation of the axial form factor momentum dependence with an extrapolation to the continuum limit and a full systematic error budget.
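
    For reference, the dipole parametrization referred to above is the standard one-parameter form (textbook notation, not reproduced from the dissertation), with axial coupling g_A and a single axial-mass parameter M_A:

        F_A(Q^2) = \frac{g_A}{\left(1 + Q^2/M_A^2\right)^{2}}

    The model-independent reanalysis replaces this one-parameter shape with a more flexible expansion, which is why its quoted uncertainties come out roughly an order of magnitude larger than the dipole-based ones.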

  15. Development And Characterization Of A Liner-On-Target Injector For Staged Z-Pinch Experiments

    NASA Astrophysics Data System (ADS)

    Valenzuela, J. C.; Conti, F.; Krasheninnikov, I.; Narkis, J.; Beg, F.; Wessel, F. J.; Rahman, H. U.

    2016-10-01

    We present the design and optimization of a liner-on-target injector for Staged Z-pinch experiments. The injector is composed of an annular high atomic number (e.g. Ar, Kr) gas-puff and an on-axis plasma gun that delivers the ionized deuterium target. The liner nozzle injector has been carefully studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated 1 cm radius gas profile that satisfies the theoretical requirement for best performance on the 1 MA Zebra current driver. The CFD simulations produce density profiles as a function of the nozzle shape and gas. These profiles are initialized in the MHD MACH2 code to find the optimal liner density for a stable, uniform implosion. We use a simple Snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector. We have performed line-integrated density measurements using a CW He-Ne laser to characterize the liner gas and the plasma gun density as a function of time. The measurements are compared with models and calculations and benchmarked accordingly. Advanced Research Projects Agency - Energy, DE-AR0000569.
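
    The snowplow description mentioned above can be summarized by a single momentum balance for the current sheath in a coaxial gun of inner radius a and outer radius b (a textbook form given only as an illustration of the approach, not the authors' exact equations): the sheath sweeps up the gas it crosses while being pushed by the magnetic pressure behind it,

        \frac{d}{dt}\left[m(z)\,\frac{dz}{dt}\right] = \frac{\mu_0 I^2}{4\pi}\,\ln\frac{b}{a},
        \qquad m(z) = \pi\,(b^2 - a^2)\int_0^z \rho_0(z')\,dz',

    where I is the gun current and \rho_0(z) is the initial fill-gas density profile along the gun axis.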

  16. Computational investigation of reshock strength in hydrodynamic instability growth at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Bender, Jason; Raman, Kumar; Huntington, Channing; Nagel, Sabrina; Morgan, Brandon; Prisbrey, Shon; MacLaren, Stephan

    2017-10-01

    Experiments at the National Ignition Facility (NIF) are studying Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities in multiply-shocked plasmas. Targets feature two different-density fluids with a multimode initial perturbation at the interface, which is struck by two X-ray-driven shock waves. Here we discuss computational hydrodynamics simulations investigating the effect of second-shock ("reshock") strength on instability growth, and how these simulations are informing target design for the ongoing experimental campaign. A Reynolds-averaged Navier-Stokes (RANS) model was used to predict motion of the spike and bubble fronts and the mixing-layer width. In addition to reshock strength, the reshock ablator thickness and the total length of the target were varied; all three parameters were found to be important for target design, particularly for ameliorating undesirable reflected shocks. The RANS data are compared to theoretical models that predict multimode instability growth proportional to the shock-induced change in interface velocity, and to currently available data from the NIF experiments. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. LLNL-ABS-734611.
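
    The proportionality mentioned in that comparison is the one expressed by the classic impulsive (Richtmyer) estimate for a single mode of wavenumber k and post-shock amplitude a_0 at an interface with post-shock Atwood number A^+ (quoted here as background only; it is not the multimode RANS closure actually used):

        \dot{a} \approx k\, a_0\, A^{+}\, \Delta u,

    where \Delta u is the shock-induced jump in interface velocity.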

  17. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
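
    The optimization step can be illustrated with a much simpler stand-in for the authors' search-time model: from a candidate palette, pick the icon color that maximizes its minimum CIELAB distance to the colors already on the map, treating color distance as a crude proxy for target-distractor distinctiveness. The palette and map colors below are hypothetical placeholders, not values from the study.

        import numpy as np

        # Hypothetical candidate icon colors and existing map colors, in CIELAB (L*, a*, b*).
        candidates = {
            "red":    np.array([53.0,  80.0,   67.0]),
            "green":  np.array([88.0, -86.0,   83.0]),
            "blue":   np.array([32.0,  79.0, -108.0]),
            "yellow": np.array([97.0, -22.0,   94.0]),
        }
        distractors = np.array([
            [60.0,  70.0, 50.0],   # roads
            [90.0, -60.0, 70.0],   # vegetation
            [95.0, -10.0, 80.0],   # background tint
        ])

        def min_distance(color, others):
            """Smallest Euclidean CIELAB distance from `color` to any distractor color."""
            return np.linalg.norm(others - color, axis=1).min()

        best = max(candidates, key=lambda name: min_distance(candidates[name], distractors))
        print("most distinctive icon color:", best)

    A full search-time model would also weight target eccentricity, as the abstract notes; the sketch only captures the color-distinctiveness term.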

  18. Direction information in multiple object tracking is limited by a graded resource.

    PubMed

    Horowitz, Todd S; Cohen, Michael A

    2010-10-01

    Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
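
    Stated as formulas, the signatures being contrasted are (with p(1) the precision for a single tracked target and n the tracking load):

        p(n) = \frac{p(1)}{\sqrt{n}} \quad \text{(divisible resource)}, \qquad
        p(n) = p(1) \quad \text{(fixed slots, up to the slot limit)},

    and the reported data follow the square-root form up to six targets.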

  19. Distinctiveness and encoding effects in online sentence comprehension

    PubMed Central

    Hofmeister, Philip; Vasishth, Shravan

    2014-01-01

    In explicit memory recall and recognition tasks, elaboration and contextual isolation both facilitate memory performance. Here, we investigate these effects in the context of sentence processing: targets for retrieval during online sentence processing of English object relative clause constructions differ in the amount of elaboration associated with the target noun phrase, or the homogeneity of superficial features (text color). Experiment 1 shows that greater elaboration for targets during the encoding phase reduces reading times at retrieval sites, but elaboration of non-targets has considerably weaker effects. Experiment 2 illustrates that processing isolated superficial features of target noun phrases—here, a green word in a sentence with words colored white—does not lead to enhanced memory performance, despite triggering longer encoding times. These results are interpreted in the light of the memory models of Nairne, 1990, 2001, 2006, which state that encoding remnants contribute to the set of retrieval cues that provide the basis for similarity-based interference effects. PMID:25566105

  20. Human activity discrimination for maritime application

    NASA Astrophysics Data System (ADS)

    Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith

    2008-04-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects sensor performance estimates from the target acquisition model (NVThermIP). This paper looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to aid in discerning the type of human activity more quickly and accurately. This, in turn, affects estimated sensor performance, and these effects are measured in order to calibrate current target acquisition models for this task. The study employed an eleven-alternative forced-choice (11AFC) human perception experiment to measure the task difficulty of discriminating unique human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this experiment is given, including the data collection, image processing, perception testing, and how contrast was defined for video. These results are applicable to evaluating sensor field performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.

  1. Optical simulation of flying targets using physically based renderer

    NASA Astrophysics Data System (ADS)

    Cheng, Ye; Zheng, Quan; Peng, Junkai; Lv, Pin; Zheng, Changwen

    2018-02-01

    The simulation of aerial flying targets is widely needed in many fields. This paper proposes a physically based method for the optical simulation of flying targets. In the first step, three-dimensional target models are built and the motion speed and direction are defined. Next, the material of the target's outward appearance is simulated. Then the illumination conditions are defined. Once all definitions are given, the settings are encoded in a description file. Finally, simulated results are generated by Monte Carlo ray tracing in a physically based renderer. Experiments show that this method is able to simulate materials, lighting and motion blur for flying targets, and it can generate convincing and high-quality simulation results.

  2. Parallel Energy Transport in Detached DIII-D Divertor Plasmas

    NASA Astrophysics Data System (ADS)

    Leonard, A. W.; Lore, J. D.; Canik, J. M.; McLean, A. G.; Makowski, M. A.

    2017-10-01

    A comparison of experiment and modeling of detached divertor plasmas is examined in the context of parallel energy transport. The power carried by electron thermal conduction versus plasma convection is inferred experimentally from power balance measurements of radiated power and target plate heat flux, combined with Thomson scattering measurements of the Te profile along the divertor leg. Experimental profiles of Te exhibit relatively low gradients, with Te < 15 eV from the X-point to the target, implying transport dominated by convection. In contrast, fluid modeling with SOLPS produces sharp Te gradients for Te > 3 eV, characteristic of transport dominated by electron conduction through the bulk of the divertor. This discrepancy, with transport dominated by convection in experiment but by conduction in the modeling, has significant implications for the radiative capacity of divertor plasmas and may explain at least part of the difficulty fluid modeling has in obtaining the experimentally observed radiative losses. Comparisons are also made for helium plasmas, where the match between experiment and modeling is much better. Work supported by the US DOE under DE-FC02-04ER54698.
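
    The distinction can be made concrete with the standard parallel heat-flux expressions used in edge fluid models (generic forms, not taken from the abstract):

        q_{\parallel,\mathrm{cond}} = -\kappa_0\, T_e^{5/2}\, \nabla_\parallel T_e,
        \qquad q_{\parallel,\mathrm{conv}} \approx \tfrac{5}{2}\, n_e\, T_e\, v_\parallel .

    Because the conductivity scales as T_e^{5/2}, conduction-dominated transport requires steep T_e gradients once T_e is low, whereas the flat measured profiles point to convection carrying the power.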

  3. View-Based Organization and Interplay of Spatial Working and Long-Term Memories

    PubMed Central

    Röhrich, Wolfgang G.; Hardiess, Gregor; Mallot, Hanspeter A.

    2014-01-01

    Space perception provides egocentric, oriented views of the environment from which working and long-term memories are constructed. “Allocentric” (i.e. position-independent) long-term memories may be organized as graphs of recognized places or views but the interaction of such cognitive graphs with egocentric working memories is unclear. Here we present a simple coherent model of view-based working and long-term memories, together with supporting evidence from behavioral experiments. The model predicts that within a given place, memories for some views may be more salient than others, that imagery of a target square should depend on the location where the recall takes place, and that recall favors views of the target square that would be obtained when approaching it from the current recall location. In two separate experiments in an outdoor urban environment, pedestrians were approached at various interview locations and asked to draw sketch maps of one of two well-known squares. Orientations of the sketch map productions depended significantly on distance and direction of the interview location from the target square, i.e. different views were recalled at different locations. Further analysis showed that location-dependent recall is related to the respective approach direction when imagining a walk from the interview location to the target square. The results are consistent with a view-based model of spatial long-term and working memories and their interplay. PMID:25409437

  4. Efficient discovery of responses of proteins to compounds using active learning

    PubMed Central

    2014-01-01

    Background Drug discovery and development has been aided by high throughput screening methods that detect compound effects on a single target. However, when using focused initial screening, undesirable secondary effects are often detected late in the development process after significant investment has been made. An alternative approach would be to screen against undesired effects early in the process, but the number of possible secondary targets makes this prohibitively expensive. Results This paper describes methods for making this global approach practical by constructing predictive models for many target responses to many compounds and using them to guide experimentation. We demonstrate for the first time that by jointly modeling targets and compounds using descriptive features and using active machine learning methods, accurate models can be built by doing only a small fraction of possible experiments. The methods were evaluated by computational experiments using a dataset of 177 assays and 20,000 compounds constructed from the PubChem database. Conclusions An average of nearly 60% of all hits in the dataset were found after exploring only 3% of the experimental space which suggests that active learning can be used to enable more complete characterization of compound effects than otherwise affordable. The methods described are also likely to find widespread application outside drug discovery, such as for characterizing the effects of a large number of compounds or inhibitory RNAs on a large number of cell or tissue phenotypes. PMID:24884564
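
    A schematic of the pool-based active-learning strategy described above: jointly featurize (target, compound) pairs, fit a model to the experiments run so far, and choose the next batch by predictive uncertainty. Everything below (the random-forest learner, the synthetic features and ground truth, the batch size) is an illustrative assumption, not the authors' implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        n_pairs = 2000
        # Placeholder joint features for (target, compound) pairs.
        X = rng.normal(size=(n_pairs, 8))
        true_hit = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0.5).astype(int)  # hidden ground truth

        labeled = list(rng.choice(n_pairs, size=50, replace=False))       # initial experiments
        pool = [i for i in range(n_pairs) if i not in set(labeled)]
        batch_size = 50

        for round_ in range(5):
            model = RandomForestClassifier(n_estimators=100, random_state=0)
            model.fit(X[labeled], true_hit[labeled])
            # Uncertainty sampling: pick pool pairs whose predicted hit probability is closest to 0.5.
            proba = model.predict_proba(X[pool])[:, 1]
            order = np.argsort(np.abs(proba - 0.5))[:batch_size]
            chosen = [pool[i] for i in order]
            labeled.extend(chosen)
            pool = [i for i in pool if i not in set(chosen)]
            print(f"round {round_}: {len(labeled)} experiments run, {true_hit[labeled].sum()} hits found")

    Other selection rules (expected hit rate, diversity, cost weighting) slot into the same loop; the point is only that each round's experiments are chosen by the current model rather than fixed in advance.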

  5. Broadband Scattering from Sand and Sand/Mud Sediments with Extensive Environmental Characterization

    DTIC Science & Technology

    2017-01-30

    experiment, extensive environmental characterization was also performed to support data/model comparisons for both experimental efforts. The site...mechanisms, potentially addressing questions left unresolved from the previous sediment acoustics experiments, SAX99 and SAX04. This work was also to provide...environmental characterization to support the analysis of data collected during the Target and Reverberation Experiment in 2013 (TREX13) as well as

  6. High affinity ligands from in vitro selection: Complex targets

    PubMed Central

    Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry

    1998-01-01

    Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188

  7. New support vector machine-based method for microRNA target prediction.

    PubMed

    Li, L; Gao, Q; Mao, X; Cao, Y

    2014-06-09

    MicroRNA (miRNA) plays important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification. However, these methods still have limitations with respect to their sensitivity and accuracy. Thus, we developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model supplies information from two binding sites (primary and secondary) to a radial basis function kernel, which serves as the similarity measure for the SVM features. The information is categorized into structural, thermodynamic, and sequence-conservation features. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provided an efficient tool for human miRNA target-gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA target prediction. Its performance can be further improved by providing more training examples.
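
    A minimal sketch of the kind of classifier described, an RBF-kernel SVM over structural, thermodynamic, and conservation features of candidate miRNA-target pairs. The feature meanings and synthetic data below are placeholders for illustration; the paper's actual binding-site feature construction is not reproduced here.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n = 500
        # Placeholder features: e.g. seed-match score, duplex free energy, conservation score.
        X = rng.normal(size=(n, 3))
        # Synthetic labels: "true targets" tend to have stronger seed match and lower free energy.
        y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))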

  8. Non-Targeted Effects Models Predict Significantly Higher Mars Mission Cancer Risk than Targeted Effects Models

    DOE PAGES

    Cucinotta, Francis A.; Cacao, Eliedonna

    2017-05-12

    Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data set for dose response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data from animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggests that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth's geomagnetic sphere.

  10. Site selection and directional models of deserts used for ERBE validation targets

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.

    1986-01-01

    Broadband shortwave and longwave radiance measurements obtained from the Nimbus 7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara, Gibson, and Saudi Deserts. These deserts will serve as in-flight validation targets for the Earth Radiation Budget Experiment being flown on the Earth Radiation Budget Satellite and two National Oceanic and Atmospheric Administration polar satellites. The directional reflectance model derived for the deserts was a function of the sum and product of the cosines of the solar and viewing zenith angles, and thus reciprocity existed between these zenith angles. The emittance model was related by a power law of the cosine of the viewing zenith angle.
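
    One functional form consistent with that description (the coefficients a, b, c and exponent n below are illustrative placeholders, not the fitted values) is

        \rho(\mu_0, \mu) = a + b\,(\mu_0 + \mu) + c\,\mu_0\,\mu,
        \qquad \varepsilon(\mu) = \varepsilon_0\, \mu^{\,n},

    where \mu_0 and \mu are the cosines of the solar and viewing zenith angles; the symmetry of \rho in \mu_0 and \mu is what produces the reciprocity noted above.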

  11. Study of neutron generation in the compact tokamak TUMAN-3M in support of a tokamak-based fusion neutron source

    NASA Astrophysics Data System (ADS)

    Kornev, V. A.; Askinazi, L. G.; Belokurov, A. A.; Chernyshev, F. V.; Lebedev, S. V.; Melnik, A. D.; Shabelsky, A. A.; Tukachinsky, A. S.; Zhubr, N. A.

    2017-12-01

    The paper presents DD neutron flux measurements in neutral beam injection (NBI) experiments aimed at optimizing the target plasma and heating beam parameters to achieve maximum neutron flux in the TUMAN-3M compact tokamak. Two ion sources of different design were used, which allowed the influences of the beam's energy and power on the neutron rate to be separated. Using the database of experiments performed with the two ion sources, an empirical scaling was derived describing the neutron rate dependence on the target plasma and heating beam parameters. Numerical modeling of the neutron rate in the NBI experiments performed using the ASTRA transport code showed good agreement with the scaling.

  12. Experimental and Computational Study of Underexpanded Jet Impingement Heat Transfer

    NASA Technical Reports Server (NTRS)

    Rufer, Shann J.; Nowak, Robert J.; Daryabeigi, Kamran; Picetti, Donald

    2009-01-01

    An experiment was performed to assess CFD modeling of a hypersonic-vehicle breach, boundary-layer flow ingestion and internal surface impingement. Tests were conducted in the NASA Langley Research Center 31-Inch Mach 10 Tunnel. Four simulated breaches were tested and impingement heat flux data was obtained for each case using both phosphor thermography and thin film gages on targets placed inside the model. A separate target was used to measure the surface pressure distribution. The measured jet impingement width and peak location are in good agreement with CFD analysis.

  13. Mentor Teacher Training: A Hybrid Model to Promote Partnering in Candidate Development

    ERIC Educational Resources Information Center

    Childre, Amy L.; Van Rie, Ginny L.

    2015-01-01

    In order to promote high quality clinical experiences for teacher candidates, one of the recent changes to educator preparation accreditation standards specifically targeted clinical faculty qualifications. Qualified mentor teachers are critical clinical faculty because they serve as the model for training practices for teacher candidates, the…

  14. No arousal-biased competition in focused visuospatial attention.

    PubMed

    Ásgeirsson, Árni Gunnar; Nieuwenhuis, Sander

    2017-11-01

    Arousal sometimes enhances and sometimes impairs perception and memory. A recent theory attempts to reconcile these findings by proposing that arousal amplifies the competition between stimulus representations, strengthening already strong representations and weakening already weak representations. Here, we report a stringent test of this arousal-biased competition theory in the context of focused visuospatial attention. Participants were required to identify a briefly presented target in the context of multiple distractors, which varied in the degree to which they competed for representation with the target, as revealed by psychophysics. We manipulated arousal using emotionally arousing pictures (Experiment 1), alerting tones (Experiment 2) and white-noise stimulation (Experiment 3), and validated these manipulations with electroencephalography and pupillometry. In none of the experiments did we find evidence that arousal modulated the effect of distractor competition on the accuracy of target identification. Bayesian statistics revealed moderate to strong evidence against arousal-biased competition. Modeling of the psychophysical data based on Bundesen's (1990) theory of visual attention corroborated the conclusion that arousal does not bias competition in focused visuospatial attention. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Radioimmunotargeting of human tumour cells in immunocompetent animals.

    PubMed Central

    Fjeld, J. G.; Bruland, O. S.; Benestad, H. B.; Schjerven, L.; Stigbrand, T.; Nustad, K.

    1990-01-01

    A tumour model system is reported that for many purposes may be an alternative to xenografted nude mice. The model allows immunotargeting of human tumour cells in immunocompetent animals. The target cells are contained in i.p. diffusion chambers (DC) with micropore membrane walls that are permeable to molecules, including the cell specific monoclonal antibodies (MoAb), but impermeable to cells. Thus, the tumour cells are protected from the host immunocompetent cells. In the work here presented the model was tested in immunocompetent mice and pigs, with tumour cells and antibody preparations that had demonstrated specific targeting in the nude mouse xenograft model. Hence, the DC were filled with cells from the human cell lines Hep-2 (expressing placental alkaline phosphatase, PLALP), or OHS (a sarcoma cell line), and the MoAb preparations injected i.v. were a 125I-labelled Fab fragment of the PLALP specific antibody H7, or a 125I-labelled F(ab')2 fragment of the sarcoma specific antibody TP-1. Specific targeting of the human tumour cells was demonstrated in both mice and pigs. The target: blood ratios were comparable in the two species, reaching a maximum of about 15 after 24 h with the Fab preparation, and a ratio of 25 after 72 h with the F(ab')2. The target uptake relative to injected dose was lower in pigs than in mice, but the difference between the two species was smaller than expected, presumably due to a slower antibody clearance in the pigs than in the mice. An artificial cell targeting system like this has several advantages in the search for solutions to many of the fundamental problems experienced in immunotargeting. Firstly, parallel binding experiments can be carried out in vitro with the same target. Because in vitro results are only influenced by the diffusion into the DC and the immunological binding characteristics of the antibodies, targeting differences between antibody preparations due to these factors can then be distinguished from differences due to pharmacokinetical properties. Secondly, the animals can be implanted with any type and number of target cells, or with antigen negative control cells. Thirdly, and perhaps most important, the system opens a possibility for evaluation of the murine MoAb in xenogenic species, and this may predict the clinical targeting potential better than experiments on mice. PMID:2223574

  16. Perturbation biology nominates upstream-downstream drug combinations in RAF inhibitor resistant melanoma cells.

    PubMed

    Korkut, Anil; Wang, Weiqing; Demir, Emek; Aksoy, Bülent Arman; Jing, Xiaohong; Molinelli, Evan J; Babur, Özgün; Bemis, Debra L; Onur Sumer, Selcuk; Solit, David B; Pratilas, Christine A; Sander, Chris

    2015-08-18

    Resistance to targeted cancer therapies is an important clinical problem. The discovery of anti-resistance drug combinations is challenging as resistance can arise by diverse escape mechanisms. To address this challenge, we improved and applied the experimental-computational perturbation biology method. Using statistical inference, we build network models from high-throughput measurements of molecular and phenotypic responses to combinatorial targeted perturbations. The models are computationally executed to predict the effects of thousands of untested perturbations. In RAF-inhibitor resistant melanoma cells, we measured 143 proteomic/phenotypic entities under 89 perturbation conditions and predicted c-Myc as an effective therapeutic co-target with BRAF or MEK. Experiments using the BET bromodomain inhibitor JQ1 affecting the level of c-Myc protein and protein kinase inhibitors targeting the ERK pathway confirmed the prediction. In conclusion, we propose an anti-cancer strategy of co-targeting a specific upstream alteration and a general downstream point of vulnerability to prevent or overcome resistance to targeted drugs.

  17. Analysis of EEG Related Saccadic Eye Movement

    NASA Astrophysics Data System (ADS)

    Funase, Arao; Kuno, Yoshiaki; Okuma, Shigeru; Yagi, Tohru

    Our final goal is to establish a model of saccadic eye movement that connects the saccade and the electroencephalogram (EEG). As a first step toward this goal, we recorded and analyzed saccade-related EEG, attempting to detect EEG activity specific to eye movement. In these experiments, each subject was instructed to direct their eyes toward visual targets (LEDs) or toward the direction of sound sources (buzzers). In control conditions, the EEG was recorded with no eye movements. In the visual experiments, we found that the EEG potential over the occipital lobe changed sharply just before eye movement, and similar results were observed in the auditory experiments. In the visual and auditory experiments without eye movement, no such sharp change was observed. Moreover, when the subject moved his or her eyes toward a right-side target, the change in EEG potential was found over the right occipital lobe; conversely, when the subject moved his or her eyes toward a left-side target, the sharp change in EEG potential was found over the left occipital lobe.

  18. A network pharmacology study of Sendeng-4, a Mongolian medicine.

    PubMed

    Zi, Tian; Yu, Dong

    2015-02-01

    We collected data on the targets corresponding to the chemical constituents of Sendeng-4 from the literature and from DrugBank, SuperTarget, TTD (Therapeutic Targets Database) and other databases, and the relevant signaling pathways from the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, and established models of the chemical composition-target network and the chemical composition-target-disease network using Cytoscape software. The analysis indicated that the chemical constituents had at least nine different types of targets that acted together to exert effects on the diseases, suggesting a "multi-component, multi-target" feature of this traditional Mongolian medicine. We also employed a rat model of rheumatoid arthritis induced by Collagen Type II to validate the key targets of the chemical components of Sendeng-4; three of the key targets were validated through laboratory experiments, further confirming the anti-inflammatory effects of Sendeng-4. In all, this study predicted the active ingredients and targets of Sendeng-4 and explored its mechanism of action, which provides new strategies and methods for further research and development of Sendeng-4 and other traditional Mongolian medicines. Copyright © 2015 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.

  19. Substructural Regularization With Data-Sensitive Granularity for Sequence Transfer Learning.

    PubMed

    Sun, Shichang; Liu, Hongbo; Meng, Jiana; Chen, C L Philip; Yang, Yu

    2018-06-01

    Sequence transfer learning is of interest in both academia and industry with the emergence of numerous new text domains from Twitter and other social media tools. In this paper, we put forward a data-sensitive granularity for transfer learning, and then a novel substructural regularization transfer learning model (STLM) is proposed to preserve target-domain features at substructural granularity according to the size of the labeled data set. Our model is underpinned by hidden Markov models and regularization theory, where the substructural representation can be integrated as a penalty after measuring the dissimilarity of substructures between the target domain and STLM with relative entropy. STLM can achieve the competing goals of preserving the target-domain substructure and utilizing the observations from both the target and source domains simultaneously. The estimation of STLM is very efficient since an analytical solution can be derived as a necessary and sufficient condition. The relative usability of substructures to act as regularization parameters and the time complexity of STLM are also analyzed and discussed. Comprehensive experiments on part-of-speech tagging with both Brown and Twitter corpora fully justify that our model can make improvements across all the combinations of source and target domains.
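
    Schematically (a paraphrase of the description above rather than the paper's exact notation), the estimate solves

        \hat{\theta} = \arg\max_{\theta}\; \mathcal{L}(D_{\mathrm{target}}, D_{\mathrm{source}};\theta)
        \;-\; \lambda \sum_{s} D_{\mathrm{KL}}\!\big(\hat{P}_{\mathrm{target}}(s)\,\big\|\,P_{\theta}(s)\big),

    where the sum runs over substructures s chosen at the data-sensitive granularity and \lambda sets the strength of the relative-entropy penalty.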

  20. Use of external magnetic fields in hohlraum plasmas to improve laser-coupling

    DOE PAGES

    Montgomery, D. S.; Albright, B. J.; Barnak, D. H.; ...

    2015-01-13

    Efficient coupling of laser energy into hohlraum targets is important for indirect drive ignition. Laser-plasma instabilities can reduce coupling, reduce symmetry, and cause preheat. We consider the effects of an external magnetic field on laser-energy coupling in hohlraum targets. Experiments were performed at the Omega Laser Facility using low-Z gas-filled hohlraum targets which were placed in a magnetic coil with Bz ≤ 7.5 T. We found that an external field Bz = 7.5 T aligned along the hohlraum axis results in up to a 50% increase in plasma temperature as measured by Thomson scattering. The experiments were modeled using the 2-D magnetohydrodynamics package in HYDRA and were found to be in good agreement.

  1. Sociocultural experiences, body image, and indoor tanning among young adult women.

    PubMed

    Stapleton, Jerod L; Manne, Sharon L; Greene, Kathryn; Darabos, Katie; Carpenter, Amanda; Hudson, Shawna V; Coups, Elliot J

    2017-10-01

    The purpose of this survey study was to evaluate a model of body image influences on indoor tanning behavior. Participants were 823 young adult women recruited from a probability-based web panel in the United States. Consistent with our hypothesized model, tanning-related sociocultural experiences were indirectly associated with lifetime indoor tanning use and intentions to tan as mediated through tan surveillance and tan dissatisfaction. Findings suggest the need for targeting body image constructs as mechanisms of behavior change in indoor tanning behavioral interventions.

  2. Computational model for behavior shaping as an adaptive health intervention strategy.

    PubMed

    Berardi, Vincent; Carretero-González, Ricardo; Klepeis, Neil E; Ghanipoor Machiani, Sahar; Jahangiri, Arash; Bellettiere, John; Hovell, Melbourne

    2018-03-01

    Adaptive behavioral interventions that automatically adjust in real time to participants' changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to address shortcomings associated with traditional behavioral interventions, such as the reliance on imprecise intervention procedures and limited or short-lived effects. However, the adaptation strategies of just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, and increasing the theoretical fidelity of a trial has been shown to increase effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. A computational model of behavior dynamics and operant conditioning was modified to incorporate the construct of behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model for a range of parameters in order to identify the behavior-shaping features that optimally generated target behavior. Narrowing the range of reinforced behaviors continuously in time led to better outcomes than a discrete narrowing of the reinforcement window, and rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. The computational shaping model represents an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that model findings can be validated.
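
    A toy illustration of the adaptation strategies compared: the range of behaviors that earn reinforcement (the "window") is narrowed toward the target either continuously or in discrete steps over the course of the intervention, and only emissions inside the window are reinforced. The schedules and update rule below are illustrative assumptions, not the published computational model.

        import numpy as np

        def window_width(t, total_steps, w_start=10.0, w_end=1.0, mode="continuous"):
            """Width of the reinforced-behavior window at step t."""
            if mode == "continuous":
                # Rapid narrowing early, moderate narrowing later (exponential decay).
                rate = np.log(w_start / w_end) / total_steps
                return w_start * np.exp(-rate * t)
            # Discrete narrowing: width drops in a few fixed steps.
            stage = int(3 * t / total_steps)
            return [w_start, 0.5 * w_start, w_end][min(stage, 2)]

        target, total = 0.0, 100
        for mode in ("continuous", "discrete"):
            behavior = 10.0                            # start far from the target behavior
            rng = np.random.default_rng(3)
            for t in range(total):
                w = window_width(t, total, mode=mode)
                emitted = behavior + rng.normal(0, 1.0)
                if abs(emitted - target) <= w:         # reinforce only behavior inside the window
                    behavior += 0.2 * (target - behavior)
            print(f"{mode:11s}: final distance from target = {abs(behavior - target):.2f}")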

  3. Binocular combination of luminance profiles

    PubMed Central

    Ding, Jian; Levi, Dennis M.

    2017-01-01

    We develop and test a new two-dimensional model for binocular combination of the two eyes' luminance profiles. For first-order stimuli, the model assumes that one eye's luminance profile first goes through a luminance compressor, receives gain-control and gain-enhancement from the other eye, and then linearly combines the other eye's output profile. For second-order stimuli, rectification is added in the signal path of the model before the binocular combination site. Both the total contrast and luminance energies, weighted sums over both the space and spatial-frequency domains, were used in the interocular gain-control, while only the total contrast energy was used in the interocular gain-enhancement. To challenge the model, we performed a binocular brightness matching experiment over a large range of background and target luminances. The target stimulus was a dichoptic disc with a sharp edge that has an increment or decrement luminance from its background. The disk's interocular luminance ratio varied from trial to trial. To refine the model we tested three luminance compressors, five nested binocular combination models (including the Ding–Sperling and the DSKL models), and examined the presence or absence of total luminance energy in the model. We found that (1) installing a luminance compressor, either a logarithmic luminance function or luminance gain-control, (2) including both contrast and luminance energies, and (3) adding interocular gain-enhancement (the DSKL model) to a combined model significantly improved its performance. The combined model provides a systematic account of binocular luminance summation over a large range of luminance input levels. It gives a unified explanation of Fechner's paradox observed on a dark background, and a winner-take-all phenomenon observed on a light background. To further test the model, we conducted two additional experiments: luminance summation of discs with asymmetric contour information (Experiment 2), similar to Levelt (1965) and binocular combination of second-order contrast-modulated gratings (Experiment 3). We used the model obtained in Experiment 1 to predict the results of Experiments 2 and 3 and the results of our previous studies. Model simulations further refined the contrast space weight and contrast sensitivity functions that are installed in the model, and provide a reasonable account for rebalancing of imbalanced binocular vision by reducing the mean luminance in the dominant eye. PMID:29098293

  4. Ultrafast Kα x-ray Thomson scattering from shock compressed lithium hydride

    DOE PAGES

    Kritcher, A. L.; Neumayer, P.; Castor, J.; ...

    2009-04-13

    Spectrally and temporally resolved x-ray Thomson scattering using ultrafast Ti Kα x rays has provided experimental validation for modeling of the compression and heating of shocked matter. The coalescence of two shocks launched into a solid density LiH target by a shaped 6 ns heater beam was observed from rapid heating to temperatures of 2.2 eV, enabling tests of shock timing models. Here, the temperature evolution of the target at various times during shock progression was characterized from the intensity of the elastic scattering component. The observation of scattering from plasmons, electron plasma oscillations, at shock coalescence indicates a transition to a dense metallic plasma state in LiH. From the frequency shift of the measured plasmon feature the electron density was directly determined with high accuracy, providing a material compression of a factor of 3 times solid density. The quality of data achieved in these experiments demonstrates the capability for single shot dynamic characterization of dense shock compressed matter. Here, the conditions probed in this experiment are relevant for the study of the physics of planetary formation and to characterize inertial confinement fusion targets for experiments such as on the National Ignition Facility, Lawrence Livermore National Laboratory.

  5. GITR Simulation of Helium Exposed Tungsten Erosion and Redistribution in PISCES-A

    NASA Astrophysics Data System (ADS)

    Younkin, T. R.; Green, D. L.; Doerner, R. P.; Nishijima, D.; Drobny, J.; Canik, J. M.; Wirth, B. D.

    2017-10-01

    The extreme heat, charged particle, and neutron flux/fluence to plasma-facing materials in magnetically confined fusion devices has motivated research to understand, predict, and mitigate the associated detrimental effects. Of relevance to the ITER divertor is the helium interaction with the tungsten divertor, and the resulting erosion and migration of impurities. The linear plasma device PISCES-A has performed dedicated experiments for high (4x10^22 m^-2 s^-1) and low (4x10^21 m^-2 s^-1) flux, 250 eV He exposed tungsten targets to assess the net and gross erosion of tungsten and volumetric transport. The temperature of the target was held between 400 and 600 degrees C. We present results of the erosion / migration / re-deposition of W during the experiment from the GITR (Global Impurity Transport) code coupled to materials response models. In particular, the modeled and experimental W I emission spectroscopy data for the 429.4 nm wavelength and net erosion through target and collector mass difference measurements are compared. Overall, the predictions are in good agreement with experiments. This material is supported by the US DOE, Office of Science, Office of Fusion Energy Sciences and Office of Advanced Scientific Computing Research through the SciDAC program on Plasma-Surface Interactions.

  6. Mathematical modeling of vesicle drug delivery systems 2: targeted vesicle interactions with cells, tumors, and the body.

    PubMed

    Ying, Chong T; Wang, Juntian; Lamm, Robert J; Kamei, Daniel T

    2013-02-01

    Vesicles have been studied for several years in their ability to deliver drugs. Mathematical models have much potential in reducing time and resources required to engineer optimal vesicles, and this review article summarizes these models that aid in understanding the ability of targeted vesicles to bind and internalize into cancer cells, diffuse into tumors, and distribute in the body. With regard to binding and internalization, radiolabeling and surface plasmon resonance experiments can be performed to determine optimal vesicle size and the number and type of ligands conjugated. Binding and internalization properties are also inputs into a mathematical model of vesicle diffusion into tumor spheroids, which highlights the importance of the vesicle diffusion coefficient and the binding affinity of the targeting ligand. Biodistribution of vesicles in the body, along with their half-life, can be predicted with compartmental models for pharmacokinetics that include the effect of targeting ligands, and these predictions can be used in conjunction with in vivo models to aid in the design of drug carriers. Mathematical models can prove to be very useful in drug carrier design, and our hope is that this review will encourage more investigators to combine modeling with quantitative experimentation in the field of vesicle-based drug delivery.

  7. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in computer vision. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centers. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease the computational cost. The threshold is also a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets in a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
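
    The similarity measure mentioned above, the Bhattacharyya coefficient between a target-model histogram and a candidate histogram, can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the patch sizes and bin count are placeholders.

        import numpy as np

        def color_histogram(patch, bins=16):
            """Normalized joint color histogram over a patch (toy version)."""
            hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
            return hist.ravel() / hist.sum()

        def bhattacharyya_distance(p, q):
            """Distance derived from the Bhattacharyya coefficient rho = sum(sqrt(p * q))."""
            rho = np.sum(np.sqrt(p * q))
            return np.sqrt(max(0.0, 1.0 - rho))

        # Toy usage: compare a reference target patch with a noisy candidate patch.
        rng = np.random.default_rng(0)
        target_patch = rng.integers(0, 256, size=(32, 32, 3))
        candidate_patch = np.clip(target_patch + rng.normal(0, 10, target_patch.shape), 0, 255)
        d = bhattacharyya_distance(color_histogram(target_patch), color_histogram(candidate_patch))
        print(f"Bhattacharyya distance: {d:.3f}")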

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Christopher Matthew

    The proton form factors provide information on the fundamental properties of the proton and provide a test for models based on QCD. In 1998 at Jefferson Lab (JLAB) in Newport News, VA, experiment E93026 measured the inclusive e-p scattering cross section from a polarized ammonia (15NH3) target at a four-momentum transfer squared of Q^2 = 0.5 (GeV/c)^2. Longitudinally polarized electrons were scattered from the polarized target and the scattered electron was detected. The data have been analyzed to obtain the asymmetry for electrons scattered elastically from hydrogen in 15NH3. The asymmetry, A_p, has been used to determine the proton elastic form factor G_Ep. The result is consistent with the dipole model and data from previous experiments. However, due to the choice of kinematics, the uncertainty in the measurement is large.

  9. Dissecting and Targeting Latent Metastasis

    DTIC Science & Technology

    2014-09-01

    metastasis of breast cancer (LMBC). These cells retain the potential to form overt metastasis for years. Targeting LMBC with new drugs offers an...cancer cell extravasation through the BBB in experimental models and predict brain metastasis in the clinic (16). Once inside the brain parenchyma... drugs that perturb gap junction activity (Chen et al submitted for publication). In our experiments, both drugs as single agents were effective

  10. Vessel Noise Affects Beaked Whale Behavior: Results of a Dedicated Acoustic Response Study

    DTIC Science & Technology

    2012-08-01

    the analysis. Gaussian Models Shapiro-Wilk test (Normality) Breusch-Pagan test (Heteroscedasticity) Durbin-Watson test (Independence) Foraging duration...noise) may disrupt behavior. An experiment involving the exposure of target whale groups to intense vessel-generated noise tested how these exposures influenced the foraging behavior of Blainville's beaked

  11. Attention capture by abrupt onsets: re-visiting the priority tag model.

    PubMed

    Sunny, Meera M; von Mühlenen, Adrian

    2013-01-01

    Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3) did participants start to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.

  12. Chemical combination effects predict connectivity in biological systems

    PubMed Central

    Lehár, Joseph; Zimmermann, Grant R; Krueger, Andrew S; Molnar, Raymond A; Ledell, Jebediah T; Heilbut, Adrian M; Short, Glenn F; Giusti, Leanne C; Nolan, Garry P; Magid, Omar A; Lee, Margaret S; Borisy, Alexis A; Stockwell, Brent R; Keith, Curtis T

    2007-01-01

    Efforts to construct therapeutically useful models of biological systems require large and diverse sets of data on functional connections between their components. Here we show that cellular responses to combinations of chemicals reveal how their biological targets are connected. Simulations of pathways with pairs of inhibitors at varying doses predict distinct response surface shapes that are reproduced in a yeast experiment, with further support from a larger screen using human tumour cells. The response morphology yields detailed connectivity constraints between nearby targets, and synergy profiles across many combinations show relatedness between targets in the whole network. Constraints from chemical combinations complement genetic studies, because they probe different cellular components and can be applied to disease models that are not amenable to mutagenesis. Chemical probes also offer increased flexibility, as they can be continuously dosed, temporally controlled, and readily combined. After extending this initial study to cover a wider range of combination effects and pathway topologies, chemical combinations may be used to refine network models or to identify novel targets. This response surface methodology may even apply to non-biological systems where responses to targeted perturbations can be measured. PMID:17332758

  13. Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model

    NASA Astrophysics Data System (ADS)

    Arnow, Thomas L.; Geisler, Wilson S.

    1996-04-01

    A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sine-wave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.

  14. Progress in hohlraum physics for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Moody, J. D.; Callahan, D. A.; Hinkel, D. E.; Amendt, P. A.; Baker, K. L.; Bradley, D.; Celliers, P. M.; Dewald, E. L.; Divol, L.; Döppner, T.; Eder, D. C.; Edwards, M. J.; Jones, O.; Haan, S. W.; Ho, D.; Hopkins, L. B.; Izumi, N.; Kalantar, D.; Kauffman, R. L.; Kilkenny, J. D.; Landen, O.; Lasinski, B.; LePape, S.; Ma, T.; MacGowan, B. J.; MacLaren, S. A.; Mackinnon, A. J.; Meeker, D.; Meezan, N.; Michel, P.; Milovich, J. L.; Munro, D.; Pak, A. E.; Rosen, M.; Ralph, J.; Robey, H. F.; Ross, J. S.; Schneider, M. B.; Strozzi, D.; Storm, E.; Thomas, C.; Town, R. P. J.; Widmann, K. L.; Kline, J.; Kyrala, G.; Nikroo, A.; Boehly, T.; Moore, A. S.; Glenzer, S. H.

    2014-05-01

    Advances in hohlraums for inertial confinement fusion at the National Ignition Facility (NIF) were made this past year in hohlraum efficiency, dynamic shape control, and hot electron and x-ray preheat control. Recent experiments are exploring hohlraum behavior over a large landscape of parameters by changing the hohlraum shape, gas-fill, and laser pulse. Radiation hydrodynamic modeling, which uses measured backscatter, shows that gas-filled hohlraums utilize between 60% and 75% of the laser power to match the measured bang-time, whereas near-vacuum hohlraums utilize 98%. Experiments seem to be pointing to deficiencies in the hohlraum (instead of capsule) modeling to explain most of the inefficiency in gas-filled targets. Experiments have begun quantifying the Cross Beam Energy Transfer (CBET) rate at several points in time for hohlraum experiments that utilize CBET for implosion symmetry. These measurements will allow better control of the dynamic implosion symmetry for these targets. New techniques are being developed to measure the hot electron energy and energy spectra generated at both early and late time. Rugby hohlraums offer a target which requires little to no CBET and may be less vulnerable to undesirable dynamic symmetry "swings." A method for detecting the effect of the energetic electrons on the fuel offers a direct measure of the hot electron effects as well as a means to test energetic electron mitigation methods. At higher hohlraum radiation temperatures (including near vacuum hohlraums), the increased hard x-rays (1.8-4 keV) may pose an x-ray preheat problem. Future experiments will explore controlling these x-rays with advanced wall materials.

  15. Validation of GNSS Multipath Model for Space Proximity Operations Using the Hubble Servicing Mission 4 Experiment

    NASA Technical Reports Server (NTRS)

    Ashman, B. W.; Veldman, J. L.; Axelrad, P.; Garrison, J. L.; Winternitz, L. B.

    2016-01-01

    In the rendezvous and docking of spacecraft, GNSS signals can reflect off the target vehicle and cause large errors in the chaser vehicle receiver at ranges below a few hundred meters. It has been proposed that the additional ray paths, or multipath, be used as a source of information about the state of the target relative to the receiver. With Hubble Servicing Mission 4 as a case study, electromagnetic ray tracing has been used to construct a model of reflected signals from known geometry. Oscillations in the prompt correlator power due to multipath, known as multipath fading, are studied as a means of model validation. Agreement between the measured and simulated multipath fading serves to confirm the presence of signals reflected off the target spacecraft that might be used for relative navigation.

  16. Validation of GNSS Multipath Model for Space Proximity Operations Using the Hubble Servicing Mission 4 Experiment

    NASA Technical Reports Server (NTRS)

    Ashman, Ben; Veldman, Jeanette; Axelrad, Penina; Garrison, James; Winternitz, Luke

    2016-01-01

    In the rendezvous and docking of spacecraft, GNSS signals can reflect off the target vehicle and cause prohibitively large errors in the chaser vehicle receiver at ranges below 200 meters. It has been proposed that the additional ray paths, or multipath, be used as a source of information about the state of the target relative to the receiver. With Hubble Servicing Mission 4 as a case study, electromagnetic ray tracing has been used to construct a model of reflected signals from known geometry. Oscillations in the prompt correlator power due to multipath, known as multipath fading, are studied as a means of model validation. Agreement between the measured and simulated multipath fading serves to confirm the presence of signals reflected off the target spacecraft that might be used for relative navigation.
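
    The fading signature described above can be illustrated with a one-reflection phasor sum on the prompt correlator. The sketch below is a minimal stand-in (assumed GPS L1 carrier, illustrative amplitudes and excess-path values, code-correlation attenuation ignored), not the ray-traced model used in the study.

```python
import numpy as np

C = 299792458.0          # speed of light, m/s
L1 = 1575.42e6           # GPS L1 carrier frequency, Hz (assumed signal)

def prompt_power(direct_amp, reflected_amp, excess_path_m):
    """Relative prompt-correlator power with one specular reflection.

    The reflected ray arrives with an extra carrier phase of
    2*pi*excess_path/lambda; direct and reflected contributions add as phasors.
    """
    wavelength = C / L1
    phase = 2 * np.pi * excess_path_m / wavelength
    composite = direct_amp + reflected_amp * np.exp(1j * phase)
    return np.abs(composite) ** 2

# As the relative geometry changes, the excess path sweeps through carrier
# wavelengths and the correlator power oscillates: the "fading" pattern that
# is compared against the simulated multipath model.
for excess in np.linspace(0.0, 0.4, 9):          # metres of excess path, illustrative
    print(f"excess path {excess:4.2f} m  relative power {prompt_power(1.0, 0.3, excess):.3f}")
```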

  17. Posture Affects How Robots and Infants Map Words to Objects

    PubMed Central

    Morse, Anthony F.; Benitez, Viridian L.; Belpaeme, Tony; Cangelosi, Angelo; Smith, Linda B.

    2015-01-01

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1-3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge, not from separating bodily information from the word-object mapping as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space. PMID:25785834

  18. A Physics Exploratory Experiment on Plasma Liner Formation

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ronald C.; Siemon, Richard E.; Turchi, Peter

    2002-01-01

    Momentum flux for imploding a target plasma in magnetized target fusion (MTF) may be delivered by an array of plasma guns launching plasma jets that would merge to form an imploding plasma shell (liner). In this paper, we examine what would be a worthwhile experiment to do in order to explore the dynamics of merging plasma jets to form a plasma liner as a first step in establishing an experimental database for plasma-jet-driven magnetized target fusion (PJETS-MTF). Using past experience in fusion energy research as a model, we envisage a four-phase program to advance the art of PJETS-MTF to fusion breakeven (Q of approximately 1). The experiment described in this paper, PLX (Plasma Liner Physics Exploratory Experiment), serves as Phase I of this four-phase program. The logic underlying the selection of the experimental parameters is presented. The experiment consists of using twelve plasma guns arranged in a circle, launching plasma jets towards the center of a vacuum chamber. The velocity of the plasma jets chosen is 200 km/s, and each jet is to carry a mass of 0.2-0.4 mg. A candidate plasma accelerator for launching these jets consists of a coaxial plasma gun of the Marshall type.
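
    A quick back-of-envelope check of the quoted jet parameters (0.2-0.4 mg at 200 km/s) gives the per-jet kinetic energy and momentum; the snippet below is purely arithmetic on the numbers stated in the abstract.

```python
# Back-of-envelope kinetic energy and momentum for the quoted PLX jet parameters.
n_jets = 12
v = 200e3                 # jet velocity, m/s
for m_mg in (0.2, 0.4):   # jet mass, mg
    m = m_mg * 1e-6       # kg
    ke = 0.5 * m * v**2   # joules per jet
    p = m * v             # kg m/s per jet
    print(f"m = {m_mg} mg: KE/jet = {ke/1e3:.1f} kJ, total KE = {n_jets*ke/1e3:.0f} kJ, "
          f"momentum/jet = {p:.3f} kg m/s")
```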

  19. Using Data Independent Acquisition (DIA) to Model High-responding Peptides for Targeted Proteomics Experiments*

    PubMed Central

    Searle, Brian C.; Egertson, Jarrett D.; Bollinger, James G.; Stergachis, Andrew B.; MacCoss, Michael J.

    2015-01-01

    Targeted mass spectrometry is an essential tool for detecting quantitative changes in low-abundance proteins throughout the proteome. Although selected reaction monitoring (SRM) is the preferred method for quantifying peptides in complex samples, the process of designing SRM assays is laborious. Peptides have widely varying signal responses dictated by sequence-specific physicochemical properties; one major challenge is in selecting representative peptides to target as a proxy for protein abundance. Here we present PREGO, a software tool that predicts high-responding peptides for SRM experiments. PREGO predicts peptide responses with an artificial neural network trained using 11 minimally redundant, maximally relevant properties. Crucial to its success, PREGO is trained using fragment ion intensities of equimolar synthetic peptides extracted from data independent acquisition experiments. Because both approaches make quantitative measurements from integrated fragment ion chromatograms with similar instrumentation, relative peptide responses from data independent acquisition experiments are a suitable substitute for SRM responses. Using an SRM experiment containing 12,973 peptides from 724 synthetic proteins, PREGO exhibits a 40–85% improvement over previously published approaches at selecting high-responding peptides. These results also represent a dramatic improvement over the rules-based peptide selection approaches commonly used in the literature. PMID:26100116
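
    As a generic stand-in for the kind of regression PREGO performs (an artificial neural network over 11 peptide properties), the sketch below trains scikit-learn's MLPRegressor on synthetic placeholder features and then ranks candidate peptides; the features, data, and network size are assumptions, not the published PREGO configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder training set: 11 physicochemical features per peptide and a
# measured relative response (both synthetic here; PREGO trains on equimolar
# synthetic-peptide intensities extracted from DIA runs).
X = rng.normal(size=(2000, 11))
y = X @ rng.normal(size=11) + 0.3 * rng.normal(size=2000)   # synthetic response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))

# Rank candidate peptides of a protein by predicted response, highest first.
candidates = rng.normal(size=(25, 11))
ranking = np.argsort(model.predict(candidates))[::-1]
print("top 5 candidate indices:", ranking[:5])
```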

  20. Scattering Models and Basic Experiments in the Microwave Regime

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Blanchard, A. J. (Principal Investigator)

    1985-01-01

    The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which account for near-field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) to compare theoretical models with measurements or numerical simulations.

  1. Two-dimensional simulation of high-power laser-surface interaction

    NASA Astrophysics Data System (ADS)

    Goldman, S. Robert; Wilke, Mark D.; Green, Ray E.; Busch, George E.; Johnson, Randall P.

    1998-09-01

    For laser intensities in the range of 10^8-10^9 W/cm^2, and pulse lengths of order 10 microseconds or longer, we have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense material aspects of the laser-matter interaction. The unique aspect of our treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. We will present an analysis of some relatively well diagnosed experiments which have been useful in developing our modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence our simulational technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity and pulse-width range.

  2. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties, and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameter measurements, allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  3. Stress priming in picture naming: an SOA study.

    PubMed

    Schiller, Niels O; Fikkert, Paula; Levelt, Clara C

    2004-01-01

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel-MOtor vs. koSTUUM-MOtor; capital letters indicate stressed syllables in prime-target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures nor frequency of occurrence or voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.

  4. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (a reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.

  5. Application of QSAR and shape pharmacophore modeling approaches for targeted chemical library design.

    PubMed

    Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander

    2011-01-01

    Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization could be achieved through rational selection of reagents used in combinatorial library synthesis. However, with a rapid advent of parallel synthesis methods and availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand and structure based) for virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches for virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to achieve targeted libraries enriched with experimentally confirmed hit compounds.
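
    A minimal QSAR-for-virtual-screening loop of the sort reviewed here might look like the sketch below: fit a regression model on precomputed descriptors and measured activities, then score a vendor library and keep the top-ranked compounds. The random-forest choice, descriptor count, and data are placeholders, not the chapter's specific methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Assume descriptors have already been computed for the SAR training set
# (rows = compounds, columns = descriptors) together with measured activities.
X_train = rng.normal(size=(500, 20))                                # placeholder descriptors
y_train = X_train[:, :3].sum(axis=1) + 0.2 * rng.normal(size=500)   # placeholder activities

qsar = RandomForestRegressor(n_estimators=300, random_state=0)
print("5-fold CV R^2:", round(cross_val_score(qsar, X_train, y_train, cv=5).mean(), 3))
qsar.fit(X_train, y_train)

# Virtual screening: score a vendor library and keep the top fraction
# as the targeted-library candidates.
library = rng.normal(size=(10000, 20))
scores = qsar.predict(library)
selected = np.argsort(scores)[::-1][:200]
print("selected", len(selected), "compounds; best predicted activity:",
      round(float(scores[selected[0]]), 2))
```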

  6. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    NASA Astrophysics Data System (ADS)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.
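
    The free-streaming picture can be illustrated with a standard estimate: particles released at t = 0 that reach a target a distance L away at time t must have parallel speed v = L/t, so the target flux is Gamma(t) = n0 f(L/t) L/t^2 for a 1-D velocity distribution f. The sketch below evaluates this numerically with assumed parameters (deuterium ions at 1 keV, L = 20 m); it illustrates the character of the FSM rather than reproducing the paper's expressions.

```python
import numpy as np

def target_flux(t, L=20.0, n0=1.0, T_eV=1000.0, m_kg=3.34e-27):
    """Particle flux at distance L from a collisionless bunch released at t=0.

    Particles arriving at time t have parallel speed v = L/t, so
    flux(t) = n0 * f(L/t) * L / t**2 for a 1-D Maxwellian f(v).
    (Illustrative free-streaming estimate; deuterium mass, 1 keV ions.)
    """
    vt = np.sqrt(1.602e-19 * T_eV / m_kg)          # thermal speed, m/s
    v = L / t
    f = np.exp(-v**2 / (2 * vt**2)) / (np.sqrt(2 * np.pi) * vt)
    return n0 * f * L / t**2

for t in np.linspace(2e-5, 5e-4, 6):               # seconds after the ELM crash
    print(f"t = {t*1e6:6.1f} us   flux ~ {target_flux(t):.3e} (arb. units)")
```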

  7. ICF Implosions, Space-Charge Electric Fields, and Their Impact on Mix and Compression

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Chacon, Luis; Simakov, Andrei

    2013-10-01

    The single-fluid, quasi-neutral, radiation hydrodynamics codes, used to design the NIF targets, predict thermonuclear ignition for the conditions that have been achieved experimentally. A logical conclusion is that the physics model used in these codes is missing one, or more, key phenomena. Two key model-experiment inconsistencies on NIF are: 1) a lower implosion velocity than predicted by the design codes, and 2) transport of pusher material deep into the hot spot. We hypothesize that both of these model-experiment inconsistencies may be a result of a large, space-charge, electric field residing on the distinct interfaces in a NIF target. Large space-charge fields have been experimentally observed in Omega experiments. Given our hypothesis, this presentation will: 1) Develop a more complete physics picture of initiation, sustainment, and dissipation of a current-driven plasma sheath / double-layer at the Fuel-Pusher interface of an ablating plastic shell implosion on Omega, 2) Characterize the mix that can result from a double-layer field at the Fuel-Pusher interface, prior to the onset of fluid instabilities, and 3) Quantify the impact of the double-layer induced surface tension at the Fuel-Pusher interface on the peak observed implosion velocity in Omega.

  8. Incidental and context-responsive activation of structure- and function-based action features during object identification

    PubMed Central

    Lee, Chia-lin; Middleton, Erica; Mirman, Daniel; Kalénine, Solène; Buxbaum, Laurel J.

    2012-01-01

    Previous studies suggest that action representations are activated during object processing, even when task-irrelevant. In addition, there is evidence that lexical-semantic context may affect such activation during object processing. Finally, prior work from our laboratory and others indicates that function-based (“use”) and structure-based (“move”) action subtypes may differ in their activation characteristics. Most studies assessing such effects, however, have required manual object-relevant motor responses, thereby plausibly influencing the activation of action representations. The present work utilizes eyetracking and a Visual World Paradigm task without object-relevant actions to assess the time course of activation of action representations, as well as their responsiveness to lexical-semantic context. In two experiments, participants heard a target word and selected its referent from an array of four objects. Gaze fixations on non-target objects signal activation of features shared between targets and non-targets. The experiments assessed activation of structure-based (Experiment 1) or function-based (Experiment 2) distractors, using neutral sentences (“S/he saw the …”) or sentences with a relevant action verb (Experiment 1: “S/he picked up the……”; Experiment 2: “S/he used the….”). We observed task-irrelevant activations of action information in both experiments. In neutral contexts, structure-based activation was relatively faster-rising but more transient than function-based activation. Additionally, action verb contexts reliably modified patterns of activation in both Experiments. These data provide fine-grained information about the dynamics of activation of function-based and structure-based actions in neutral and action-relevant contexts, in support of the “Two Action System” model of object and action processing (e.g., Buxbaum & Kalénine, 2010). PMID:22390294

  9. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  10. Toward modular biological models: defining analog modules based on referent physiological mechanisms

    PubMed Central

    2014-01-01

    Background: Currently, most biomedical models exist in isolation. It is often difficult to reuse or integrate models or their components, in part because they are not modular. Modular components allow the modeler to think more deeply about the role of the model and to more completely address a modeling project's requirements. In particular, modularity facilitates component reuse and model integration for models with different use cases, including the ability to exchange modules during or between simulations. The heterogeneous nature of biology and vast range of wet-lab experimental platforms call for modular models designed to satisfy a variety of use cases. We argue that software analogs of biological mechanisms are reasonable candidates for modularization. Biomimetic software mechanisms comprised of physiomimetic mechanism modules offer benefits that are unique or especially important to multi-scale, biomedical modeling and simulation. Results: We present a general, scientific method of modularizing mechanisms into reusable software components that we call physiomimetic mechanism modules (PMMs). PMMs utilize parametric containers that partition and expose state information into physiologically meaningful groupings. To demonstrate, we modularize four pharmacodynamic response mechanisms adapted from an in silico liver (ISL). We verified the modularization process by showing that drug clearance results from in silico experiments are identical before and after modularization. The modularized ISL achieves validation targets drawn from propranolol outflow profile data. In addition, an in silico hepatocyte culture (ISHC) is created. The ISHC uses the same PMMs and required no refactoring. The ISHC achieves validation targets drawn from propranolol intrinsic clearance data exhibiting considerable between-lab variability. The data used as validation targets for PMMs originate from both in vitro and in vivo experiments exhibiting large fold differences in time scale. Conclusions: This report demonstrates the feasibility of PMMs and their usefulness across multiple model use cases. The pharmacodynamic response module developed here is robust to changes in model context and flexible in its ability to achieve validation targets in the face of considerable experimental uncertainty. Adopting the modularization methods presented here is expected to facilitate model reuse and integration, thereby accelerating the pace of biomedical research. PMID:25123169

  11. Toward modular biological models: defining analog modules based on referent physiological mechanisms.

    PubMed

    Petersen, Brenden K; Ropella, Glen E P; Hunt, C Anthony

    2014-08-16

    Currently, most biomedical models exist in isolation. It is often difficult to reuse or integrate models or their components, in part because they are not modular. Modular components allow the modeler to think more deeply about the role of the model and to more completely address a modeling project's requirements. In particular, modularity facilitates component reuse and model integration for models with different use cases, including the ability to exchange modules during or between simulations. The heterogeneous nature of biology and vast range of wet-lab experimental platforms call for modular models designed to satisfy a variety of use cases. We argue that software analogs of biological mechanisms are reasonable candidates for modularization. Biomimetic software mechanisms comprised of physiomimetic mechanism modules offer benefits that are unique or especially important to multi-scale, biomedical modeling and simulation. We present a general, scientific method of modularizing mechanisms into reusable software components that we call physiomimetic mechanism modules (PMMs). PMMs utilize parametric containers that partition and expose state information into physiologically meaningful groupings. To demonstrate, we modularize four pharmacodynamic response mechanisms adapted from an in silico liver (ISL). We verified the modularization process by showing that drug clearance results from in silico experiments are identical before and after modularization. The modularized ISL achieves validation targets drawn from propranolol outflow profile data. In addition, an in silico hepatocyte culture (ISHC) is created. The ISHC uses the same PMMs and required no refactoring. The ISHC achieves validation targets drawn from propranolol intrinsic clearance data exhibiting considerable between-lab variability. The data used as validation targets for PMMs originate from both in vitro and in vivo experiments exhibiting large fold differences in time scale. This report demonstrates the feasibility of PMMs and their usefulness across multiple model use cases. The pharmacodynamic response module developed here is robust to changes in model context and flexible in its ability to achieve validation targets in the face of considerable experimental uncertainty. Adopting the modularization methods presented here is expected to facilitate model reuse and integration, thereby accelerating the pace of biomedical research.
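
    The notion of a mechanism packaged behind a parametric container can be sketched in a few lines. The toy module below uses hypothetical names and made-up clearance dynamics; it is only meant to show how the same component could be dropped into an ISL-like and an ISHC-like context without refactoring, not to reproduce the PMMs described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ClearanceParameters:
    """Parametric container: exposes state in physiologically meaningful groups."""
    intrinsic_clearance: float      # per-step clearance fraction (illustrative)
    binding_fraction: float         # fraction of drug bound and unavailable

class ClearanceModule:
    """A reusable 'mechanism module': same code, different model contexts."""
    def __init__(self, params: ClearanceParameters):
        self.params = params

    def step(self, amount: float) -> float:
        """Return the drug amount remaining after one simulation step."""
        free = amount * (1.0 - self.params.binding_fraction)
        cleared = free * self.params.intrinsic_clearance
        return amount - cleared

def run(module: ClearanceModule, dose: float, steps: int) -> float:
    amount = dose
    for _ in range(steps):
        amount = module.step(amount)
    return amount

# The same module plugged into two different "model contexts":
liver_like = ClearanceModule(ClearanceParameters(0.05, 0.2))
culture_like = ClearanceModule(ClearanceParameters(0.02, 0.5))
print("liver-like remaining:", round(run(liver_like, 100.0, 50), 2))
print("culture-like remaining:", round(run(culture_like, 100.0, 50), 2))
```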

  12. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    DOE PAGES

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  13. Muon polarization in the MEG experiment: predictions and measurements

    DOE PAGES

    Baldini, A. M.; Bao, Y.; Baracchini, E.; ...

    2016-04-22

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ+ → e+γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P_μ = -1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P_μ = -0.86 ± 0.02 (stat) +0.05/-0.06 (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ+ → e+γ search induced by the muon radiative decay: μ+ → e+ ν̄_μ ν_e γ.

  14. SPH calculations of asteroid disruptions: The role of pressure dependent failure models

    NASA Astrophysics Data System (ADS)

    Jutzi, Martin

    2015-03-01

    We present recent improvements of the modeling of the disruption of strength dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, for many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find that it is crucial that friction is taken into account and the material has a pressure dependent shear strength. Our investigations of the catastrophic disruption threshold Q_D* as a function of target properties and target sizes up to a few 100 km show that a fully damaged target modeled without friction has a Q_D* which is significantly (5-10 times) smaller than in the case where friction is included. When the effect of the energy dissipation due to compaction (pore crushing) is taken into account as well, the targets become even stronger (Q_D* is increased by a factor of 2-3). On the other hand, cohesion is found to have a negligible effect at large scales and is only important at scales ≲ 1 km. Our results show the relative effects of strength, friction and porosity on the outcome of collisions among small (≲ 1000 km) bodies. These results will be used in a future study to improve existing scaling laws for the outcome of collisions (e.g. Leinhardt and Stewart, 2012).
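
    The friction effect described here amounts to giving damaged material a pressure-dependent shear strength. A minimal Drucker-Prager-like stand-in is sketched below with placeholder coefficients; it illustrates the idea rather than the specific strength model of the study.

```python
import numpy as np

def shear_strength(pressure, cohesion=1.0e4, friction_coeff=0.8, y_max=1.5e9):
    """Pressure-dependent shear strength of fully damaged (granular) material.

    Y(P) = min(cohesion + mu * P, y_max); without friction (mu = 0) the damaged
    material is much weaker, which lowers the disruption threshold Q_D*.
    All values in Pa and purely illustrative.
    """
    return np.minimum(cohesion + friction_coeff * pressure, y_max)

pressures = np.array([1e5, 1e7, 1e9, 5e9])      # Pa
print("with friction:   ", shear_strength(pressures))
print("without friction:", shear_strength(pressures, friction_coeff=0.0))
```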

  15. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    PubMed

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role to enhance the quality of models, particularly for FM targets, by the new pipelines. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements on domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  16. GEANT4-based full simulation of the PADME experiment at the DAΦNE BTF

    NASA Astrophysics Data System (ADS)

    Leonardi, E.; Kozhuharov, V.; Raggi, M.; Valente, P.

    2017-10-01

    A possible solution to the dark matter problem postulates that dark particles can interact with Standard Model particles only through a new force mediated by a “portal”. If the new force has a U(1) gauge structure, the “portal” is a massive photon-like vector particle, called the dark photon or A′. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron-on-fixed-target annihilations decaying to dark matter (e+e- → γA′) by measuring the final state missing mass. The experiment will be composed of a thin active diamond target where a 550 MeV positron beam will impinge to produce e+e- annihilation events. The surviving beam will be deflected with a magnet while the photons produced in the annihilation will be measured by a calorimeter composed of BGO crystals. To reject the background from Bremsstrahlung gamma production, a set of segmented plastic scintillator vetoes will be used to detect positrons exiting the target with an energy lower than that of the beam, while a fast small angle calorimeter will be used to reject the e+e- → γγ(γ) background. To optimize the experimental layout in terms of signal acceptance and background rejection, the full layout of the experiment was modelled with the GEANT4 simulation package. In this paper we will describe the details of the simulation and report on the results obtained with the software.
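
    The missing-mass technique itself is simple four-vector arithmetic: M_miss^2 = (p_beam + p_target - p_gamma)^2. The sketch below evaluates it for a 550 MeV positron on an electron at rest and an assumed measured photon; the photon kinematics are illustrative values, not PADME data.

```python
import numpy as np

M_E = 0.000511  # electron mass, GeV

def four_vector(E, px, py, pz):
    return np.array([E, px, py, pz])

def mass2(p):
    """Invariant mass squared with metric (+,-,-,-)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

# Beam positron (550 MeV along z) annihilating on an electron at rest.
E_beam = 0.550
p_beam = four_vector(E_beam, 0.0, 0.0, np.sqrt(E_beam**2 - M_E**2))
p_target = four_vector(M_E, 0.0, 0.0, 0.0)

# A measured photon (energy and angle would come from the BGO calorimeter).
E_g, theta = 0.200, 0.05                     # GeV, rad; illustrative values
p_gamma = four_vector(E_g, E_g*np.sin(theta), 0.0, E_g*np.cos(theta))

m_miss2 = mass2(p_beam + p_target - p_gamma)
print(f"missing mass^2 = {m_miss2*1e6:.1f} MeV^2 -> "
      f"M_miss = {np.sqrt(max(m_miss2, 0.0))*1e3:.1f} MeV "
      "(a dark-photon signal would peak at its mass)")
```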

  17. Implied Dynamics Biases the Visual Perception of Velocity

    PubMed Central

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand on the anecdotal report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back and forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiments 2 and 3) or rectilinear (Experiments 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold their gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiments 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform. PMID:24667578

  18. A novel patient-derived xenograft model for claudin-low triple-negative breast cancer.

    PubMed

    Matossian, Margarite D; Burks, Hope E; Bowles, Annie C; Elliott, Steven; Hoang, Van T; Sabol, Rachel A; Pashos, Nicholas C; O'Donnell, Benjamen; Miller, Kristin S; Wahba, Bahia M; Bunnell, Bruce A; Moroz, Krzysztof; Zea, Arnold H; Jones, Steven D; Ochoa, Augusto C; Al-Khami, Amir A; Hossain, Fokhrul; Riker, Adam I; Rhodes, Lyndsay V; Martin, Elizabeth C; Miele, Lucio; Burow, Matthew E; Collins-Burow, Bridgette M

    2018-06-01

    Triple-negative breast cancer (TNBC) subtypes are clinically aggressive and cannot be treated with the targeted therapeutics commonly used in other breast cancer subtypes. The claudin-low (CL) molecular subtype of TNBC has high rates of metastases, chemoresistance and recurrence. There exists an urgent need to identify novel therapeutic targets in TNBC; however, existing models utilized in target discovery research are limited. Patient-derived xenograft (PDX) models have emerged as superior models for target discovery experiments because they recapitulate features of patient tumors that are lost in cell-line-derived xenograft methods. We utilize immunohistochemistry, qRT-PCR, and western blot to characterize the tumor architecture, cellular composition, and gene and protein expression of a new CL-TNBC PDX model (TU-BcX-2O0). We utilize tissue decellularization techniques to examine the extracellular matrix composition of TU-BcX-2O0. Our laboratory successfully established a TNBC PDX tumor, TU-BcX-2O0, which represents a CL-TNBC subtype and maintains this phenotype throughout subsequent passaging. We dissected TU-BcX-2O0 to examine aspects of this complex tumor that can be targeted by developing therapeutics, including the whole and intact breast tumor, specific cell populations within the tumor, and the extracellular matrix. Here, we characterize a claudin-low TNBC patient-derived xenograft model that can be utilized for therapeutic research studies.

  19. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values

    NASA Astrophysics Data System (ADS)

    Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun

    2017-03-01

    To further enhance small targets and suppress heavy clutter simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and almost eliminate all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which could not only remove further undesirable components but also accelerate the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments are conducted, demonstrating that the proposed model has a significant improvement over the other nine competitive methods in terms of both clutter-suppressing performance and convergence rate.
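
    The target-background separation can be illustrated with a simplified robust-PCA-style decomposition: shrink singular values to recover the low-rank background and apply a non-negative entrywise shrinkage for the sparse target. The sketch below uses the plain nuclear-norm proximal step rather than the paper's partial-sum-of-singular-values constraint, and a toy matrix instead of a real patch-image; parameter choices are assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def separate(D, lam=None, mu=None, iters=200):
    """Split a patch-image D into low-rank background B and non-negative sparse
    target T by alternating singular-value and entrywise shrinkage.

    A simplified RPCA-style sketch (not the partial-sum formulation of the paper).
    """
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(D, 2)
    B = np.zeros_like(D)
    T = np.zeros_like(D)
    for _ in range(iters):
        # Background update: singular-value shrinkage of the residual.
        U, s, Vt = np.linalg.svd(D - T, full_matrices=False)
        B = U @ np.diag(soft_threshold(s, 1.0 / mu)) @ Vt
        # Target update: entrywise shrinkage, clamped non-negative
        # (a small target is assumed brighter than its local background).
        T = np.maximum(soft_threshold(D - B, lam / mu), 0.0)
    return B, T

# Toy patch-image: smooth low-rank background plus one bright "target" pixel.
rng = np.random.default_rng(0)
u, v = rng.random((40, 1)), rng.random((1, 60))
D = u @ v
D[12, 30] += 2.0
B, T = separate(D)
print("recovered target location:", np.unravel_index(np.argmax(T), T.shape))
```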

  20. Penetration of tungsten-alloy rods into composite ceramic targets: Experiments and 2-D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Z.; Dekel, E.; Hohler, V.

    1998-07-10

    A series of terminal ballistics experiments, with scaled tungsten-alloy penetrators, was performed on composite targets consisting of ceramic tiles glued to thick steel backing plates. Tiles of silicon carbide, aluminum nitride, titanium diboride, and boron carbide were 20-80 mm thick, and the impact velocity was 1.7 km/s. 2-D numerical simulations, using the PISCES code, were performed in order to simulate these shots. It is shown that a simplified version of the Johnson-Holmquist failure model can account for the penetration depths of the rods but is not enough to capture the effect of lateral release waves on these penetrations.

  1. Determination of the Sources of Radar Scattering

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Zoughi, R.

    1984-01-01

    Fine-resolution radar backscattering measurements were proposed to determine the backscattering sources in various vegetation canopies and surface targets. The results were then used to improve the existing theoretical models of terrain scattering, and also to enhance understanding of the radar signal observed by an imaging radar over a vegetated area. Various experiments were performed on targets such as corn, milo, soybeans, grass, asphalt pavements, soil and concrete walkways. Due to the lack of available references on measurements of this type, the obtained results will be used primarily as a foundation for future experiments. The constituent backscattering characteristics of the vegetation canopies were also examined.

  2. Fiery Passion and Relentless Commitment: The Lived Experiences of African American Women Principals in Turnaround Model Schools

    ERIC Educational Resources Information Center

    Aldaco, Adrienne L. Gratten

    2016-01-01

    Chronically low performing schools in the United States have required targeted support and interventions to increase student achievement. In recent years, the school turnaround model has emerged as a swift, dramatic, comprehensive approach to implementing interventions in the lowest performing schools (Calkins, Guenther, Belfiore, & Lash,…

  3. [The modeling of the ricochet shot fired from a light weapon].

    PubMed

    Gusentsov, A O; Chuchko, V A; Kil'dyushev, E M; Tumanov, E V

    The objective of the present study was to choose the optimal method for modeling the ricochet of a bullet off a target under laboratory conditions. The study required the design and construction of an original device for modeling the rebound effect of a light-firearm shot under experimental conditions. The device was tested in the laboratory. The trials demonstrated that barriers of different weights and dimensions can be used in the device, that their positioning and fixation can be chosen according to the purpose of the experiment, and that the experimental conditions can be altered dynamically, all with due regard for the safety and security arrangements protecting the health and life of the experimenters and without compromising the statistical significance and scientific validity of the results of the experiments.

  4. [Passive ranging of infrared target using oxygen A-band and Elsasser model].

    PubMed

    Li, Jin-Hua; Wang, Zhao-Ba; Wang, Zhi

    2014-09-01

    A passive ranging method for short ranges and a single band was developed based on target radiation and the attenuation characteristic of oxygen spectral absorption. The relation between the transmittance of the oxygen A band and the range of the measured target was analyzed. The radiation strength distribution of the measured target can be obtained according to the distribution of the absorption coefficient with environmental parameters. A passive ranging mathematical model for short ranges was established using the Elsasser model with a Lorentz line shape, based on computational methods for band-average transmittance and a high-temperature gas radiation narrowband model. The range of the measured object was obtained by fitting the transmittance calculated from the test data to the theoretical model. In addition, the ranging precision was corrected for the influence of environmental parameters on oxygen absorption. A ranging experiment platform was established: the source was a 10 W blackbody, and a grating spectrometer with 17 cm(-1) resolution was used. To improve the light-collection efficiency, the input light was gathered with a 23 mm calibre telescope. The test data were processed at different ranges within 200 m. The results show that the transmittance accuracy was better than 2.18% at short range when comparing the test data with the values predicted under the same conditions.
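
    Stripped of the narrowband details, the ranging idea is an inversion of a transmittance-range relation. The sketch below uses a simple Beer-Lambert stand-in with a made-up effective absorption coefficient, whereas the actual work uses an Elsasser narrowband model with environmental corrections.

```python
import numpy as np

K_EFF = 0.004      # effective band-averaged O2 A-band absorption coefficient, 1/m (illustrative)

def transmittance(range_m, k=K_EFF):
    """Band-averaged transmittance over a horizontal path (Beer-Lambert stand-in)."""
    return np.exp(-k * range_m)

def range_from_transmittance(tau, k=K_EFF):
    """Invert the transmittance model to estimate the target range."""
    return -np.log(tau) / k

true_range = 150.0                                   # metres
tau_meas = transmittance(true_range) * (1 + 0.01 * np.random.default_rng(0).normal())
est = range_from_transmittance(tau_meas)
print(f"measured transmittance {tau_meas:.4f} -> estimated range {est:.1f} m "
      f"(true {true_range:.0f} m)")
```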

  5. Scene-Based Contextual Cueing in Pigeons

    PubMed Central

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  6. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.

  7. The forgotten artist: Why to consider intentions and interaction in a model of aesthetic experience. Comment on "Move me, astonish me... delight my eyes and brain: The Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP) and corresponding affective, evaluative, and neurophysiological correlates" by Matthew Pelowski et al.

    NASA Astrophysics Data System (ADS)

    Brattico, Elvira; Brattico, Pauli; Vuust, Peter

    2017-07-01

    In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to "specific correlates of the brain" (a similar attempt was previously made for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.

  8. Analysis of the nuclear dependence of the νμ charged current inclusive cross section with MINERvA

    NASA Astrophysics Data System (ADS)

    Ransome, Ronald

    2014-03-01

    Neutrino experiments use heavy nuclei (Fe, Pb, C) to achieve the necessary statistics. However, the use of heavy nuclei exposes these experiments to the nuclear dependence of neutrino-nucleus cross sections, which are poorly known and difficult to model. MINERvA (Main INjector ExpeRiment for ν-A), a few-GeV neutrino-nucleus scattering experiment at Fermilab, seeks to remedy the situation by directly studying the A-dependence of exclusive and inclusive channels. The MINERvA detector contains an 8 ton fully active fine-grained scintillator tracking core and targets of carbon, iron, lead, water and liquid helium which sit upstream of the tracking core. We present results from our analysis using the nuclear targets: ratios of the νμ charged-current inclusive cross section in carbon, iron, lead and plastic scintillator (CH). Supported in part by the US National Science Foundation and the Dept. of Energy.

  9. Jumping into the healthcare retail market: our experience.

    PubMed

    Pollert, Pat; Dobberstein, Darla; Wiisanen, Ronald

    2008-01-01

    Who among us has not heard of the retail-based clinic concept? Retail-based clinics have been springing up across the country in Target, Walmart, grocery stores, drugstores, and shopping malls. Due to multiple marketplace issues, others who have not traditionally been providers of healthcare saw an opportunity to meet the consumer's demand. Do retail and healthcare mix, and can this model be successful? MeritCare Health System in Fargo, ND made the decision to embrace and experiment with this new emerging consumerism model. This article reviews our experience in developing the first retail-based clinic in our service area and the state of North Dakota.

  10. Rainfall Results of the Florida Area Cumulus Experiment, 1970-76.

    NASA Astrophysics Data System (ADS)

    Woodley, William L.; Jordan, Jill; Barnston, Anthony; Simpson, Joanne; Biondini, Ron; Flueck, John

    1982-02-01

    The Florida Area Cumulus Experiment of 1970-76 (FACE-1) is a single-area, randomized, exploratory experiment to determine whether seeding cumuli for dynamic effects (dynamic seeding) can be used to augment convective rainfall over a substantial target area (1.3 × 10^4 km^2) in south Florida. Rainfall is estimated using S-band radar observations after adjustment by raingages. The two primary response variables are rain volumes in the total target (TT) and in the floating target (FT), the most intensely treated portion of the target. The experimental unit is the day and the main observational period is the 6 h after initiation of treatment (silver iodide flares on seed days and either no flares or placebos on control days). Analyses without predictors suggest apparent increases in both the location (means and medians) and the dispersion (standard deviation and interquartile range) characteristics of rainfall due to seeding in the FT and TT variables with substantial statistical support for the FT results and lesser statistical support for the TT results. Analyses of covariance using meteorologically meaningful predictor variables suggest a somewhat larger effect of seeding with stronger statistical support. These results are interpreted in terms of the FACE conceptual model.

  11. Measuring the Ablative Richtmyer-Meshkov Growth of Isolated Defects on Plastic Capsules

    NASA Astrophysics Data System (ADS)

    Loomis, Eric; Braun, Dave; Batha, Steve; Sedillo, Tom; Evans, Scott; Sorce, Chuck; Landen, Otto

    2010-11-01

    To achieve thermonuclear ignition at megajoule-class laser systems such as the NIF using inertially confined plasmas, targets must be designed with high in-flight aspect ratios (IFAR), resulting in low shell stability. Recent simulations and experiments have shown that isolated features on the outer surface of an ignition capsule can profoundly impact capsule performance by leading to material jetting or mix into the hotspot. Unfortunately, our ability to accurately predict these effects is uncertain due to disagreement between equation of state (EOS) models. In light of this, we have begun a campaign to measure the growth of isolated defects due to the ablative Richtmyer-Meshkov instability in CH capsules to validate these models. Face-on transmission radiography has been used to measure the evolution of Gaussian bump arrays in plastic targets. Targets were indirectly driven using Au halfraums to radiation temperatures near 65-75 eV at the Omega laser (Laboratory for Laser Energetics, University of Rochester, NY), simultaneous with x-ray backlighting from a saran (Cl) foil. Shock speed measurements were also made to determine drive conditions in the target. The results from these experiments will aid in the design of ignition drive pulses that minimize bump amplitude at the time of shell acceleration.

  12. Evaluation of an imputed pitch velocity model of the auditory kappa effect.

    PubMed

    Henry, Molly J; McAuley, J Devin

    2009-04-01

    Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect. (c) 2009 APA, all rights reserved.

  13. New designs of LMJ targets for early ignition experiments

    NASA Astrophysics Data System (ADS)

    C-Clérouin, C.; Bonnefille, M.; Dattolo, E.; Fremerye, P.; Galmiche, D.; Gauthier, P.; Giorla, J.; Laffite, S.; Liberatore, S.; Loiseau, P.; Malinie, G.; Masse, L.; Poggi, F.; Seytor, P.

    2008-05-01

    The LMJ experimental plans include an attempt at ignition and burn of an ICF capsule with 40 laser quads, delivering up to 1.4 MJ and 380 TW. New targets requiring reduced laser energy, with only a small decrease in robustness, are therefore being designed for this purpose. A first strategy is to use scaled-down cylindrical hohlraums and capsules, taking advantage of our better understanding of the problem, based on theoretical modelling, simulations and experiments. Another strategy is to work specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, which, together with parametric instabilities, is a crucial drawback of indirect drive. An alternative design is proposed, made up of the nominal 60-quad capsule, named A1040, in a rugby-shaped hohlraum. Robustness evaluations of these different targets are in progress.

  14. How implicitly activated and explicitly acquired knowledge contribute to the effectiveness of retrieval cues.

    PubMed

    Nelson, Douglas L; Fisher, Serena L; Akirmak, Umit

    2007-12-01

    The extralist cued recall task simulates everyday reminding because a memory is encoded on the fly and retrieved later by an unexpected cue. Target words are studied individually, and recall is cued by associatively related words having preexisting forward links to them. In Experiments 1 and 2, forward cue-to-target and backward target-to-cue strengths were varied over an extended range in order to determine how these two sources of strength are related and which source has a greater effect. Forward and backward strengths had additive effects on recall, with forward strength having a consistently larger effect. The PIER2 model accurately predicted these findings, but a plausible generation-recognition version of the model, called PIER.GR, could not. In Experiment 3, forward and backward strengths, level of processing, and study time were varied in order to determine how preexisting lexical knowledge is related to knowledge acquired during the study episode. The main finding indicates that preexisting knowledge and episodic knowledge have additive effects on extralist cued recall. PIER2 can explain these findings because it assumes that these sources of strength contribute independently to recall, whereas the eSAM model cannot explain the findings because it assumes that the sources of strength are multiplicatively related.

  15. An infrastructure to mine molecular descriptors for ligand selection on virtual screening.

    PubMed

    Seus, Vinicius Rosa; Perazzo, Giovanni Xavier; Winck, Ana T; Werhli, Adriano V; Machado, Karina S

    2014-01-01

    The evaluation of receptor-ligand interactions is an important step in rational drug design. The databases that provide the structures of the ligands are growing on a daily basis. This makes it impossible to test all the ligands against a target receptor. Hence, a ligand selection step before testing is needed. One possible approach is to evaluate a set of molecular descriptors. With the aim of describing the characteristics of promising compounds for a specific receptor, we introduce a data warehouse-based infrastructure to mine molecular descriptors for virtual screening (VS). We performed experiments that consider the receptor HIV-1 protease as target and different compounds for this protein. A set of 9 molecular descriptors is taken as the predictive attributes and the free energy of binding (FEB) is taken as the target attribute. By applying the J48 algorithm over the data we obtain decision tree models that achieve up to 84% accuracy. The models indicate which molecular descriptors, and which of their respective values, are relevant for achieving good FEB results. Using their rules we performed ligand selection on the ZINC database. Our results show an important reduction in the number of ligands selected for VS experiments; for instance, the best selection model picked only 0.21% of the total amount of drug-like ligands.
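
    As an illustration of the descriptor-mining step described above, the following sketch trains a small decision tree on a table of molecular descriptors with free energy of binding as the target attribute. The file name, column names, and the FEB cutoff are hypothetical, and scikit-learn's CART implementation stands in for the Weka J48 (C4.5) learner used in the paper.

```python
# Minimal sketch of the descriptor-mining step described above.
# Column names and the FEB threshold are illustrative assumptions;
# scikit-learn's CART tree stands in for the Weka J48 (C4.5) learner.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical table: 9 molecular descriptors plus docking FEB (kcal/mol).
data = pd.read_csv("hiv1_protease_ligands.csv")          # assumed file
descriptors = data.drop(columns=["FEB"])
labels = (data["FEB"] <= -8.0).astype(int)               # assumed "good FEB" cutoff

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, labels, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print("accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(descriptors.columns)))
```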

  16. Application of Biologically-Based Lumping To Investigate the ...

    EPA Pesticide Factsheets

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic

  17. Exploding Pusher Targets for Electron-Ion Coupling Measurements

    NASA Astrophysics Data System (ADS)

    Whitley, Heather D.; Pino, Jesse; Schneider, Marilyn; Shepherd, Ronnie; Benedict, Lorin; Bauer, Joseph; Graziani, Frank; Garbett, Warren

    2015-11-01

    Over the past several years, we have conducted theoretical investigations of electron-ion coupling and electronic transport in plasmas. In the regime of weakly coupled plasmas, we have identified models that we believe describe the physics well, but experimental data is still needed to validate the models. We are currently designing spectroscopic experiments to study electron-ion equilibration and/or electron heat transport using exploding pusher (XP) targets for experiments at the National Ignition Facility. Two platforms are being investigated: an indirect drive XP (IDXP) with a plastic ablator and a polar-direct drive XP (PDXP) with a glass ablator. The fill gas for both designs is D2. We propose to use a higher-Z dopant, such as Ar, as a spectroscopic tracer for time-resolved electron and ion temperature measurements. We perform 1D simulations using the ARES hydrodynamic code, in order to produce the time-resolved plasma conditions, which are then post-processed with CRETIN to assess the feasibility of a spectroscopic measurement. We examine target performance with respect to variations in gas fill pressure, ablator thickness, atom fraction of the Ar dopant, and drive energy, and assess the sensitivity of the predicted spectra to variations in the models for electron-ion equilibration and thermal conductivity. Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-675219.

  18. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
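
    A minimal sketch of one likelihood-based consistency check of the kind used in CSEP-style evaluations is given below; the gridded forecast rates and observed counts are illustrative placeholders, and the exact statistics used in the western Pacific experiment may differ.

```python
# Sketch of a Poisson "N-test"-style consistency check, one of the
# likelihood-based metrics used in CSEP-type evaluations. The forecast
# grid and observed counts below are illustrative placeholders.
import numpy as np
from scipy.stats import poisson

forecast_rates = np.array([0.4, 1.2, 0.1, 2.3, 0.7])   # expected M>=5.8 events per cell
observed_counts = np.array([1, 0, 0, 3, 1])            # events actually observed

n_forecast = forecast_rates.sum()
n_observed = observed_counts.sum()

# Probability of observing at least / at most this many events if the
# total forecast rate is correct (two one-sided quantile scores).
delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)
delta2 = poisson.cdf(n_observed, n_forecast)
print(f"forecast {n_forecast:.2f}, observed {n_observed}, "
      f"delta1={delta1:.3f}, delta2={delta2:.3f}")

# Joint log-likelihood of the gridded forecast (used in L-test-style scores).
log_likelihood = poisson.logpmf(observed_counts, forecast_rates).sum()
print("joint log-likelihood:", log_likelihood)
```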

  19. Active Electro-Location of Objects in the Underwater Environment Based on the Mixed Polarization Multiple Signal Classification Algorithm

    PubMed Central

    Guo, Lili; Qi, Junwei; Xue, Wei

    2018-01-01

    This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment by using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and there is no need to obtain the field component in each direction as in the conventional fields-based localization method, so the approach can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target locating. PMID:29439495
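
    The following is a generic MUSIC pseudospectrum sketch, not the paper's full MP-MUSIC/BEM formulation: it forms the sample covariance of the electrode potentials, extracts the noise subspace, and scans candidate source positions. The snapshot and steering matrices here are random placeholders.

```python
# Generic MUSIC pseudospectrum sketch (not the paper's MP-MUSIC/BEM
# formulation): sample covariance of electrode potentials, noise-subspace
# extraction, and a scan over candidate source positions.
import numpy as np

def music_spectrum(snapshots, steering, n_sources=1):
    """snapshots: (n_electrodes, n_samples); steering: (n_electrodes, n_candidates)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    noise_subspace = eigvecs[:, : snapshots.shape[0] - n_sources]
    # Pseudospectrum peaks where a candidate steering vector is orthogonal
    # to the noise subspace.
    projections = noise_subspace.conj().T @ steering
    return 1.0 / np.sum(np.abs(projections) ** 2, axis=0)

# Toy usage with random placeholders for measured potentials and steering vectors.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((8, 200))         # 8 UCA electrodes, 200 samples
steering = rng.standard_normal((8, 50))           # 50 candidate target positions
spectrum = music_spectrum(snapshots, steering)
print("best candidate index:", int(np.argmax(spectrum)))
```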

  20. Polar-Drive Experiments at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hohenberger, M.

    2014-10-01

    To support direct-drive inertial confinement fusion (ICF) experiments at the National Ignition Facility (NIF) in its indirect-drive beam configuration, the polar-drive (PD) concept has been proposed. It requires direct-drive-specific beam smoothing, phase plates, and repointing the NIF beams toward the equator to ensure symmetric target irradiation. First experiments testing the performance of ignition-relevant PD implosions at the NIF have been performed. The goal of these early experiments was to develop a stable, warm implosion platform to investigate laser deposition and laser-plasma instabilities at ignition-relevant plasma conditions, and to develop and validate ignition-relevant models of laser deposition and heat conduction. These experiments utilize the NIF in its current configuration, including beam geometry, phase plates, and beam smoothing. Warm, 2.2-mm-diam plastic shells were imploded with total drive energies ranging from ~350 to 750 kJ with peak powers of 60 to 180 TW and peak on-target intensities from 4 × 10¹⁴ to 1.2 × 10¹⁵ W/cm². Results from these initial experiments are presented, including the level of hot-electron preheat, and implosion symmetry and shell trajectory inferred via self-emission imaging and backlighting. Experiments are simulated with the 2-D hydrodynamics code DRACO including a full 3-D ray trace to model oblique beams, and a model for cross-beam energy transfer (CBET). These simulations indicate that CBET affects the shell symmetry and leads to a loss of energy imparted onto the shell, consistent with the experimental data. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  1. Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling

    PubMed Central

    Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren

    2014-01-01

    Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136
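
    The dynamic query expansion step can be pictured with the toy sketch below: starting from seed terms, matching tweets are collected, strongly co-occurring terms are promoted into the query, and the process repeats until the keyword set stabilizes. The tokenization and scoring here are deliberately simplistic placeholders for the algorithm in the paper.

```python
# Sketch of iterative query expansion: promote terms that co-occur with the
# current keyword set, repeat until the set stops growing. Tokenization and
# term scoring are deliberately simplistic placeholders.
from collections import Counter

def expand_query(tweets, seeds, rounds=3, top_k=5):
    keywords = set(seeds)
    for _ in range(rounds):
        counts = Counter()
        for text in tweets:
            tokens = set(text.lower().split())
            if tokens & keywords:                 # tweet matches the current query
                counts.update(tokens - keywords)  # candidate expansion terms
        new_terms = {term for term, _ in counts.most_common(top_k)}
        if new_terms <= keywords:
            break
        keywords |= new_terms
    return keywords

tweets = ["protest march downtown today", "students march against fees",
          "big sale downtown mall", "police respond to protest downtown"]
print(expand_query(tweets, seeds={"protest"}))
```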

  2. Design and Fabrication of Opacity Targets for the National Ignition Facility

    DOE PAGES

    Cardenas, Tana; Schmidt, Derek William; Dodd, Evan S.; ...

    2017-12-22

    Accurate models for opacity of partially ionized atoms are important for modeling and understanding stellar interiors and other high-energy-density phenomena such as inertial confinement fusion. Lawrence Livermore National Laboratory is leading a multilaboratory effort to conduct experiments on the National Ignition Facility (NIF) to try to reproduce recent opacity tests at the Sandia National Laboratory Z-facility. Since 2015, the NIF effort has evolved several hohlraum designs that consist of multiple pieces joined together. The target also has three components attached to the main stalk over a long distance with high tolerances that have resulted in several design iterations. The target has made use of rapid prototyped features to attach a capsule and collimator under the hohlraum while avoiding interference with the beams. Furthermore, this paper discusses the evolution of the hohlraum and overall target design and the challenges involved with fabricating and assembling these targets.

  3. Design and Fabrication of Opacity Targets for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardenas, Tana; Schmidt, Derek William; Dodd, Evan S.

    Accurate models for opacity of partially ionized atoms are important for modeling and understanding stellar interiors and other high-energy-density phenomena such as inertial confinement fusion. Lawrence Livermore National Laboratory is leading a multilaboratory effort to conduct experiments on the National Ignition Facility (NIF) to try to reproduce recent opacity tests at the Sandia National Laboratory Z-facility. Since 2015, the NIF effort has evolved several hohlraum designs that consist of multiple pieces joined together. The target also has three components attached to the main stalk over a long distance with high tolerances that have resulted in several design iterations. The target has made use of rapid prototyped features to attach a capsule and collimator under the hohlraum while avoiding interference with the beams. Furthermore, this paper discusses the evolution of the hohlraum and overall target design and the challenges involved with fabricating and assembling these targets.

  4. Attentional episodes in visual perception

    PubMed Central

    Wyble, Brad; Potter, Mary C; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This paper presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic Simultaneous Type/Serial Token model or eSTST; Wyble et al 2009a). Breaks between these episodes are punctuated by periods of suppressed attention, better known as the attentional blink (Raymond, Shapiro & Arnell 1992). We test predictions from this model and demonstrate that subjects are able to report more letters from a sequence of four targets presented in a dense temporal cluster than from a sequence of four targets that are interleaved with non-targets. However, this superior report accuracy comes at a cost in impaired temporal order perception. Further experiments explore the dynamics of multiple episodes, and the boundary conditions that trigger episodic breaks. Finally, we contrast the importance of attentional control, limited resources and memory capacity constructs in the model. PMID:21604913

  5. Financial and risk considerations for successful disease management programs.

    PubMed

    Baldwin, A L

    1999-11-01

    Results for disease management [DM] programs have not been as positive as hoped because of clinical issues, lack of access to capital, and administrative issues. The financial experience of DM programs can be quite volatile. Financial projections that are protocol-based, rather than experience-based, may understate the revenue required and the range of possible costs for a DM program by understating the impact of complicating conditions and comorbidities. Actuarial tools (risk analysis and risk projection models) support better understanding of DM contracts. In particular, these models can provide the ability to quantify the impact of the factors that drive costs of a contract and the volatility of those costs. This analysis can assist DM companies in setting appropriate revenue and capital targets. Similar analysis by health plans can identify diseases that are good candidates for DM programs and can provide the basis for performance targets.

  6. Objective assessment of operator performance during ultrasound-guided procedures.

    PubMed

    Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R

    2011-09-01

    Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.

  7. A glimpsing account of the role of temporal fine structure information in speech recognition.

    PubMed

    Apoux, Frédéric; Healy, Eric W

    2013-01-01

    Many behavioral studies have reported a significant decrease in intelligibility when the temporal fine structure (TFS) of a sound mixture is replaced with noise or tones (i.e., vocoder processing). This finding has led to the conclusion that TFS information is critical for speech recognition in noise. How the normal auditory system takes advantage of the original TFS, however, remains unclear. Three experiments on the role of TFS in noise are described. All three experiments measured speech recognition in various backgrounds while manipulating the envelope, TFS, or both. One experiment tested the hypothesis that vocoder processing may artificially increase the apparent importance of TFS cues. Another experiment evaluated the relative contribution of the target and masker TFS by disturbing only the TFS of the target or that of the masker. Finally, a last experiment evaluated the relative contribution of envelope and TFS information. In contrast to previous studies, however, the original envelope and TFS were both preserved - to some extent - in all conditions. Overall, the experiments indicate a limited influence of TFS and suggest that little speech information is extracted from the TFS. Concomitantly, these experiments confirm that most speech information is carried by the temporal envelope in real-world conditions. When interpreted within the framework of the glimpsing model, the results of these experiments suggest that TFS is primarily used as a grouping cue to select the time-frequency regions corresponding to the target speech signal.

  8. Searching for a dark photon with DarkLight

    DOE PAGES

    Corliss, R.

    2016-07-30

    Here, we describe the current status of the DarkLight experiment at Jefferson Laboratory. DarkLight is motivated by the possibility that a dark photon in the mass range 10 to 100 MeV/c² could couple the dark sector to the Standard Model. DarkLight will precisely measure electron-proton scattering using the 100 MeV electron beam of intensity 5 mA at the Jefferson Laboratory energy-recovering linac incident on a windowless gas target of molecular hydrogen. We will detect the complete final state including scattered electron, recoil proton, and e⁺e⁻ pair. A phase-I experiment has been funded and is expected to take data in the next eighteen months. The complete phase-II experiment is under final design and could run within two years after phase-I is completed. The DarkLight experiment drives development of new technology for beam, target, and detector and provides a new means to carry out electron scattering experiments at low momentum transfers.

  9. Mathematical modeling for novel cancer drug discovery and development.

    PubMed

    Zhang, Ping; Brusic, Vladimir

    2014-10-01

    Mathematical modeling enables the in silico classification of cancers, the prediction of disease outcomes, optimization of therapy, identification of promising drug targets and prediction of resistance to anticancer drugs. In silico pre-screened drug targets can be validated by a small number of carefully selected experiments. This review discusses the basics of mathematical modeling in cancer drug discovery and development. The topics include in silico discovery of novel molecular drug targets, optimization of immunotherapies, personalized medicine and guiding preclinical and clinical trials. Breast cancer has been used to demonstrate the applications of mathematical modeling in cancer diagnostics, the identification of high-risk populations, cancer screening strategies, prediction of tumor growth and guiding cancer treatment. Mathematical models are key components of the toolkit used in the fight against cancer. The combinatorial complexity of new drug discovery is enormous, making systematic drug discovery by experimentation alone difficult, if not impossible. The biggest challenges include seamless integration of growing data, information and knowledge, and making them available for a multiplicity of analyses. Mathematical models are essential for bringing cancer drug discovery into the era of Omics, Big Data and personalized medicine.

  10. Model-to-image based 2D-3D registration of angiographic data

    NASA Astrophysics Data System (ADS)

    Mollus, Sabine; Lübke, Jördis; Walczuch, Andreas J.; Schumann, Heidrun; Weese, Jürgen

    2008-03-01

    We propose a novel registration method, which combines well-known vessel detection techniques with aspects of model adaptation. The proposed method is tailored to the requirements of 2D-3D registration of interventional angiographic X-ray data such as acquired during abdominal procedures. As a prerequisite, a vessel centerline is extracted from a rotational angiography (3DRA) data set to build an individual model of the vascular tree. Following the two steps of local vessel detection and model transformation, the centerline model is matched to one dynamic subtraction angiography (DSA) target image. Thereby, the in-plane position and the 3D orientation of the centerline are related to the vessel candidates found in the target image, minimizing the residual error in a least-squares manner. In contrast to feature-based methods, no segmentation of the vessel tree in the 2D target image is required. First experiments with synthetic angiographies and clinical data sets indicate that matching with the proposed model-to-image based registration approach is accurate and robust and is characterized by a large capture range.
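
    A minimal sketch of the model-to-image matching idea follows: an in-plane rigid transform of projected centerline points is refined so that each point lands near its nearest 2D vessel candidate, in a least-squares sense. The projection, transform parameterization, and candidate set are illustrative assumptions rather than the paper's pipeline.

```python
# Sketch of model-to-image matching: refine a rigid in-plane transform so
# projected centerline points fall onto nearby 2D vessel candidates.
# The synthetic data below is an illustrative placeholder.
import numpy as np
from scipy.optimize import least_squares

def transform(params, points):
    tx, ty, theta = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

def residuals(params, model_points, candidates):
    moved = transform(params, model_points)
    # Residual: distance from each transformed model point to its nearest candidate.
    d = np.linalg.norm(moved[:, None, :] - candidates[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(1)
model_points = rng.uniform(0, 100, size=(40, 2))                 # projected centerline
candidates = transform([5.0, -3.0, 0.05], model_points) + \
             rng.normal(scale=0.5, size=(40, 2))                 # detected vessel points

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0],
                    args=(model_points, candidates))
print("estimated (tx, ty, theta):", fit.x)
```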

  11. archAR: an archaeological augmented reality experience

    NASA Astrophysics Data System (ADS)

    Wiley, Bridgette; Schulze, Jürgen P.

    2015-03-01

    We present an application for Android phones or tablets called "archAR" that uses augmented reality as an alternative, portable way of viewing archaeological information from UCSD's Levantine Archaeology Laboratory. archAR provides a unique experience of flying through an archaeological dig site in the Levantine area and exploring the artifacts uncovered there. Using a Google Nexus tablet and Qualcomm's Vuforia API, we use an image target as a map and overlay a three-dimensional model of the dig site onto it, augmenting reality such that we are able to interact with the plotted artifacts. The user can physically move the Android device around the image target and see the dig site model from any perspective. The user can also move the device closer to the model in order to "zoom" into the view of a particular section of the model and its associated artifacts. This is especially useful, as the dig site model and the collection of artifacts are very detailed. The artifacts are plotted as points, colored by type. The user can touch the virtual points to trigger a popup information window that contains details of the artifact, such as photographs, material descriptions, and more.

  12. Shooting Star Experiment

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Shooting Star Experiment (SSE) is designed to develop and demonstrate the technology required to focus the sun's energy and use that energy for inexpensive space propulsion research. Pictured is an engineering model (Pathfinder III) of the Shooting Star Experiment (SSE). This model was used to test and characterize the motion and deformation of the structure caused by thermal effects. In this photograph, alignment targets are being placed on the engineering model so that a theodolite (alignment telescope) could be used to accurately measure the deformation and deflections of the engineering model under extreme conditions, such as the coldness of deep space, the heat of the sun, and vacuum. This thermal vacuum test was performed at the X-Ray Calibration Facility because of the size of the test article and the capabilities of the facility to simulate in-orbit conditions.

  13. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

    Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In the urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In the rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 of 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by the peripheral angle and entrance speed of POs. Fast-moving objects were detected faster than slower objects, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
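
    A per-subject model of the kind described can be sketched as a mixed-effects fit, for example regressing reaction time on peripheral angle and entrance speed with a random intercept per subject; the file and column names below are hypothetical.

```python
# Sketch of a mixed-effects fit in the spirit of the per-subject model
# described above. The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peripheral_detection_trials.csv")   # assumed columns below
# assumed columns: subject, angle_deg, speed_dps, rt_s
model = smf.mixedlm("rt_s ~ angle_deg + speed_dps + I(angle_deg**2)",
                    data=df, groups=df["subject"])    # random intercept per subject
result = model.fit()
print(result.summary())
```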

  14. Injector design for liner-on-target gas-puff experiments

    NASA Astrophysics Data System (ADS)

    Valenzuela, J. C.; Krasheninnikov, I.; Conti, F.; Wessel, F.; Fadeev, V.; Narkis, J.; Ross, M. P.; Rahman, H. U.; Ruskov, E.; Beg, F. N.

    2017-11-01

    We present the design of a gas-puff injector for liner-on-target experiments. The injector is composed of an annular high atomic number (e.g., Ar and Kr) gas and an on-axis plasma gun that delivers an ionized deuterium target. The annular supersonic nozzle injector has been studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated (M > 5), ˜1 cm radius gas profile that satisfies the theoretical requirement for best performance on ˜1-MA current generators. The CFD simulations allowed us to study output density profiles as a function of the nozzle shape, gas pressure, and gas composition. We have performed line-integrated density measurements using a continuous wave (CW) He-Ne laser to characterize the liner gas density. The measurements agree well with the CFD values. We have used a simple snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector.

  15. Injector design for liner-on-target gas-puff experiments.

    PubMed

    Valenzuela, J C; Krasheninnikov, I; Conti, F; Wessel, F; Fadeev, V; Narkis, J; Ross, M P; Rahman, H U; Ruskov, E; Beg, F N

    2017-11-01

    We present the design of a gas-puff injector for liner-on-target experiments. The injector is composed of an annular high atomic number (e.g., Ar and Kr) gas and an on-axis plasma gun that delivers an ionized deuterium target. The annular supersonic nozzle injector has been studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated (M > 5), ∼1 cm radius gas profile that satisfies the theoretical requirement for best performance on ∼1-MA current generators. The CFD simulations allowed us to study output density profiles as a function of the nozzle shape, gas pressure, and gas composition. We have performed line-integrated density measurements using a continuous wave (CW) He-Ne laser to characterize the liner gas density. The measurements agree well with the CFD values. We have used a simple snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector.
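
    The snowplow model mentioned in the two records above can be sketched as a momentum balance in which the current sheath sweeps up fill-gas mass while being pushed by the magnetic pressure of the gun current; the geometry, fill density, and current waveform below are illustrative assumptions, not the authors' values.

```python
# Minimal snowplow-model sketch for sheath acceleration in a coaxial gun.
# All parameter values are illustrative assumptions.
import numpy as np

mu0 = 4e-7 * np.pi
r_in, r_out = 0.005, 0.02           # electrode radii (m), assumed
rho = 1e-3                          # fill-gas mass density (kg/m^3), assumed
area = np.pi * (r_out**2 - r_in**2)
I0, quarter_period = 1e5, 2e-6      # peak current (A) and rise time (s), assumed

def current(t):
    return I0 * np.sin(0.5 * np.pi * t / quarter_period)

dt, t, z, v = 1e-10, 0.0, 0.0, 0.0
mass = 1e-9                         # small initial sheath mass (kg)
while t < 2e-6:
    force = mu0 * current(t) ** 2 * np.log(r_out / r_in) / (4 * np.pi)
    swept = rho * area * v * dt     # newly accreted mass this step
    # Momentum update with accretion: d(mv)/dt = F, new mass moves with the sheath.
    v = (mass * v + force * dt) / (mass + swept)
    mass += swept
    z += v * dt
    t += dt
print(f"sheath position {z*100:.2f} cm, speed {v/1e3:.1f} km/s after {t*1e6:.1f} us")
```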

  16. X-ray burst studies with the JENSA gas jet target

    NASA Astrophysics Data System (ADS)

    Schmidt, Konrad; Chipps, Kelly A.; Ahn, Sunghoon; Allen, Jacob M.; Ayoub, Sara; Bardayan, Daniel W.; Blackmon, Jeffrey C.; Blankstein, Drew; Browne, Justin; Cha, Soomi; Chae, Kyung YUK; Cizewski, Jolie; Deibel, Catherine M.; Deleeuw, Eric; Gomez, Orlando; Greife, Uwe; Hager, Ulrike; Hall, Matthew R.; Jones, Katherine L.; Kontos, Antonios; Kozub, Raymond L.; Lee, Eunji; Lepailleur, Alex; Linhardt, Laura E.; Matos, Milan; Meisel, Zach; Montes, Fernando; O'Malley, Patrick D.; Ong, Wei Jia; Pain, Steven D.; Sachs, Alison; Schatz, Hendrik; Schmitt, Kyle T.; Smith, Karl; Smith, Michael S.; Soares de Bem, Natã F.; Thompson, Paul J.; Toomey, Rebecca; Walter, David

    2018-01-01

    When a neutron star accretes hydrogen and helium from the outer layers of its companion star, thermonuclear burning enables the αp-process as a breakout mechanism from the hot CNO cycle. Model calculations predict that (α,p) reaction rates significantly affect both the light curves and the elemental abundances in the burst ashes. The Jet Experiments in Nuclear Structure and Astrophysics (JENSA) gas jet target enables the direct measurement of previously inaccessible (α,p) reactions with radioactive beams provided by the rare isotope re-accelerator ReA3 at the National Superconducting Cyclotron Laboratory (NSCL), USA. JENSA is going to be the main target for the Recoil Separator for Capture Reactions (SECAR) at the Facility for Rare Isotope Beams (FRIB). Commissioning of JENSA and first experiments at Oak Ridge National Laboratory (ORNL) showed a highly localized, pure gas target with a density of ˜10¹⁹ atoms per square centimeter. Preliminary results are presented from the first direct cross-section measurement of the 34Ar(α,p)37K reaction at NSCL.

  17. Measurements of the charged-current inclusive cross-section of antineutrino scattering off nucleons using carbon, iron, lead and scintillator at MINERνA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakotondravohitra, Laza

    2015-08-18

    Neutrino physics has been one of the most active fields in high energy physics over the last century. The precise measurement of neutrino-nucleus interactions required by neutrino oscillation experiments is an exciting challenge, and these cross-section measurements are essential for such experiments. Over the years, many measurements from a variety of experiments have been presented. MINERνA is one of the world leaders in measuring cross-sections of neutrino- and antineutrino-nucleus interactions. MINERνA is a neutrino-nucleus scattering experiment installed in the few-GeV NuMI beam line at Fermilab. In order to study nuclear dependence, MINERνA is equipped with different types of solid nuclear targets as well as liquid targets such as helium and water. This thesis presents measurements of the cross-section of antineutrino scattering off nucleons using a variety of solid nuclear targets: carbon, iron, lead, and polystyrene scintillator (CH). The antineutrino data set used for this analysis was taken between March and July 2010, with a total of 1.60 × 10²⁰ protons on target. Charged-current inclusive interactions were selected by requiring a positive muon, and kinematic limitations of the muon spectrometer acceptance are applied. The analysis requires a neutrino energy between 2 GeV and 20 GeV and a muon angle θ_μ < 17°. The absolute cross-section as a function of neutrino energy and the differential cross-section dσ/dx_Bj are measured, and the corresponding systematics are shown for each nuclear target. The data are compared with predictions of the models implemented in the neutrino event generator GENIE 2.6.2 used by the experiment.

  18. A Canopy Density Model for Planar Orchard Target Detection Based on Ultrasonic Sensors

    PubMed Central

    Li, Hanzhe; Zhai, Changyuan; Weckler, Paul; Wang, Ning; Yang, Shuo; Zhang, Bo

    2016-01-01

    Orchard target-oriented variable rate spraying is an effective method to reduce pesticide drift and excessive residues. To accomplish this task, the orchard targets’ characteristic information is needed to control the liquid flow rate and airflow rate. One of the most important characteristics is the canopy density. In order to establish the canopy density model for a planar orchard target, which is indispensable for canopy density calculation, a target density detection testing system was developed based on an ultrasonic sensor. A time-domain energy analysis method was employed to analyze the ultrasonic signal. Orthogonal regression central composite experiments were designed and conducted using man-made canopies of known density with three or four layers of leaves. Two model equations were obtained, of which the model for the canopies with four layers was found to be the most reliable. A verification test was conducted with different numbers of layers at the same density values and detecting distances. The test results showed that the relative errors between the model density values and the actual values for five, four, three and two layers of leaves were acceptable, with maximum relative errors of 17.68%, 25.64%, 21.33% and 29.92%, respectively. This also suggests that the four-layer model equation generalizes reasonably well to other numbers of layers, and increasingly so for layer counts adjacent to four. PMID:28029132

  19. Aircraft Segmentation in SAR Images Based on Improved Active Shape Model

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Xiong, B.; Kuang, G.

    2018-04-01

    In SAR image interpretation, aircraft are important targets that attract much attention. However, it is far from easy to segment an aircraft from the background completely and precisely in SAR images. Because of their complex structure, different kinds of electromagnetic scattering take place on aircraft surfaces. As a result, aircraft targets usually appear inhomogeneous and disconnected. It is a good idea to extract an aircraft target with the active shape model (ASM), since the incorporated geometric information controls variations of the shape during the contour evolution. However, the linear dimensionality reduction used in the classic ASM makes the model rigid, which causes trouble when segmenting different types of aircraft. Aiming at this problem, an improved ASM based on ISOMAP is proposed in this paper. The ISOMAP algorithm is used to extract the shape information of the training set and make the model flexible enough to deal with different aircraft. Experiments based on real SAR data show that the proposed method achieves a clear improvement in accuracy.
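
    The nonlinear shape-embedding idea can be sketched as follows: training shapes, each a fixed set of landmark points, are flattened and embedded with ISOMAP in place of the linear PCA of a classic active shape model. The training data here is a random placeholder.

```python
# Sketch of embedding training shapes with ISOMAP instead of linear PCA,
# as in a classic active shape model. The shapes below are random placeholders.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 60, 32
# Each training shape: n_landmarks (x, y) points, flattened to one row.
shapes = rng.standard_normal((n_shapes, n_landmarks * 2))

embedding = Isomap(n_neighbors=8, n_components=3)
low_dim = embedding.fit_transform(shapes)     # nonlinear shape-space coordinates
print("embedded shape-space coordinates:", low_dim.shape)
```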

  20. Comparison of hydrodynamic simulations with two-shockwave drive target experiments

    NASA Astrophysics Data System (ADS)

    Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William

    2015-11-01

    We consider hydrodynamic continuum simulations to mimic ejecta generation in two-shockwave target experiments, in which a metallic surface is loaded by two successive shock waves. The timing of the second shock in the simulations is chosen to match the experimental amplitudes at the arrival of the second shock. The negative Atwood number (A → −1) of the ejecta simulations leads to two successive phase inversions of the interface, corresponding to the passage of the shocks from the heavy to the light medium in each instance. The metallic phase of the ejecta (solid or liquid) depends on the shock-loading pressure in the experiment, and we find that hydrodynamic simulations quantify the liquid-phase ejecta physics with a fair degree of accuracy, since in that regime the RM instability is not suppressed by material strength. In particular, we find that our results for free-surface velocity, maximum ejecta velocity, and maximum ejecta areal density are in excellent agreement with their experimental counterparts, as well as with ejecta models. We also comment on the region of parameter space in which hydrodynamic simulations can be used for comparison with the target experiments. This work was supported in part by the (U.S.) Department of Energy (DOE) under Contract No. DE-AC52-06NA2-5396.

  1. Prostate-specific membrane antigen-targeted liposomes specifically deliver the Zn(2+) chelator TPEN inducing oxidative stress in prostate cancer cells.

    PubMed

    Stuart, Christopher H; Singh, Ravi; Smith, Thomas L; D'Agostino, Ralph; Caudell, David; Balaji, K C; Gmeiner, William H

    2016-05-01

    To evaluate the potential use of zinc chelation for prostate cancer therapy, we used a new liposomal formulation of the zinc chelator N,N,N',N'-tetrakis(2-pyridylmethyl)-ethylenediamine (TPEN). TPEN was encapsulated in nontargeted liposomes or in liposomes displaying an aptamer that targets prostate cancer cells overexpressing prostate-specific membrane antigen. The prostate cancer selectivity and therapeutic efficacy of liposomal (targeted and nontargeted) and free TPEN were evaluated in vitro and in tumor-bearing mice. TPEN chelates zinc and causes a reactive oxygen species imbalance leading to cell death. Delivery of TPEN using aptamer-targeted liposomes results in specific delivery to targeted cells. In vivo experiments show that TPEN-loaded, aptamer-targeted liposomes reduce tumor growth in a human prostate cancer xenograft model.

  2. Increasing the persistence of a heterogeneous behavior chain: Studies of extinction in a rat model of search behavior of working dogs

    PubMed Central

    Thrailkill, Eric A.; Kacelnik, Alex; Porritt, Fay; Bouton, Mark E.

    2016-01-01

    Dogs trained to search for contraband perform a chain of behavior in which they first search for a target and then make a separate response that indicates to the trainer that they have found one. The dogs often conduct multiple searches without encountering a target and receiving the reinforcer (i.e., no contraband is present). Understanding extinction (i.e., the decline in work rate when reinforcers are no longer encountered) may assist in training dogs to work in conditions where targets are rare. We therefore trained rats on a search-target behavior chain modeled on the search behavior of working dogs. A discriminative stimulus signaled that a search response (e.g., chain pull) led to a second stimulus that set the occasion for a target response (e.g., lever press) that was reinforced by a food pellet. In Experiment 1, training with longer search durations and with intermittent (partial) reinforcement of searching (i.e., some trials had no target present) both led to more persistent search responding in extinction. The loss of search behavior in extinction was primarily dependent on the number of non-reinforced searches rather than the time spent searching without reinforcement. In Experiments 2 and 3, delivery of non-contingent reinforcers during extinction increased search persistence provided they had also been presented during training. Thus, results with rats suggest that the persistence of working dog performance (or chained behavior generally) may be improved by training with partial reinforcement of searching and non-contingent reinforcement during both training and work (extinction). PMID:27306694

  3. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    PubMed Central

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.; Cristini, Vittorio; Brinker, Lina M.; Staquicini, Fernanda I.; Cardó-Vila, Marina; D’Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R.; Dogra, Prashant; Melancon, Marites P.; Stafford, R. Jason; Miyazono, Kohei; Gelovani, Juri G.; Kataoka, Kazunori; Brinker, C. Jeffrey; Sidman, Richard L.; Arap, Wadih; Pasqualini, Renata

    2016-01-01

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications. PMID:26839407

  4. Perturbation biology nominates upstream–downstream drug combinations in RAF inhibitor resistant melanoma cells

    PubMed Central

    Korkut, Anil; Wang, Weiqing; Demir, Emek; Aksoy, Bülent Arman; Jing, Xiaohong; Molinelli, Evan J; Babur, Özgün; Bemis, Debra L; Onur Sumer, Selcuk; Solit, David B; Pratilas, Christine A; Sander, Chris

    2015-01-01

    Resistance to targeted cancer therapies is an important clinical problem. The discovery of anti-resistance drug combinations is challenging as resistance can arise by diverse escape mechanisms. To address this challenge, we improved and applied the experimental-computational perturbation biology method. Using statistical inference, we build network models from high-throughput measurements of molecular and phenotypic responses to combinatorial targeted perturbations. The models are computationally executed to predict the effects of thousands of untested perturbations. In RAF-inhibitor resistant melanoma cells, we measured 143 proteomic/phenotypic entities under 89 perturbation conditions and predicted c-Myc as an effective therapeutic co-target with BRAF or MEK. Experiments using the BET bromodomain inhibitor JQ1 affecting the level of c-Myc protein and protein kinase inhibitors targeting the ERK pathway confirmed the prediction. In conclusion, we propose an anti-cancer strategy of co-targeting a specific upstream alteration and a general downstream point of vulnerability to prevent or overcome resistance to targeted drugs. DOI: http://dx.doi.org/10.7554/eLife.04640.001 PMID:26284497

  5. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release.

    PubMed

    Hosoya, Hitomi; Dobroff, Andrey S; Driessen, Wouter H P; Cristini, Vittorio; Brinker, Lina M; Staquicini, Fernanda I; Cardó-Vila, Marina; D'Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R; Dogra, Prashant; Melancon, Marites P; Stafford, R Jason; Miyazono, Kohei; Gelovani, Juri G; Kataoka, Kazunori; Brinker, C Jeffrey; Sidman, Richard L; Arap, Wadih; Pasqualini, Renata

    2016-02-16

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  6. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    DOE PAGES

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.; ...

    2016-02-02

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. We conclude that these results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  7. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. We conclude that these results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  8. The effects of aging on the interaction between reinforcement learning and attention.

    PubMed

    Radulescu, Angela; Daniel, Reka; Niv, Yael

    2016-11-01

    Reinforcement learning (RL) in complex environments relies on selective attention to uncover those aspects of the environment that are most predictive of reward. Whereas previous work has focused on age-related changes in RL, it is not known whether older adults learn differently from younger adults when selective attention is required. In 2 experiments, we examined how aging affects the interaction between RL and selective attention. Younger and older adults performed a learning task in which only 1 stimulus dimension was relevant to predicting reward, and within it, 1 "target" feature was the most rewarding. Participants had to discover this target feature through trial and error. In Experiment 1, stimuli varied on 1 or 3 dimensions and participants received hints that revealed the target feature, the relevant dimension, or gave no information. Group-related differences in accuracy and RTs differed systematically as a function of the number of dimensions and the type of hint available. In Experiment 2 we used trial-by-trial computational modeling of the learning process to test for age-related differences in learning strategies. Behavior of both young and older adults was explained well by a reinforcement-learning model that uses selective attention to constrain learning. However, the model suggested that older adults restricted their learning to fewer features, employing more focused attention than younger adults. Furthermore, this difference in strategy predicted age-related deficits in accuracy. We discuss these results suggesting that a narrower filter of attention may reflect an adaptation to the reduced capabilities of the reinforcement learning system. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
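
    A toy sketch in the spirit of the attention-constrained reinforcement-learning model described above is given below: feature values are updated by prediction error, and attention weights bias both choice and learning toward the dimension currently believed to be relevant. The update rules and parameter values are illustrative assumptions, not the authors' fitted model.

```python
# Toy attention-weighted feature RL: values learned by prediction error,
# attention concentrating on the dimension whose features best track reward.
# All rules and parameters are illustrative, not the published model.
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_feats = 3, 3                     # e.g., color/shape/texture, 3 features each
values = np.zeros((n_dims, n_feats))       # learned value per feature
attention = np.ones(n_dims) / n_dims       # attention weight per dimension
alpha, eta = 0.3, 0.2                      # learning rates for values and attention

target = (0, 1)                            # rewarded (dimension, feature), unknown to learner
for trial in range(500):
    stimulus = rng.integers(0, n_feats, size=n_dims)   # one feature per dimension
    present = values[np.arange(n_dims), stimulus]
    predicted = np.sum(attention * present)
    reward = 1.0 if stimulus[target[0]] == target[1] else 0.0
    delta = reward - predicted
    # Attention-gated value update, then shift attention toward dimensions
    # whose present features carry the largest learned values.
    values[np.arange(n_dims), stimulus] += alpha * attention * delta
    attention += eta * np.abs(values[np.arange(n_dims), stimulus])
    attention = np.clip(attention, 1e-6, None)
    attention /= attention.sum()

print("final attention weights per dimension:", np.round(attention, 2))
```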

  9. Sense and simplicity in HADDOCK scoring: Lessons from CASP‐CAPRI round 1

    PubMed Central

    Vangone, A.; Rodrigues, J. P. G. L. M.; Xue, L. C.; van Zundert, G. C. P.; Geng, C.; Kurkcuoglu, Z.; Nellen, M.; Narasimhan, S.; Karaca, E.; van Dijk, M.; Melquiond, A. S. J.; Visscher, K. M.; Trellet, M.; Kastritis, P. L.

    2016-01-01

    ABSTRACT Our information‐driven docking approach HADDOCK is a consistent top predictor and scorer since the start of its participation in the CAPRI community‐wide experiment. This sustained performance is due, in part, to its ability to integrate experimental data and/or bioinformatics information into the modelling process, and also to the overall robustness of the scoring function used to assess and rank the predictions. In the CASP‐CAPRI Round 1 scoring experiment we successfully selected acceptable/medium quality models for 18/14 of the 25 targets – a top‐ranking performance among all scorers. Considering that for only 20 targets acceptable models were generated by the community, our effective success rate reaches as high as 90% (18/20). This was achieved using the standard HADDOCK scoring function, which, thirteen years after its original publication, still consists of a simple linear combination of intermolecular van der Waals and Coulomb electrostatics energies and an empirically derived desolvation energy term. Despite its simplicity, this scoring function makes sense from a physico‐chemical perspective, encoding key aspects of biomolecular recognition. In addition to its success in the scoring experiment, the HADDOCK server takes the first place in the server prediction category, with 16 successful predictions. Much like our scoring protocol, because of the limited time per target, the predictions relied mainly on either an ab initio center‐of‐mass and symmetry restrained protocol, or on a template‐based approach whenever applicable. These results underline the success of our simple but sensible prediction and scoring scheme. Proteins 2017; 85:417–423. © 2016 Wiley Periodicals, Inc. PMID:27802573
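
    The scoring function described, a simple linear combination of van der Waals, electrostatic, and desolvation terms, can be sketched as below; the weights are illustrative placeholders rather than the published HADDOCK coefficients.

```python
# Sketch of a HADDOCK-style linear scoring function: a weighted sum of
# intermolecular van der Waals, Coulomb electrostatics, and an empirical
# desolvation term. Weights are illustrative placeholders.
def haddock_like_score(e_vdw, e_elec, e_desolv,
                       w_vdw=1.0, w_elec=0.2, w_desolv=1.0):
    """Lower (more negative) scores indicate better-ranked models."""
    return w_vdw * e_vdw + w_elec * e_elec + w_desolv * e_desolv

# Example: rank three hypothetical docking models by score.
models = {"model_1": (-40.0, -250.0, -5.0),
          "model_2": (-55.0, -120.0, -12.0),
          "model_3": (-30.0, -400.0, 3.0)}
ranked = sorted(models, key=lambda m: haddock_like_score(*models[m]))
print("ranking (best first):", ranked)
```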

  10. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  11. Hollow-Fiber Cartridges: Model Systems for Virus Removal from Blood

    NASA Astrophysics Data System (ADS)

    Jacobitz, Frank; Menon, Jeevan

    2005-11-01

    Aethlon Medical is developing a hollow-fiber hemodialysis device designed to remove viruses and toxins from blood. Possible target viruses include HIV and pox-viruses. The filter could reduce virus and viral toxin concentration in the patient's blood, delaying illness so the patient's immune system can fight off the virus. In order to optimize the design of such a filter, the fluid mechanics of the device is both modeled analytically and investigated experimentally. The flow configuration of the proposed device is that of Starling flow. Polysulfone hollow-fiber dialysis cartridges were used. The cartridges are charged with water as a model fluid for blood, and fluorescent latex beads are used in the experiments as a model for viruses. In the experiments, properties of the flow through the cartridge are determined through pressure and volume flow rate measurements of water. The removal of latex beads, which are captured in the porous walls of the fibers, was measured spectrophotometrically. Coefficients derived from these experiments are used in the analytical model of the flow, and removal predictions from the model are compared with those obtained from the experiments.

  12. Ray tracing through a hexahedral mesh in HADES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson, G L; Aufderheide, M B

    In this paper we describe a new ray tracing method targeted for inclusion in HADES. The algorithm tracks rays through three-dimensional tetrakis hexahedral mesh objects, like those used by the ARES code to model inertial confinement experiments.

  13. BESMEX: Bering Sea marine mammal experiment. [with the primary target species being the walrus and bowhead whale

    NASA Technical Reports Server (NTRS)

    Ray, G. C.; Wartzok, D.

    1974-01-01

    Predictive ecological models are being studied for the management and conservation of the walrus and the bowhead whale in the Bering Sea. The influence of sea ice on the distribution and carrying capacity of the area for these two mammals is to be investigated, with the primary target species being the walrus. Remote sensing and radio tracking are considered requirements for assessing the walrus ecosystem.

  14. Diffuse characteristics study of laser target board using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yang, Pengling; Wu, Yong; Wang, Zhenbao; Tao, Mengmeng; Wu, Junjie; Wang, Ping; Yan, Yan; Zhang, Lei; Feng, Gang; Zhu, Jinghui; Feng, Guobin

    2013-05-01

    In this paper, the Torrance-Sparrow and Oren-Nayar models are adopted to study the diffuse characteristics of a laser target board. The models, which are based on geometric optics, assume that rough surfaces are made up of a series of symmetric V-groove cavities with different slopes at the microscopic level. The distribution of the V-groove slopes is modeled with a Beckmann distribution function, and every microfacet of a V-groove cavity is assumed to behave like a perfect mirror, meaning the reflected ray obeys the Fresnel law at the microfacet. The masking and shadowing effects of the rough surface are also taken into account through a geometric attenuation factor. A Monte Carlo method is used to simulate the diffuse reflectance distribution of laser target boards with different materials and processing technology, and all of the calculated results are verified by experiment. It is shown that the profile of the bidirectional reflectance distribution curve is lobe-shaped, with the maximum lying along the mirror-reflection direction. The profile is narrower for lower roughness values and broader for higher roughness values. The refractive index of the target material also influences the intensity and distribution of the diffuse reflectance of the laser target surface.
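
    A reduced, two-dimensional sketch of the simulation idea described above is given below: facet tilts are drawn from a Gaussian (Beckmann-like) slope distribution, each ray is mirror-reflected at its facet and weighted by a Fresnel factor (the Schlick approximation stands in for the exact law here), and the outgoing angles are histogrammed. The geometric attenuation (masking/shadowing) term of the full model is omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np

def fresnel_schlick(cos_i, n=1.5):
    """Schlick approximation to the Fresnel reflectance (stand-in for the exact law)."""
    r0 = ((n - 1.0) / (n + 1.0)) ** 2
    return r0 + (1.0 - r0) * (1.0 - np.clip(cos_i, 0.0, 1.0)) ** 5

def reflectance_profile(theta_i_deg=30.0, rms_slope=0.3, n=1.5,
                        n_rays=200_000, n_bins=90, seed=0):
    """2-D Monte Carlo sketch of scattering from a rough microfacet surface."""
    rng = np.random.default_rng(seed)
    theta_i = np.radians(theta_i_deg)
    tilt = np.arctan(rms_slope * rng.standard_normal(n_rays))  # facet tilt angles
    cos_local = np.cos(theta_i - tilt)                         # local incidence cosine
    theta_r = theta_i - 2.0 * tilt                             # mirror-reflected direction
    weight = fresnel_schlick(cos_local, n)
    keep = (np.abs(theta_r) < np.pi / 2) & (cos_local > 0.0)   # rays leaving the surface
    hist, edges = np.histogram(np.degrees(theta_r[keep]), bins=n_bins,
                               range=(-90.0, 90.0), weights=weight[keep])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / hist.max()
```

    Under these assumptions the resulting histogram is lobe-shaped about the mirror direction and broadens as the RMS slope (roughness) increases, consistent with the trends reported in the abstract.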

  15. Target fragmentation in radiobiology

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.; Townsend, Lawrence W.

    1993-01-01

    Nuclear reactions in biological systems produce low-energy fragments of the target nuclei, seen as local events of high linear energy transfer (LET). A nuclear-reaction formalism is used to evaluate the nuclear-induced fields within biosystems and their effects within several biological models. On the basis of direct ionization interactions alone, one anticipates high-energy protons to have a quality factor and relative biological effectiveness (RBE) of unity. Target fragmentation contributions raise the effective quality factor of 10 GeV protons to 3.3, in reasonable agreement with RBE values for induced micronuclei in bean sprouts. Application of the Katz model indicates that the relative increase in RBE with decreasing exposure observed in cell survival experiments with 160 MeV protons is related solely to target fragmentation events. Target fragment contributions to lens opacity give an RBE of 1.4 for 2 GeV protons, in agreement with the work of Lett and Cox. Predictions are made for the effective RBE for Harderian gland tumors induced by high-energy protons. An exposure model for lifetime cancer risk is derived from NCRP 98 risk tables, and protraction effects are examined for proton and helium ion exposures. The implications of dose-rate enhancement effects on space radiation protection are considered.

  16. The serial nature of the masked onset priming effect revisited.

    PubMed

    Mousikou, Petroula; Coltheart, Max

    2014-01-01

    Reading aloud is faster when target words/nonwords are preceded by masked prime words/nonwords that share their first sound with the target (e.g., save-SINK) compared to when primes and targets are unrelated to each other (e.g., farm-SINK). This empirical phenomenon is the masked onset priming effect (MOPE) and is known to be due to serial left-to-right processing of the prime by a sublexical reading mechanism. However, the literature in this domain lacks a critical experiment. It is possible that when primes are real words their orthographic/phonological representations are activated in parallel and holistically during prime presentation, so any phoneme overlap between primes and targets (and not just initial-phoneme overlap) could facilitate target reading aloud. This is the prediction made by the only computational models of reading aloud that are able to simulate the MOPE, namely the DRC1.2.1, CDP+, and CDP++ models. We tested this prediction in the present study and found that initial-phoneme overlap (blip-BEST), but not end-phoneme overlap (flat-BEST), facilitated target reading aloud compared to no phoneme overlap (junk-BEST). These results provide support for a reading mechanism that operates serially and from left to right, yet are inconsistent with all existing computational models of single-word reading aloud.

  17. Targeted treatment trials for tuberous sclerosis and autism: no longer a dream.

    PubMed

    Sahin, Mustafa

    2012-10-01

    Genetic disorders that present with a high incidence of autism spectrum disorders (ASD) offer tremendous potential both for elucidating the underlying neurobiology of ASD and identifying therapeutic drugs and/or drug targets. As a result, clinical trials for genetic disorders associated with ASD are no longer a hope for the future but rather an exciting reality whose time has come. Tuberous sclerosis complex (TSC) is one such genetic disorder that presents with ASD, epilepsy, and intellectual disability. Cell culture and mouse model experiments have identified the mTOR pathway as a therapeutic target in this disease. This review summarizes the advantages of using TSC as model of ASD and the recent advances in the translational and clinical treatment trials in TSC. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. The Social Regulation of Emotion: An Integrative, Cross-Disciplinary Model.

    PubMed

    Reeck, Crystal; Ames, Daniel R; Ochsner, Kevin N

    2016-01-01

    Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems. Copyright © 2015. Published by Elsevier Ltd.

  19. The Social Regulation of Emotion: An Integrative, Cross-Disciplinary Model

    PubMed Central

    Reeck, Crystal; Ames, Daniel R.; Ochsner, Kevin N.

    2018-01-01

    Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the ‘social regulatory cycle’, specifies at multiple levels of description the act of regulating another person’s emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems. PMID:26564248

  20. Heat transfer to a heavy liquid metal in curved geometry: Code validation and CFD simulation for the MEGAPIE lower target

    NASA Astrophysics Data System (ADS)

    Dury, Trevor V.

    2006-06-01

    The ESS and SINQ Heat Emitting Temperature Sensing Surface (HETSS) mercury experiments have been used to validate the Computational Fluid Dynamics (CFD) code CFX-4 employed in designing the lower region of the international liquid metal cooled MEGAPIE target, to be installed at SINQ, PSI, in 2006. Conclusions were drawn on the best turbulence models and degrees of mesh refinement to apply, and a new CFD model of the MEGAPIE geometry was made, based on the CATIA CAD design of the exact geometry constructed. This model contained the fill and drain tubes as well as the bypass feed duct, with the differences in relative vertical length due to thermal expansion being considered between these tubes and the window. Results of the mercury experiments showed that CFD calculations can be trusted to give peak target window temperature under normal operational conditions to within about ±10%. The target nozzle actually constructed varied from the theoretical design model used for CFD due to the need to apply more generous separation distances between the nozzle and the window. In addition, the bypass duct contraction approaching the nozzle exit was less sharp compared with earlier designs. Both of these changes modified the bypass jet penetration and coverage of the heated window zone. Peak external window temperature with a 1.4 mA proton beam and steady-state operation is now predicted to be 375 °C, with internal temperature 354.0 °C (about 32 °C above earlier predictions). Increasing bypass flow from 2.5 to 3.0 kg/s lowers these peak temperatures by about 12 °C. Stress analysis still needs to be made, based on these thermal data.

  1. Debiasing affective forecasting errors with targeted, but not representative, experience narratives.

    PubMed

    Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J

    2016-10-01

    To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Uncontrolled Manifold Reference Feedback Control of Multi-Joint Robot Arms

    PubMed Central

    Togo, Shunta; Kagawa, Takahiro; Uno, Yoji

    2016-01-01

    The brain must coordinate with redundant bodies to perform motion tasks. The aim of the present study is to propose a novel control model that predicts the characteristics of human joint coordination at a behavioral level. To evaluate joint coordination, an uncontrolled manifold (UCM) analysis that focuses on the trial-to-trial variance of joints has been proposed. The UCM is a nonlinear manifold associated with redundant kinematics. In this study, we directly applied the notion of the UCM to our proposed control model, called "UCM reference feedback control." To simplify the problem, the present study considered how the redundant joints are controlled to regulate a given target hand position. Conventional methods pre-determine a unique target joint trajectory by inverse kinematics or another optimization method. In contrast, our proposed control method generates a UCM as a control target at each time step. The target UCM is a subspace of joint angles whose variability does not affect the hand position. The joint combination in the target UCM is then selected so as to minimize a cost function consisting of the joint torque and the torque change. To examine whether the proposed method could reproduce human-like joint coordination, we conducted simulation and measurement experiments. In the simulation experiments, a three-link arm with a shoulder, elbow, and wrist regulated a one-dimensional hand target using the proposed method. In the measurement experiments, subjects performed a one-dimensional target-tracking task. The kinematics, dynamics, and joint coordination were quantitatively compared with the simulation data of the proposed method. As a result, UCM reference feedback control quantitatively reproduced, as observed in the human subjects, the differences in mean final hand position across initial postures, the peaks of the bell-shaped tangential hand velocity, the sum of the squared torque, the mean torque change, the variance components, and the index of synergy. We concluded that UCM reference feedback control can reproduce human-like joint coordination. Inferences about motor control in the human central nervous system based on the proposed method are discussed. PMID:27462215
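
    The geometric core of the approach, the uncontrolled manifold as the set of joint changes that leave the task variable unchanged, can be sketched as the null space of the task Jacobian. The snippet below does this for a planar three-link arm regulating a one-dimensional hand coordinate (taken here as hand height); link lengths and the configuration are illustrative, and the authors' full controller, including the torque and torque-change cost, is not reproduced.

```python
import numpy as np
from scipy.linalg import null_space

LENGTHS = np.array([0.30, 0.25, 0.15])            # illustrative link lengths (m)

def hand_height(q):
    """Vertical hand position of a planar shoulder-elbow-wrist arm."""
    angles = np.cumsum(q)                          # absolute link angles
    return float(np.sum(LENGTHS * np.sin(angles)))

def task_jacobian(q):
    """1x3 Jacobian of hand height with respect to the joint angles."""
    angles = np.cumsum(q)
    J = np.zeros((1, 3))
    for j in range(3):                             # joint j moves links j..2
        J[0, j] = np.sum(LENGTHS[j:] * np.cos(angles[j:]))
    return J

q = np.radians([40.0, 60.0, -20.0])                # an example configuration
h0 = hand_height(q)
ucm_basis = null_space(task_jacobian(q))           # 3x2 basis of joint-space directions
# Moving the joints along any combination of ucm_basis columns leaves h0
# unchanged to first order: this subspace is the (local) uncontrolled manifold.
```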

  3. Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie

    2007-01-01

    Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is always computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the auto regressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification and tracking, whereas target localization requires the collaboration between wireless sensor nodes for improving the accuracy and robustness. For carrying out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, some practically feasible algorithms, such as the ARMA model and the 2-D integer lifting wavelet transform, are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in distributed P2P signal processing framework considering the tradeoff between the accuracy and energy consumption. Finally, a real world target tracking experiment is illustrated. Results from experimental implementations have demonstrated that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.
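
    As a small illustration of the ARMA-based prediction that the sensor nodes use for tracking, the sketch below computes a one-step-ahead forecast from previously fitted coefficients. The coefficient values, array layout, and function name are assumptions, not the paper's implementation.

```python
import numpy as np

def arma_one_step(history, residuals, ar_coefs, ma_coefs, intercept=0.0):
    """One-step-ahead ARMA(p, q) forecast with known coefficients (illustrative).

    history   : past observations, newest last (at least p values)
    residuals : past one-step prediction errors, newest last (at least q values)
    """
    p, q = len(ar_coefs), len(ma_coefs)
    ar_part = np.dot(ar_coefs, history[-p:][::-1]) if p else 0.0
    ma_part = np.dot(ma_coefs, residuals[-q:][::-1]) if q else 0.0
    return intercept + ar_part + ma_part

# Example with a hypothetical ARMA(2, 1) model of a target's sensed signal:
forecast = arma_one_step(history=[1.2, 1.5, 1.9], residuals=[0.05],
                         ar_coefs=[0.6, 0.3], ma_coefs=[0.2])
```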

  4. Effects of ongoing task context and target typicality on prospective memory performance: the importance of associative cueing

    NASA Technical Reports Server (NTRS)

    Nowinski, Jessica Lang; Dismukes, Key R.

    2005-01-01

    Two experiments examined whether prospective memory performance is influenced by contextual cues. In our automatic activation model, any information available at encoding and retrieval should aid recall of the prospective task. The first experiment demonstrated an effect of the ongoing task context; performance was better when information about the ongoing task present at retrieval was available at encoding. Performance was also improved by a strong association between the prospective memory target as it was presented at retrieval and the intention as it was encoded. Experiment 2 demonstrated boundary conditions of the ongoing task context effect, which implicate the association between the ongoing and prospective tasks formed at encoding as the source of the context effect. The results of this study are consistent with predictions based on automatic activation of intentions.

  5. Experimental measurements of hydrodynamic instabilities on NOVA of relevance to astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budil, K S; Cherfils, C; Drake, R P

    1998-09-11

    Large lasers such as Nova allow the possibility of achieving regimes of high energy densities in plasmas of millimeter spatial scales and nanosecond time scales. In those plasmas where thermal conductivity and viscosity do not play a significant role, the hydrodynamic evolution is suitable for benchmarking hydrodynamics modeling in astrophysical codes. Several experiments on Nova examine hydrodynamically unstable interfaces. A typical Nova experiment uses a gold millimeter-scale hohlraum to convert the laser energy to a 200 eV blackbody source lasting about a nanosecond. The x-rays ablate a planar target, generating a series of shocks and accelerating the target. The evolving areal density is diagnosed by time-resolved radiography, using a second x-ray source. Data from several experiments are presented and diagnostic techniques are discussed.

  6. Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data

    NASA Astrophysics Data System (ADS)

    Klinder, Tobias; Wendland, Hannes; Wachter-Stehle, Irina; Roundhill, David; Lorenz, Cristian

    2015-03-01

    The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities are highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed of rigid bodies, with the articulations represented as rigid-body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments carried out on 3D fetal MRI data yield an average error per case of 12.03 ± 3.36 mm between target and estimated landmark positions.
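
    One building block of the pose estimation described above, the least-squares rigid alignment of a body's model landmarks to target landmarks, can be sketched with the Kabsch algorithm. A full articulated fit would repeat this per rigid body under the joint constraints; the function below is an illustrative single-body version, not the authors' complete optimizer.

```python
import numpy as np

def fit_rigid_transform(model_pts, target_pts):
    """Least-squares rigid alignment (Kabsch) of corresponding 3-D landmarks.

    Both inputs are (N, 3) arrays of matching points. Returns (R, t) such
    that R @ p + t maps the model landmarks onto their targets in the
    least-squares sense."""
    mc = model_pts.mean(axis=0)
    tc = target_pts.mean(axis=0)
    H = (model_pts - mc).T @ (target_pts - tc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ mc
    return R, t
```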

  7. Modelling short pulse, high intensity laser plasma interactions

    NASA Astrophysics Data System (ADS)

    Evans, R. G.

    2006-06-01

    Modelling the interaction of ultra-intense laser pulses with solid targets is made difficult through the large range of length and time scales involved in the transport of relativistic electrons. An implicit hybrid PIC-fluid model using the commercial code LSP (LSP is marketed by MRC (Albuquerque), New Mexico, USA) reveals a variety of complex phenomena which seem to be borne out in experiments and some existing theories.

  8. Selecting among competing models of electro-optic, infrared camera system range performance

    USGS Publications Warehouse

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on the Akaike’s Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for some distances other than the specific set for which experimental trials were conducted.
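
    For readers unfamiliar with the selection criterion, the sketch below shows how AIC and the resulting Akaike weights are computed from each fitted model's maximized log-likelihood and parameter count. The numbers in the example are hypothetical, not values from the paper.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike's Information Criterion for one fitted model."""
    return 2.0 * n_params - 2.0 * log_likelihood

def akaike_weights(aic_values):
    """Relative support for each candidate model given its AIC."""
    aic_values = np.asarray(aic_values, dtype=float)
    delta = aic_values - aic_values.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical log-likelihoods and parameter counts for three candidate
# range-performance models fitted to the same observer-response data:
candidates = [(-412.3, 3), (-409.8, 5), (-415.0, 2)]
weights = akaike_weights([aic(ll, k) for ll, k in candidates])
```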

  9. Interaction of fluorescently labeled pyrrole-imidazole polyamide probes with fixed and living murine and human cells.

    PubMed

    Nozeret, Karine; Loll, François; Cardoso, Gildas Mouta; Escudé, Christophe; Boutorine, Alexandre S

    2018-06-01

    Pericentromeric heterochromatin plays important roles in controlling gene expression and cellular differentiation. Fluorescent pyrrole-imidazole polyamides targeting murine pericentromeric DNA (major satellites) can be used for the visualization of pericentromeric heterochromatin foci in live mouse cells. New derivatives targeting human repeated DNA sequences (α-satellites) were synthesized and their interaction with target DNA was characterized. The possibility of using major-satellite- and α-satellite-binding polyamides as tools for staining pericentromeric heterochromatin was further investigated in fixed and living mouse and human cells. The staining that was previously observed using the mouse model was further characterized and optimized, but remained limited regarding the fluorophores that can be used. The promising results regarding the staining in the mouse model could not be extended to the human model. Experiments performed in human cells showed chromosomal DNA staining without selectivity. Factors limiting the use of fluorescent polyamides, in particular probe aggregation in the cytoplasm, were investigated. Results are discussed with regard to the structure and affinity of the probes, the density of target sites, and chromatin accessibility in both models. Copyright © 2018 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  10. A Balanced Portfolio Model For Improving Health: Concept And Vermont's Experience.

    PubMed

    Hester, James

    2018-04-01

    A successful strategy for improving population health requires acting in several sectors by implementing a portfolio of interventions. The mix of interventions should be both tailored to meet the community's needs and balanced in several dimensions-for example, time frame, level of risk, and target population. One obstacle is finding sustainable financing for both the interventions and the community infrastructure needed. This article first summarizes Vermont's experience as a laboratory for health reform. It then presents a conceptual model for a community-based population health strategy, using a balanced portfolio and diversified funding approaches. The article then reviews Vermont's population health initiative, including an example of a balanced portfolio and lessons learned from the state's experience.

  11. Differential cross-sections measurements for hadrontherapy: 50 MeV/A 12C reactions on H, C, O, Al and natTi targets

    NASA Astrophysics Data System (ADS)

    Divay, C.; Colin, J.; Cussol, D.; Finck, Ch.; Karakaya, Y.; Labalme, M.; Rousseau, M.; Salvador, S.; Vanstalle, M.

    2017-09-01

    In order to keep the benefits of a carbon-ion treatment, the dose and biological effects induced by secondary fragments must be taken into account when simulating the treatment plan. The Monte Carlo simulation codes used for treatment planning rely on nuclear models that are constrained by experimental data. It is hence necessary to have precise measurements of the production rates of these fragments all along the beam path and over the whole energy range of the beam. In this context, a series of experiments aiming to measure the double-differential fragmentation cross-sections of carbon on thin targets of medical interest has been started by our collaboration. In March 2015, an experiment was performed with a 50 MeV/nucleon 12C beam at GANIL. During this experiment, energy and angular differential cross-section distributions on H, C, O, Al and natTi were measured. In the following, the experimental set-up and analysis process are briefly described and some experimental results are presented. Comparisons with several exit-channel models from PHITS and Geant4 show large discrepancies with the experimental data. Finally, the homemade Sliipie model is briefly presented and preliminary results are compared to the data, with a promising outcome.

  12. Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context. Experiments and an Information-Theoretic Ambiguity Model

    PubMed Central

    Grau-Moya, Jordi; Ortega, Pedro A.; Braun, Daniel A.

    2016-01-01

    A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects’ choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects’ choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain. PMID:27124723

  13. Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context. Experiments and an Information-Theoretic Ambiguity Model.

    PubMed

    Grau-Moya, Jordi; Ortega, Pedro A; Braun, Daniel A

    2016-01-01

    A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects' choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects' choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.
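
    One simple way to write down a valuation that interpolates between Bayes-optimal expected utility and robust (worst-case-leaning) decision-making is the exponential, free-energy-style certainty equivalent sketched below. It is offered as a generic illustration of the model class named in the abstract, not the authors' exact functional; the parameterization and example values are assumptions.

```python
import numpy as np

def ambiguity_sensitive_value(posterior, expected_utils, beta):
    """Free-energy-style valuation under model uncertainty (illustrative).

    posterior      : probabilities over candidate environment models
    expected_utils : expected utility of an option under each model
    beta           : ambiguity attitude; beta -> 0 recovers the Bayes-optimal
                     expectation, beta > 0 weights unfavorable models more
                     (ambiguity-averse), beta < 0 is ambiguity-seeking.
    """
    p = np.asarray(posterior, dtype=float)
    u = np.asarray(expected_utils, dtype=float)
    if abs(beta) < 1e-9:
        return float(np.dot(p, u))
    return float(-np.log(np.dot(p, np.exp(-beta * u))) / beta)

# Example: an ambiguous urn that may be 20%, 50%, or 80% rewarding.
value = ambiguity_sensitive_value([1/3, 1/3, 1/3], [0.2, 0.5, 0.8], beta=2.0)
```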

  14. Comparing different kinds of words and word-word relations to test an habituation model of priming.

    PubMed

    Rieth, Cory A; Huber, David E

    2017-06-01

    Huber and O'Reilly (2003) proposed that neural habituation exists to solve a temporal parsing problem, minimizing blending between one word and the next when words are visually presented in rapid succession. They developed a neural dynamics habituation model, explaining the finding that short duration primes produce positive priming whereas long duration primes produce negative repetition priming. The model contains three layers of processing, including a visual input layer, an orthographic layer, and a lexical-semantic layer. The predicted effect of prime duration depends both on this assumed representational hierarchy and the assumption that synaptic depression underlies habituation. The current study tested these assumptions by comparing different kinds of words (e.g., words versus non-words) and different kinds of word-word relations (e.g., associative versus repetition). For each experiment, the predictions of the original model were compared to an alternative model with different representational assumptions. Experiment 1 confirmed the prediction that non-words and inverted words require longer prime durations to eliminate positive repetition priming (i.e., a slower transition from positive to negative priming). Experiment 2 confirmed the prediction that associative priming increases and then decreases with increasing prime duration, but remains positive even with long duration primes. Experiment 3 replicated the effects of repetition and associative priming using a within-subjects design and combined these effects by examining target words that were expected to repeat (e.g., viewing the target word 'BACK' after the prime phrase 'back to'). These results support the originally assumed representational hierarchy and more generally the role of habituation in temporal parsing and priming. Copyright © 2017 Elsevier Inc. All rights reserved.
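
    The key dynamical ingredient named above, habituation through synaptic depression, can be illustrated with a resource-depletion update of the kind used in many neural dynamics models. This is a generic sketch rather than the exact equations of the Huber and O'Reilly model, and the time constants are illustrative.

```python
import numpy as np

def habituation_response(drive, dt=0.001, tau_rec=0.5, depletion=5.0):
    """Discrete-time sketch of habituation via synaptic depression.

    drive : presynaptic activity over time (values in [0, 1]).
    The transmitted output is activity times available resources; resources
    are consumed in proportion to the output and recover with time constant
    tau_rec. A brief prime leaves resources largely intact (positive-priming
    regime); a long prime depletes them and suppresses the response to an
    immediately following target (negative-priming regime)."""
    resources = 1.0
    out = np.zeros(len(drive))
    for t, a in enumerate(drive):
        o = a * resources
        resources += dt * ((1.0 - resources) / tau_rec - depletion * o)
        resources = max(resources, 0.0)
        out[t] = o
    return out

# Example: a 50 ms prime vs. a 400 ms prime at unit drive (dt = 1 ms)
short_prime = habituation_response(np.ones(50))
long_prime = habituation_response(np.ones(400))
```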

  15. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    NASA Astrophysics Data System (ADS)

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-11-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study.

  16. Computational Transport Modeling of High-Energy Neutrons Found in the Space Environment

    NASA Technical Reports Server (NTRS)

    Cox, Brad; Theriot, Corey A.; Rohde, Larry H.; Wu, Honglu

    2012-01-01

    The high charge and high energy (HZE) particle radiation environment in space interacts with spacecraft materials and the human body to create a population of neutrons encompassing a broad kinetic energy spectrum. As an HZE ion penetrates matter, there is an increasing chance of fragmentation as penetration depth increases. When an ion fragments, secondary neutrons are released with velocities up to that of the primary ion, giving some neutrons very long penetration ranges. These secondary neutrons have a high relative biological effectiveness, are difficult to shield effectively, and can cause more biological damage than the primary ions in some scenarios. Ground-based irradiation experiments that simulate the space radiation environment must account for this spectrum of neutrons. Using the Particle and Heavy Ion Transport Code System (PHITS), it is possible to simulate a neutron environment that is characteristic of that found in spaceflight. Considering neutron dosimetry, the focus lies on the broad spectrum of recoil protons that are produced in biological targets. In a biological target, dose at a certain penetration depth is primarily dependent upon recoil proton tracks. The PHITS code can be used to simulate a broad-energy neutron spectrum traversing biological targets, and it accounts for the recoil particle population. This project focuses on modeling a neutron beamline irradiation scenario for determining dose at increasing depth in water targets. Energy-deposition events and particle fluence can be simulated by establishing cross-sectional scoring routines at different depths in a target. This type of model is useful for correlating theoretical data with actual beamline radiobiology experiments. Other work exposed human fibroblast cells to a high-energy neutron source to study micronuclei induction in cells at increasing depth behind water shielding. Those findings provide supporting data describing dose vs. depth across a water-equivalent medium. This poster presents PHITS data suggesting an increase in dose, up to roughly 10 cm depth, followed by a continual decrease as neutrons come to a stop in the target.

  17. Social marketing, stages of change, and public health smoking interventions.

    PubMed

    Diehr, Paula; Hannon, Peggy; Pizacani, Barbara; Forehand, Mark; Meischke, Hendrika; Curry, Susan; Martin, Diane P; Weaver, Marcia R; Harris, Jeffrey

    2011-04-01

    As a "thought experiment," the authors used a modified stages of change model for smoking to define homogeneous segments within various hypothetical populations. The authors then estimated the population effect of public health interventions that targeted the different segments. Under most assumptions, interventions that emphasized primary and secondary prevention, by targeting the Never Smoker, Maintenance, or Action segments, resulted in the highest nonsmoking life expectancy. This result is consistent with both social marketing and public health principles. Although the best thing for an individual smoker is to stop smoking, the greatest public health benefit is achieved by interventions that target nonsmokers.

  18. Creating fair lineups for suspects with distinctive features.

    PubMed

    Zarkadi, Theodora; Wade, Kimberley A; Stewart, Neil

    2009-12-01

    In their descriptions, eyewitnesses often refer to a culprit's distinctive facial features. However, in a police lineup, selecting the only member with the described distinctive feature is unfair to the suspect and provides the police with little further information. For fair and informative lineups, the distinctive feature should be either replicated across foils or concealed on the target. In the present experiments, replication produced more correct identifications in target-present lineups--without increasing the incorrect identification of foils in target-absent lineups--than did concealment. This pattern, and only this pattern, is predicted by the hybrid-similarity model of recognition.

  19. The end-state comfort effect in bimanual grip selection.

    PubMed

    Fischman, Mark G; Stodden, David F; Lehman, Davana M

    2003-03-01

    During a unimanual grip selection task in which people pick up a lightweight dowel and place one end against targets at variable heights, the choice of hand grip (overhand vs. underhand) typically depends on the perception of how comfortable the arm will be at the end of the movement: an end-state comfort effect. The two experiments reported here extend this work to bimanual tasks. In each experiment, 26 right-handed participants used their left and right hands to simultaneously pick up two wooden dowels and place either the right or left end against a series of 14 targets ranging from 14 to 210 cm above the floor. These tasks were performed in systematic ascending and descending orders in Experiment 1 and in random order in Experiment 2. Results were generally consistent with predictions of end-state comfort in that, for the extreme highest and lowest targets, participants tended to select opposite grips with each hand. Taken together, our findings are consistent with the concept of constraint hierarchies within a posture-based motion-planning model.

  20. Automated antibody structure prediction using Accelrys tools: Results and best practices

    PubMed Central

    Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa

    2014-01-01

    We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions either using a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted show that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models show that the models are of quite high quality, with local geometry assessment scores similar to that of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271

  1. A multi-electrode biomimetic electrolocation sensor

    NASA Astrophysics Data System (ADS)

    Mayekar, K.; Damalla, D.; Gottwald, M.; Bousack, H.; von der Emde, G.

    2012-04-01

    We present the concept of an active multi-electrode catheter inspired by the electroreceptive system of the weakly electric fish, Gnathonemus petersii. The skin of this fish exhibits numerous electroreceptor organs which are capable of sensing a self induced electrical field. Our sensor is composed of a sending electrode and sixteen receiving electrodes. The electrical field produced by the sending electrode was measured by the receiving electrodes and objects were detected by the perturbation of the electrical field they induce. The intended application of such a sensor is in coronary diagnostics, in particular in distinguishing various types of plaques, which are major causes of heart attack. For calibration of the sensor system, finite element modeling (FEM) was performed. To validate the model, experimental measurements were carried out with two different systems. The physical system was glass tubing with metal and plastic wall insertions as targets. For the control of the experiment and for data acquisition, the software LabView designed for 17 electrodes was used. Different parameters of the electric images were analyzed for the prediction of the electrical properties and size of the inserted targets in the tube. Comparisons of the voltage modulations predicted from the FEM model and the experiments showed a good correspondence. It can be concluded that this novel biomimetic method can be further developed for detailed investigations of atherosclerotic lesions. Finally, we discuss various design strategies to optimize the output of the sensor using different simulated models to enhance target recognition.

  2. Enhanced reproducibility of L-mode plasma discharges via physics-model-based q-profile feedback control in DIII-D

    NASA Astrophysics Data System (ADS)

    Schuster, E.; Wehner, W. P.; Barton, J. E.; Boyer, M. D.; Luce, T. C.; Ferron, J. R.; Holcomb, C. T.; Walker, M. L.; Humphreys, D. A.; Solomon, W. M.; Penaflor, B. G.; Johnson, R. D.

    2017-11-01

    Recent experiments on DIII-D demonstrate the potential of physics-model-based q-profile control to improve reproducibility of plasma discharges. A combined feedforward + feedback control scheme is employed to optimize the current ramp-up phase by consistently achieving target q profiles (Target 1: q_min = 1.3, q_95 = 4.4; Target 2: q_min = 1.65, q_95 = 5.0; Target 3: q_min = 2.1, q_95 = 6.2) at prescribed times during the plasma formation phase (Target 1: t = 1.5 s; Target 2: t = 1.3 s; Target 3: t = 1.0 s). At the core of the control scheme is a nonlinear, first-principles-driven, physics-based, control-oriented model of the plasma dynamics valid for low confinement (L-mode) scenarios. To prevent undesired L-H transitions, a constraint on the maximum allowable total auxiliary power is imposed in addition to the maximum powers for the individual heating and current-drive sources. Experimental results are presented to demonstrate the effectiveness of the combined feedforward + feedback control scheme to consistently achieve the desired target profiles at the predefined times. These results also show how the addition of feedback control significantly improves upon the feedforward-only control solution by reducing the matching error and also how the feedback controller is able to reduce the matching error as the constraint on the maximum allowable total auxiliary power is relaxed while keeping the plasma in L-mode.
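
    The control structure named above, a feedforward command augmented by feedback under an actuator (power) constraint, is sketched below for a single scalar tracking variable. This is a generic PI-type illustration with made-up gains and limits, not the physics-model-based controller used in the experiments.

```python
import numpy as np

def ff_fb_step(target, measured, u_ff, integ, dt, kp=0.8, ki=0.4, u_max=1.0):
    """One step of a feedforward + feedback (PI) actuator command with a hard
    cap standing in for the maximum-allowable-power constraint. Gains, limits,
    and the scalar tracking variable are illustrative only."""
    error = target - measured
    integ += error * dt
    u = u_ff + kp * error + ki * integ
    u_sat = float(np.clip(u, 0.0, u_max))
    if u_sat != u:                  # simple anti-windup while the constraint is active
        integ -= error * dt
    return u_sat, integ

# Example: one control step toward a target value of 1.3 from a measurement of 1.1,
# with a feedforward command of 0.5 and an empty integrator state.
u_cmd, integ_state = ff_fb_step(target=1.3, measured=1.1, u_ff=0.5, integ=0.0, dt=0.02)
```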

  3. Novel characterization of capsule x-ray drive at the National Ignition Facility.

    PubMed

    MacLaren, S A; Schneider, M B; Widmann, K; Hammer, J H; Yoxall, B E; Moody, J D; Bell, P M; Benedetti, L R; Bradley, D K; Edwards, M J; Guymer, T M; Hinkel, D E; Hsing, W W; Kervin, M L; Meezan, N B; Moore, A S; Ralph, J E

    2014-03-14

    Indirect drive experiments at the National Ignition Facility are designed to achieve fusion by imploding a fuel capsule with x rays from a laser-driven hohlraum. Previous experiments have been unable to determine whether a deficit in measured ablator implosion velocity relative to simulations is due to inadequate models of the hohlraum or ablator physics. ViewFactor experiments allow for the first time a direct measure of the x-ray drive from the capsule point of view. The experiments show a 15%-25% deficit relative to simulations and thus explain nearly all of the disagreement with the velocity data. In addition, the data from this open geometry provide much greater constraints on a predictive model of laser-driven hohlraum performance than the nominal ignition target.

  4. Is it better to be average? High and low performance as predictors of employee victimization.

    PubMed

    Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L

    2014-03-01

    Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.

  5. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    PubMed Central

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, examination of pre-cueing effects on attention, particularly endogenous pre-cueing effects, has been relatively little explored outside an eccentricity of 20°. Given the visual field has functional subdivisions that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information of targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred at the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our finding in light of cognitive penetration of perception, and highlight the importance of examining attentional process across a large area of the visual field. PMID:28638353

  6. The Effects of Spatial Endogenous Pre-cueing across Eccentricities.

    PubMed

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, examination of pre-cueing effects on attention, particularly endogenous pre-cueing effects, has been relatively little explored outside an eccentricity of 20°. Given the visual field has functional subdivisions that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information of targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred at the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our finding in light of cognitive penetration of perception, and highlight the importance of examining attentional process across a large area of the visual field.

  7. Discriminating Famous from Fictional Names Based on Lifetime Experience: Evidence in Support of a Signal-Detection Model Based on Finite Mixture Distributions

    ERIC Educational Resources Information Center

    Bowles, Ben; Harlow, Iain M.; Meeking, Melissa M.; Kohler, Stefan

    2012-01-01

    It is widely accepted that signal-detection mechanisms contribute to item-recognition memory decisions that involve discriminations between targets and lures based on a controlled laboratory study episode. Here, the authors employed mathematical modeling of receiver operating characteristics (ROC) to determine whether and how a signal-detection…

  8. Bubbles in Sediments

    DTIC Science & Technology

    1999-09-30

    saturated poroelastic medium. The transition matrix scattering formalism was used to develop the scattered acoustic field(s) such that appropriate...sediment increases from a fluid model (simplest) to a fluid-saturated poroelastic model (most complex). Laboratory experiments in carefully quantified...of a linear acoustic field from a bubble, collection of bubbles, or other targets embedded in a fluid-saturated sediment are not well known. This

  9. A new setup for experimental investigations of solar wind sputtering

    NASA Astrophysics Data System (ADS)

    Szabo, Paul S.; Berger, Bernhard M.; Chiba, Rimpei; Stadlmayr, Reinhard; Aumayr, Friedrich

    2017-04-01

    The surfaces of Mercury and the Moon are not shielded by a thick atmosphere and therefore are exposed to bombardment by charged particles, ultraviolet photons and micrometeorites. These influences lead to an alteration and erosion of the surface, and the emitted atoms and molecules form a thin atmosphere, an exosphere, around these celestial bodies [1]. The composition of these exospheres is connected to the surface composition and has been subject to flyby measurements by satellites. Model calculations that include the erosion mechanisms can be used as a method of comparison for such exosphere measurements and allow conclusions about the surface composition. Surface sputtering induced by solar wind ions represents a major contribution to the erosion of the surfaces of Mercury and the Moon [1]. However, the experimental database for sputtering of the respective analogue materials by solar wind ions, which would be necessary for exact modelling of the space weathering process, is still in its early stages. Sputtering experiments have been performed at TU Wien during the past years using a quartz crystal microbalance (QCM) technique [2]. Target material is deposited on the quartz surface as a thin layer, and the quartz's resonance frequency is measured under ion bombardment. The sputter yield can then be calculated from the frequency change and the ion current [2]. In order to remove the restrictions of a thin-layer QCM target and simplify experiments with composite targets, a new QCM catcher setup was developed. In the new design, the QCM is placed beside the target holder and acts as a catcher for material that is sputtered from the target surface. By comparing the catcher signal to reference measurements and SDTrimSP simulations [3], the target sputter yield can be determined. In order to test the setup, we performed experiments with an Au-coated QCM target under 2 keV Ar+ bombardment so that both the mass changes at the target and at the catcher could be obtained simultaneously. The results coincide very well with SDTrimSP predictions, showing the feasibility of the new design [4]. Furthermore, Fe-coated QCM targets with different surface roughness were investigated in the new setup. The surface roughness represents a key factor for the solar-wind-induced erosion of planetary or lunar rocks: it has a strong influence on the absolute sputtering yield as well as on the spatial distribution of sputtered particles and was therefore investigated. As a next step, sputtering experiments with Mercury or Moon analogues will be conducted. Knowledge gained in the course of this research will enhance the understanding of surface sputtering by solar wind ions and will be used to improve theoretical models of Mercury's and the Moon's exosphere formation. References: [1] E. Kallio, et al., Planetary and Space Science, 56, 1506 (2008). [2] G. Hayderer, et al., Review of Scientific Instruments, 70, 3696 (1999). [3] A. Mutzke, R. Schneider, W. Eckstein, R. Dohmen, SDTrimSP: Version 5.00, IPP Report, 12/8, (2011). [4] B. M. Berger, P. S. Szabo, R. Stadlmayr, F. Aumayr, Nucl. Instrum. Meth. Phys. Res. B, doi: 10.1016/j.nimb.2016.11.039
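
    The step "the sputter yield can then be calculated from the frequency change and the ion current" can be illustrated with the standard Sauerbrey relation, the usual thin-film QCM mass-to-frequency conversion. The crystal frequency, target material, and charge state in the sketch are illustrative assumptions, not the parameters of the actual setup, which may also apply additional corrections.

```python
def sputter_yield_from_qcm(delta_f_hz, duration_s, current_a, area_cm2,
                           f0_hz=6.0e6, target_mass_amu=55.85, charge_state=1):
    """Sputter yield (atoms removed per incident ion) from a QCM frequency
    shift, assuming the standard Sauerbrey relation for a thin rigid film.
    The 6 MHz crystal, Fe target mass and singly charged ions are
    illustrative assumptions."""
    rho_q = 2.648            # quartz density, g/cm^3
    mu_q = 2.947e11          # quartz shear modulus, g/(cm*s^2)
    amu_g = 1.66054e-24      # atomic mass unit, g
    e_charge = 1.602e-19     # elementary charge, C

    # Sauerbrey: delta_f = -(2 f0^2 / sqrt(rho_q * mu_q)) * delta_m / A,
    # so a positive frequency shift corresponds to mass removed from the crystal.
    c_f = 2.0 * f0_hz ** 2 / (rho_q * mu_q) ** 0.5     # mass sensitivity, Hz*cm^2/g
    mass_removed = delta_f_hz * area_cm2 / c_f          # grams (positive for sputter loss)

    atoms_removed = mass_removed / (target_mass_amu * amu_g)
    ions_incident = current_a * duration_s / (charge_state * e_charge)
    return atoms_removed / ions_incident
```

    With these constants, a 6 MHz crystal has a mass sensitivity of roughly 0.08 Hz per ng/cm², which is the order of magnitude usually quoted for QCM work.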

  10. Observing a light dark matter beam with neutrino experiments

    NASA Astrophysics Data System (ADS)

    deNiverville, Patrick; Pospelov, Maxim; Ritz, Adam

    2011-10-01

    We consider the sensitivity of fixed-target neutrino experiments at the luminosity frontier to light stable states, such as those present in models of MeV-scale dark matter. To ensure the correct thermal relic abundance, such states must annihilate via light mediators, which in turn provide an access portal for direct production in colliders or fixed targets. Indeed, this framework endows the neutrino beams produced at fixed-target facilities with a companion “dark matter beam,” which may be detected via an excess of elastic scattering events off electrons or nuclei in the (near-)detector. We study the high-luminosity proton fixed-target experiments at LSND and MiniBooNE, and determine that the ensuing sensitivity to light dark matter generally surpasses that of other direct probes. For scenarios with a kinetically mixed U(1)' vector mediator of mass m_V, we find that a large volume of parameter space is excluded for m_DM ~ 1-5 MeV, covering vector masses 2m_DM ≲ m_V ≲ m_η and a range of kinetic mixing parameters reaching as low as κ ~ 10^-5. The corresponding MeV-scale dark matter scenarios motivated by an explanation of the galactic 511 keV line are thus strongly constrained.

  11. Implosion and heating experiments of fast ignition targets by Gekko-XII and LFEX lasers

    NASA Astrophysics Data System (ADS)

    Shiraga, H.; Fujioka, S.; Nakai, M.; Watari, T.; Nakamura, H.; Arikawa, Y.; Hosoda, H.; Nagai, T.; Koga, M.; Kikuchi, H.; Ishii, Y.; Sogo, T.; Shigemori, K.; Nishimura, H.; Zhang, Z.; Tanabe, M.; Ohira, S.; Fujii, Y.; Namimoto, T.; Sakawa, Y.; Maegawa, O.; Ozaki, T.; Tanaka, K. A.; Habara, H.; Iwawaki, T.; Shimada, K.; Key, M.; Norreys, P.; Pasley, J.; Nagatomo, H.; Johzaki, T.; Sunahara, A.; Murakami, M.; Sakagami, H.; Taguchi, T.; Norimatsu, T.; Homma, H.; Fujimoto, Y.; Iwamoto, A.; Miyanaga, N.; Kawanaka, J.; Kanabe, T.; Jitsuno, T.; Nakata, Y.; Tsubakimoto, K.; Sueda, K.; Kodama, R.; Kondo, K.; Morio, N.; Matsuo, S.; Kawasaki, T.; Sawai, K.; Tsuji, K.; Murakami, H.; Sarukura, N.; Shimizu, T.; Mima, K.; Azechi, H.

    2013-11-01

    The FIREX-1 project, the goal of which is to demonstrate fuel heating up to 5 keV by the fast ignition scheme, has been carried out since 2003, including construction and tuning of the LFEX laser and integrated experiments. Implosion and heating experiments of fast ignition targets have been performed since 2009 with the Gekko-XII and LFEX lasers. A deuterated polystyrene shell target was imploded with the 0.53-μm Gekko-XII, and the 1.053-μm beam of the LFEX laser was injected through a gold cone attached to the shell to generate hot electrons to heat the imploded fuel plasma. The pulse contrast ratio of the LFEX beam was significantly improved. A variety of plasma diagnostic instruments were also developed to be compatible with the harsh environment of intense hard x-rays (γ rays) and electromagnetic pulses produced by the intense LFEX beam on the target. Large background signals around the DD neutron signal in the time-of-flight record of the neutron detector were found to consist of neutrons from (γ,n) reactions and scattered gamma rays. Enhanced neutron yield was confirmed by carefully eliminating such backgrounds. Neutron enhancement up to 3.5 × 10^7 was observed. The heating efficiency was estimated to be 10-20% assuming a uniform temperature rise model.

  12. Pretest predictions of surface strain and fluid pressures in mercury targets undergoing thermal shock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taleyarkhan, R.P.; Kim, S.H.; Haines, J.

    The authors provide a perspective overview of pretest modeling and analysis work related to thermal shock effects in spallation neutron source targets that were designed for conducting thermal shock experiments at the Los Alamos Neutron Science Center (LANSCE). Data to be derived are to be used for benchmarking computational tools as well as to assess the efficacy of optical gauges for monitoring dynamic fluid pressures and phenomena such as the onset of cavitation.

  13. Constraining Calcium Production in Novae

    NASA Astrophysics Data System (ADS)

    Tiwari, Pranjal; C. Fry, C. Wrede Team; A. Chen, J. Liang Collaboration; S. Bishop, T. Faestermann, D. Seiler Collaboration; R. Hertenberger, H. Wirth Collaboration

    2017-09-01

    Calcium is an element that can be produced by thermonuclear reactions in the hottest classical novae. There are discrepancies between the abundance of calcium observed in novae and expectations based on astrophysical models. Unbound states about 1 MeV above the proton threshold affect the production of calcium in nova models because they act as resonances in the 38K(p,γ)39Ca reaction. This work describes an experiment to measure the energies of the excited states of 39Ca. We will bombard a thin target of 40Ca with a beam of 22 MeV deuterons, producing tritons and 39Ca. We will use the Q3D magnetic spectrograph at the MLL in Garching, Germany, to momentum-analyze the tritons and thereby determine the excitation energies of the resulting 39Ca states. Simulations have been run to determine the optimal spectrograph settings. We decided to use a chemically stable target composed of CaF2; this introduced an extra contaminant, fluorine, which is dealt with by measuring the background from a LiF target. These simulations have led to settings and targets that will result in the observation of the 39Ca states of interest with minimal interference from contaminants. Preliminary results from this experiment will be presented. Natural Sciences and Engineering Research Council of Canada and U.S. National Science Foundation.
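    The spectrograph settings mentioned above follow from two-body kinematics of the 40Ca(d,t)39Ca reaction. The sketch below is not the collaboration's simulation code; it only illustrates the missing-mass reconstruction of the 39Ca excitation energy from a measured triton momentum and laboratory angle, using approximate masses and a hypothetical momentum value.

    ```python
    # Sketch: missing-mass reconstruction for 40Ca(d,t)39Ca* at 22 MeV deuteron energy.
    # Masses are approximate values in MeV/c^2; the triton momentum below is hypothetical.
    import math

    M_D, M_T = 1875.613, 2808.921          # deuteron, triton
    M_CA40, M_CA39 = 37224.9, 36301.0      # 40Ca target, 39Ca ground state (approx.)

    def excitation_energy(t_beam_mev, p_triton_mev, theta_lab_deg):
        """Excitation energy of the unobserved 39Ca recoil (target at rest)."""
        e_d = t_beam_mev + M_D
        p_d = math.sqrt(e_d**2 - M_D**2)
        e_t = math.sqrt(p_triton_mev**2 + M_T**2)
        e_recoil = e_d + M_CA40 - e_t
        p_recoil_sq = (p_d**2 + p_triton_mev**2
                       - 2.0 * p_d * p_triton_mev * math.cos(math.radians(theta_lab_deg)))
        m_recoil = math.sqrt(e_recoil**2 - p_recoil_sq)
        return m_recoil - M_CA39

    # Hypothetical measurement: a 175 MeV/c triton detected at 10 degrees
    print(excitation_energy(t_beam_mev=22.0, p_triton_mev=175.0, theta_lab_deg=10.0))
    ```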

  14. Theoretical quantification of shock-timing sensitivities for direct-drive inertial confinement fusion implosions on OMEGA

    DOE PAGES

    Cao, D.; Boehly, T. R.; Gregor, M. C.; ...

    2018-05-16

    Using temporally shaped laser pulses, multiple shocks can be launched in direct-drive inertial confinement fusion implosion experiments to set the shell on a desired isentrope or adiabat. The velocity of the first shock and the times at which subsequent shocks catch up to it are measured through the VISAR diagnostic on OMEGA. Simulations reproduce these velocity and shock-merger time measurements when using laser pulses designed for setting mid-adiabat (α ~ 3) implosions, but agreement degrades for lower-adiabat (α ~ 1) designs. Several possibilities for this difference are studied: errors in placing the target at the center of irradiation (target offset), variations in energy between the different incident beams (power imbalance), and errors in modeling the laser energy coupled into the capsule. Simulation results indicate that shock timing is most sensitive to details of the density and temperature profiles in the coronal plasma, which influences the laser energy coupled into the target, and only marginally sensitive to target offset and beam power imbalance. A new technique under development to infer coronal profiles using x-ray self-emission imaging can be applied to the pulse shapes used in shock-timing experiments. In conclusion, this will help identify improved physics models to implement in codes and consequently enhance shock-timing predictive capability for low-adiabat pulses.

  15. Distractor Evoked Deviations of Saccade Trajectory Are Modulated by Fixation Activity in the Superior Colliculus: Computational and Behavioral Evidence

    PubMed Central

    Wang, Zhiguo; Theeuwes, Jan

    2014-01-01

    Previous studies have shown that saccades may deviate towards or away from task irrelevant visual distractors. This observation has been attributed to active suppression (inhibition) of the distractor location unfolding over time: early in time inhibition at the distractor location is incomplete causing deviation towards the distractor, while later in time when inhibition is complete the eyes deviate away from the distractor. In a recent computational study, Wang, Kruijne and Theeuwes proposed an alternative theory that the lateral interactions in the superior colliculus (SC), which are characterized by short-distance excitation and long-distance inhibition, are sufficient for generating both deviations towards and away from distractors. In the present study, we performed a meta-analysis of the literature, ran model simulations and conducted two behavioral experiments to further explore this unconventional theory. Confirming predictions generated by the model simulations, the behavioral experiments show that a) saccades deviate towards close distractors and away from remote distractors, and b) the amount of deviation depends on the strength of fixation activity in the SC, which can be manipulated by turning off the fixation stimulus before or after target onset (Experiment 1), or by varying the eccentricity of the target and distractor (Experiment 2). PMID:25551552
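    The lateral-interaction account rests on short-distance excitation and long-distance inhibition between collicular sites. A minimal difference-of-Gaussians sketch of such a connectivity profile is shown below; the gains and widths are illustrative and are not the parameters of the Wang, Kruijne and Theeuwes model.

    ```python
    # Sketch: short-range excitation / long-range inhibition between two collicular sites.
    # A close distractor contributes net excitation (deviation towards it); a remote one
    # contributes net inhibition (deviation away). Gains and widths are illustrative.
    import numpy as np

    def lateral_weight(distance_deg, a_exc=1.0, sigma_exc=1.5, a_inh=0.4, sigma_inh=6.0):
        """Difference-of-Gaussians connection weight as a function of site separation."""
        exc = a_exc * np.exp(-distance_deg**2 / (2.0 * sigma_exc**2))
        inh = a_inh * np.exp(-distance_deg**2 / (2.0 * sigma_inh**2))
        return exc - inh

    for d in (1.0, 3.0, 8.0, 15.0):
        print(f"target-distractor separation {d:4.1f} deg -> weight {lateral_weight(d):+.3f}")
    ```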

  16. Predicting Adolescent Perceptions of the Risks and Benefits of Cigarette Smoking: A Longitudinal Investigation

    PubMed Central

    Morrell, Holly E. R.; Song, Anna V.; Halpern-Felsher, Bonnie L.

    2010-01-01

    Objective To evaluate developmental changes, personal smoking experiences, and vicarious smoking experiences as predictors of adolescents’ perceptions of the risks and benefits of cigarette smoking over time, in order to identify new and effective targets for youth smoking prevention programs. Design 395 adolescents were surveyed every six months for two school years, from the beginning of 9th grade to the end of 10th grade. Main Outcome Measures Time, participant smoking, friend smoking, parental smoking, and sex were evaluated as predictors of smoking-related short-term risk perceptions, long-term risk perceptions, and benefits perceptions using multilevel modeling techniques. Results Perceptions of benefits did not change over time. Perceptions of risk decreased with time, but not after sex and parental smoking were included in the model. Adolescents with personal smoking experience reported decreasing perceptions of risk and increasing perceptions of benefits over time. Adolescents with more than 6 friends who smoked also reported increasing perceptions of benefits over time. Conclusions Changes in risk perceptions may not purely be the result of developmental processes, but may also be influenced by personal and vicarious experience with smoking. Findings highlight the importance of identifying and targeting modifiable factors that may influence perceptions. PMID:20939640
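    The analysis models repeated perception measurements nested within adolescents. A hedged sketch of a comparable multilevel (random-intercept) growth model is given below using statsmodels; the file and column names (subject, wave, smoker, friend_smokers, sex, risk) are hypothetical placeholders, not the study's variables.

    ```python
    # Sketch: multilevel growth model of smoking-risk perceptions over survey waves.
    # The file name and columns (subject, wave, smoker, friend_smokers, sex, risk)
    # are hypothetical placeholders for long-format repeated-measures data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("risk_perceptions.csv")

    # Random intercept per adolescent; time (wave) and smoking experience as fixed
    # effects, including a time-by-personal-smoking interaction.
    model = smf.mixedlm("risk ~ wave * smoker + friend_smokers + sex",
                        data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())
    ```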

  17. An ERP Investigation of Visual Word Recognition in Syllabary Scripts

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2013-01-01

    The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278

  18. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process are impacting our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) if targeted observations hold more positive impact over non-targeted (i.e., randomly chosen) observations; (2) if there are lead-time constraints to targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) if there exist differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection from 2011 to 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. Target locations for surface and radiosonde observations are computed 6, 12, and 18 hours into the forecast based on a chosen scalar forecast response metric (e.g., maximum reflectivity at convection initiation). A variety of experiments are designed to achieve the aforementioned goals and will be presented, along with their results, detailing the feasibility of targeting for mesoscale convection forecasts.
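    ESA estimates the sensitivity of a scalar forecast metric to earlier model states by linear regression across ensemble members, i.e., cov(J, x_i)/var(x_i) at each grid point. The sketch below illustrates that calculation with synthetic arrays; the ensemble size, grid size, and the simple targeting score are illustrative choices, not those of the cited studies.

    ```python
    # Sketch: ensemble sensitivity of a scalar forecast metric J to an earlier state field x.
    # dJ/dx_i is estimated as cov(J, x_i) / var(x_i) across ensemble members.
    import numpy as np

    rng = np.random.default_rng(0)
    n_members, n_gridpoints = 50, 1000               # hypothetical ensemble/grid sizes
    x = rng.normal(size=(n_members, n_gridpoints))   # earlier model state (e.g., T2m)
    J = 2.0 * x[:, 10] - 0.5 * x[:, 200] + rng.normal(scale=0.1, size=n_members)

    x_anom = x - x.mean(axis=0)
    J_anom = J - J.mean()
    cov_Jx = (J_anom[:, None] * x_anom).sum(axis=0) / (n_members - 1)
    var_x = x_anom.var(axis=0, ddof=1)
    sensitivity = cov_Jx / var_x                     # one value per grid point

    # Candidate target locations: large |sensitivity| weighted by ensemble spread
    score = np.abs(sensitivity) * np.sqrt(var_x)
    print(np.argsort(score)[-5:])                    # indices of the top-5 targeting sites
    ```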

  19. An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.

    2017-08-01

    Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
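    Facilitation in the description above is a slow build-up of gain for targets that move along long, continuous trajectories. The sketch below caricatures that mechanism as a leaky integrator multiplying the detector output; the time constant and maximum gain are illustrative and are not the published model's parameters.

    ```python
    # Sketch: facilitation as a slow leaky integrator multiplying detector output.
    # Responses to a target moving on a continuous path build up over time; when the
    # target is absent the facilitation state decays back towards zero.
    import numpy as np

    def facilitated_response(raw_response, dt=0.001, tau=0.4, max_gain=3.0):
        """raw_response: 1-D array of detector output at the tracked location."""
        f = 0.0                                   # facilitation state in [0, 1]
        out = np.empty_like(raw_response)
        for i, r in enumerate(raw_response):
            f += dt / tau * ((r > 0) - f)         # builds when target present, else decays
            out[i] = r * (1.0 + (max_gain - 1.0) * f)
        return out

    t = np.arange(0, 1.0, 0.001)
    raw = np.where(t > 0.2, 1.0, 0.0)             # target appears at 0.2 s, then persists
    print(facilitated_response(raw)[[150, 250, 600, 999]])
    ```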

  20. Comparison of the development of performance skills in ultrasound-guided regional anesthesia simulations with different phantom models.

    PubMed

    Liu, Yang; Glass, Nancy L; Glover, Chris D; Power, Robert W; Watcha, Mehernoor F

    2013-12-01

    Ultrasound-guided regional anesthesia (UGRA) skills are traditionally obtained by supervised performance on patients, but practice on phantom models improves success. Currently available models are expensive or use perishable products, for example, olive-in-chicken breasts (OCB). We constructed 2 inexpensive phantom (transparent and opaque) models with readily available nonperishable products and compared the process of learning UGRA skills by novice practitioners on these models with the OCB model. Three experts first established criteria for a satisfactory completion of the simulated UGRA task in the 3 models. Thirty-six novice trainees (<20 previous UGRA experience) were randomly assigned to perform a UGRA task on 1 of 3 models-the transparent, opaque, and OCB models, where the hyperechoic target was identified, a needle was advanced to it under ultrasound guidance, fluid was injected, and images were saved. We recorded the errors during task completion, number of attempts and needle passes, and the time for target identification and needle placement until the predetermined benchmark of 3 consecutive successful UGRA simulations was accomplished. The number of errors, needle passes, and time for task completion per attempt progressively decreased in all 3 groups. However, failure to identify the target and to visualize the needle on the ultrasound image occurred more frequently with the OCB model. The time to complete simulator training was shortest with the transparent model, owing to shorter target identification times. However, trainees were less likely to agree strongly that this model was realistic for teaching UGRA skills. Training on inexpensive synthetic simulation models with no perishable products permits learning of UGRA skills by novices. The OCB model has disadvantages of containing potentially infective material, requires refrigeration, cannot be used after multiple needle punctures, and is associated with more failures during simulated UGRA. Direct visualization of the target in the transparent model allows the trainee to focus on needle insertion skills, but the opaque model may be more realistic for learning target identification skills required when UGRA is performed on real patients in the operating room.

  1. Cross-orientation suppression in human visual cortex

    PubMed Central

    Heeger, David J.

    2011-01-01

    Cross-orientation suppression was measured in human primary visual cortex (V1) to test the normalization model. Subjects viewed vertical target gratings (of varying contrasts) with or without a superimposed horizontal mask grating (fixed contrast). We used functional magnetic resonance imaging (fMRI) to measure the activity in each of several hypothetical channels (corresponding to subpopulations of neurons) with different orientation tunings and fit these orientation-selective responses with the normalization model. For the V1 channel maximally tuned to the target orientation, responses increased with target contrast but were suppressed when the horizontal mask was added, evident as a shift in the contrast gain of this channel's responses. For the channel maximally tuned to the mask orientation, a constant baseline response was evoked for all target contrasts when the mask was absent; responses decreased with increasing target contrast when the mask was present. The normalization model provided a good fit to the contrast-response functions with and without the mask. In a control experiment, the target and mask presentations were temporally interleaved, and we found no shift in contrast gain, i.e., no evidence for suppression. We conclude that the normalization model can explain cross-orientation suppression in human visual cortex. The approach adopted here can be applied broadly to infer, simultaneously, the responses of several subpopulations of neurons in the human brain that span particular stimulus or feature spaces, and characterize their interactions. In addition, it allows us to investigate how stimuli are represented by the inferred activity of entire neural populations. PMID:21775720
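    In the normalization model, the target channel's response is its own drive divided by the pooled drive of all channels plus a constant. A minimal sketch of the resulting contrast-response functions, with and without a mask, is given below; the exponent, semi-saturation constant, and contrast values are illustrative.

    ```python
    # Sketch: normalization-model contrast-response functions with and without a mask.
    # R = Rmax * c_t^n / (c_t^n + c_m^n + sigma^n); parameter values are illustrative.
    import numpy as np

    def response(c_target, c_mask=0.0, r_max=1.0, n=2.0, sigma=0.15):
        drive = c_target**n
        return r_max * drive / (drive + c_mask**n + sigma**n)

    contrasts = np.array([0.03, 0.06, 0.12, 0.25, 0.5, 1.0])
    print("no mask :", np.round(response(contrasts), 3))
    print("50% mask:", np.round(response(contrasts, c_mask=0.5), 3))
    # The mask shifts the contrast-response function rightward (a contrast-gain change),
    # which is the signature of cross-orientation suppression described above.
    ```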

  2. Aging and the rate of visual information processing.

    PubMed

    Guest, Duncan; Howard, Christina J; Brown, Louise A; Gleeson, Harriet

    2015-01-01

    Multiple methods exist for measuring how age influences the rate of visual information processing. The most advanced methods model the processing dynamics in a task in order to estimate processing rates independently of other factors that might be influenced by age, such as overall performance level and the time at which processing onsets. However, such modeling techniques have produced mixed evidence for age effects. Using a time-accuracy function (TAF) analysis, Kliegl, Mayr, and Krampe (1994) showed clear evidence for age effects on processing rate. In contrast, using the diffusion model to examine the dynamics of decision processes, Ratcliff and colleagues (e.g., Ratcliff, Thapar, & McKoon, 2006) found no evidence for age effects on processing rate across a range of tasks. Examination of these studies suggests that the number of display stimuli might account for the different findings. In three experiments we measured the precision of younger and older adults' representations of target stimuli after different amounts of stimulus exposure. A TAF analysis found little evidence for age differences in processing rate when a single stimulus was presented (Experiment 1). However, adding three nontargets to the display resulted in age-related slowing of processing (Experiment 2). Similar slowing was observed when simply presenting two stimuli and using a post-cue to indicate the target (Experiment 3). Although there was some interference from distracting objects and from previous responses, these age-related effects on processing rate seem to reflect an age-related difficulty in processing multiple objects, particularly when encoding them into visual working memory.
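    A time-accuracy function separates processing rate from onset and asymptotic accuracy by fitting a saturating function of exposure duration. The sketch below fits the commonly used exponential form to synthetic data; the exposure durations and accuracy values are placeholders, not the data of the studies discussed.

    ```python
    # Sketch: fitting a time-accuracy function  acc(t) = asym * (1 - exp(-(t - onset)/rate))
    # so that processing "rate" is estimated separately from onset and asymptote.
    import numpy as np
    from scipy.optimize import curve_fit

    def taf(t, asymptote, onset, rate):
        return asymptote * (1.0 - np.exp(-np.clip(t - onset, 0.0, None) / rate))

    exposure = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])          # seconds (synthetic)
    accuracy_young = np.array([0.05, 0.28, 0.55, 0.78, 0.90, 0.93])
    accuracy_older = np.array([0.02, 0.15, 0.35, 0.60, 0.82, 0.91])

    for label, acc in [("young", accuracy_young), ("older", accuracy_older)]:
        params, _ = curve_fit(taf, exposure, acc, p0=[0.95, 0.02, 0.3])
        print(label, "asymptote=%.2f onset=%.3f rate=%.3f" % tuple(params))
    ```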

  3. Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.

    PubMed

    Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu

    2017-01-01

    Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), other features are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts from extracting local features in the lower visual pathways followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which was different from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., white ring with a squared hole). These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide behavioral constraints for identifying the underlying neural substrates.

  4. Magnetohydrodynamic simulation study of plasma jets and plasma-surface contact in coaxial plasma accelerators

    DOE PAGES

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-06-13

    Recent experiments by Loebner et al. [IEEE Trans. Plasma Sci. 44, 1534 (2016)] studied the effect of a hypervelocity jet emanating from a coaxial plasma accelerator incident on target surfaces in an effort to mimic the transient loading created during edge localized mode disruption events in fusion plasmas. In this study, we present a magnetohydrodynamic (MHD) numerical model to simulate plasma jet formation and plasma-surface contact in this coaxial plasma accelerator experiment. The MHD system of equations is spatially discretized using a cell-centered finite volume formulation. The temporal discretization is performed using a fully implicit backward Euler scheme and the resultant stiff system of nonlinear equations is solved using the Newton method. The numerical model is employed to obtain some key insights into the physical processes responsible for the generation of extreme stagnation conditions on the target surfaces. Simulations of the plume (without the target plate) are performed to isolate and study phenomena such as the magnetic pinch effect that is responsible for launching pressure pulses into the jet free stream. The simulations also yield insights into the incipient conditions responsible for producing the pinch, such as the formation of conductive channels. The jet-target impact studies indicate the existence of two distinct stages involved in the plasma-surface interaction. A fast transient stage characterized by a thin normal shock transitions into a pseudo-steady stage that exhibits an extended oblique shock structure. A quadratic scaling of the pinch and stagnation conditions with the total current discharged between the electrodes is in qualitative agreement with the results obtained in the experiments. Finally, this also illustrates the dominant contribution of the magnetic pressure term in determining the magnitude of the quantities of interest.
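    The time integration described above, a fully implicit backward Euler step whose nonlinear update is solved with Newton's method, follows a generic pattern. The sketch below shows that pattern for a small stiff system; the right-hand side and Jacobian are stand-ins, not the MHD equations of the paper.

    ```python
    # Sketch: backward Euler with Newton iteration for a stiff system du/dt = f(u).
    # f and its Jacobian here are toy stand-ins, not the MHD equations of the record above.
    import numpy as np

    def f(u):                                  # stiff toy right-hand side
        return np.array([-1000.0 * u[0] + u[1], u[0] - 2.0 * u[1]])

    def jac(u):                                # analytic Jacobian of f
        return np.array([[-1000.0, 1.0], [1.0, -2.0]])

    def backward_euler_step(u_old, dt, tol=1e-10, max_iter=20):
        """Solve u_new - u_old - dt*f(u_new) = 0 with Newton's method."""
        u = u_old.copy()
        for _ in range(max_iter):
            residual = u - u_old - dt * f(u)
            if np.linalg.norm(residual) < tol:
                break
            J = np.eye(len(u)) - dt * jac(u)   # Jacobian of the residual
            u -= np.linalg.solve(J, residual)
        return u

    u = np.array([1.0, 0.0])
    for _ in range(10):                        # large steps remain stable (implicit scheme)
        u = backward_euler_step(u, dt=0.1)
    print(u)
    ```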

  5. Magnetohydrodynamic simulation study of plasma jets and plasma-surface contact in coaxial plasma accelerators

    NASA Astrophysics Data System (ADS)

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-06-01

    Recent experiments by Loebner et al. [IEEE Trans. Plasma Sci. 44, 1534 (2016)] studied the effect of a hypervelocity jet emanating from a coaxial plasma accelerator incident on target surfaces in an effort to mimic the transient loading created during edge localized mode disruption events in fusion plasmas. In this paper, we present a magnetohydrodynamic (MHD) numerical model to simulate plasma jet formation and plasma-surface contact in this coaxial plasma accelerator experiment. The MHD system of equations is spatially discretized using a cell-centered finite volume formulation. The temporal discretization is performed using a fully implicit backward Euler scheme and the resultant stiff system of nonlinear equations is solved using the Newton method. The numerical model is employed to obtain some key insights into the physical processes responsible for the generation of extreme stagnation conditions on the target surfaces. Simulations of the plume (without the target plate) are performed to isolate and study phenomena such as the magnetic pinch effect that is responsible for launching pressure pulses into the jet free stream. The simulations also yield insights into the incipient conditions responsible for producing the pinch, such as the formation of conductive channels. The jet-target impact studies indicate the existence of two distinct stages involved in the plasma-surface interaction. A fast transient stage characterized by a thin normal shock transitions into a pseudo-steady stage that exhibits an extended oblique shock structure. A quadratic scaling of the pinch and stagnation conditions with the total current discharged between the electrodes is in qualitative agreement with the results obtained in the experiments. This also illustrates the dominant contribution of the magnetic pressure term in determining the magnitude of the quantities of interest.

  6. Magnetohydrodynamic simulation study of plasma jets and plasma-surface contact in coaxial plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subramaniam, Vivek; Raja, Laxminarayan L.

    Recent experiments by Loebner et al. [IEEE Trans. Plasma Sci. 44, 1534 (2016)] studied the effect of a hypervelocity jet emanating from a coaxial plasma accelerator incident on target surfaces in an effort to mimic the transient loading created during edge localized mode disruption events in fusion plasmas. In this study, we present a magnetohydrodynamic (MHD) numerical model to simulate plasma jet formation and plasma-surface contact in this coaxial plasma accelerator experiment. The MHD system of equations is spatially discretized using a cell-centered finite volume formulation. The temporal discretization is performed using a fully implicit backward Euler scheme and the resultant stiff system of nonlinear equations is solved using the Newton method. The numerical model is employed to obtain some key insights into the physical processes responsible for the generation of extreme stagnation conditions on the target surfaces. Simulations of the plume (without the target plate) are performed to isolate and study phenomena such as the magnetic pinch effect that is responsible for launching pressure pulses into the jet free stream. The simulations also yield insights into the incipient conditions responsible for producing the pinch, such as the formation of conductive channels. The jet-target impact studies indicate the existence of two distinct stages involved in the plasma-surface interaction. A fast transient stage characterized by a thin normal shock transitions into a pseudo-steady stage that exhibits an extended oblique shock structure. A quadratic scaling of the pinch and stagnation conditions with the total current discharged between the electrodes is in qualitative agreement with the results obtained in the experiments. Finally, this also illustrates the dominant contribution of the magnetic pressure term in determining the magnitude of the quantities of interest.

  7. Translating Research into a Seamless Transition Model

    ERIC Educational Resources Information Center

    Luecking, Debra Martin; Luecking, Richard G.

    2015-01-01

    Recently, consensus among researchers and professionals has emerged about factors that contribute to postschool success of youth with disabilities. Prominent among these factors are targeted academic preparation, family involvement, youth empowerment, and service collaboration and linkages. Work experience and paid employment have been identified…

  8. A Computational Model for Aperture Control in Reach-to-Grasp Movement Based on Predictive Variability

    PubMed Central

    Takemura, Naohiro; Fukui, Takao; Inui, Toshio

    2015-01-01

    In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
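    In the model summarized above, the planned aperture is the estimated object size plus a safety margin that grows with the predicted motor variability and the sensory uncertainty. The sketch below illustrates that rule; the variance values and the scaling factor are illustrative, not the fitted parameters of the paper.

    ```python
    # Sketch: grip aperture as estimated object size plus an uncertainty-scaled margin.
    # aperture = size_estimate + k * sqrt(var_motor_predicted + var_sensory)
    # Occlusion is represented as inflated sensory variance; all numbers are illustrative.
    import math

    def planned_aperture(size_estimate_mm, var_motor_mm2, var_sensory_mm2, k=2.0):
        margin = k * math.sqrt(var_motor_mm2 + var_sensory_mm2)
        return size_estimate_mm + margin

    for label, var_sensory in [("online vision", 4.0), ("occluded target", 25.0)]:
        print(label, "->", round(planned_aperture(60.0, var_motor_mm2=9.0,
                                                  var_sensory_mm2=var_sensory), 1), "mm")
    ```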

  9. N-3 fatty acids and membrane microdomains: from model membranes to lymphocyte function.

    PubMed

    Shaikh, Saame Raza; Teague, Heather

    2012-12-01

    This article summarizes the author's research on fish oil derived n-3 fatty acids, plasma membrane organization and B cell function. We first cover basic model membrane studies that investigated how docosahexaenoic acid (DHA) targeted the organization of sphingolipid-cholesterol enriched lipid microdomains. A key finding here was that DHA had a relatively poor affinity for cholesterol. This work led to a model that predicted DHA acyl chains in cells would manipulate lipid-protein microdomain organization and thereby function. We then review how the predictions of the model were tested with B cells in vitro followed by experiments using mice fed fish oil. These studies reveal a highly complex picture on how n-3 fatty acids target lipid-protein organization and B cell function. Key findings are as follows: (1) n-3 fatty acids target not just the plasma membrane but also endomembrane organization; (2) DHA, but not eicosapentaenoic acid (EPA), disrupts microdomain spatial distribution (i.e. clustering), (3) DHA alters protein lateral organization and (4) changes in membrane organization are accompanied by functional effects on both innate and adaptive B cell function. Altogether, the research over the past 10 years has led to an evolution of the original model on how DHA reorganizes membrane microdomains. The work raises the intriguing possibility of testing the model at the human level to target health and disease. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Can healthy, young adults uncover personal details of unknown target individuals in their dreams?

    PubMed

    Smith, Carlyle

    2013-01-01

    We investigated the possibility that undergraduate college students could incubate dreams containing information about unknown target individuals with significant life problems. In Experiment 1, students provided two baseline dreams. They were then exposed to a photo of an individual and invited to dream about a health problem (unknown to them and the experimenter) of that individual and asked to provide two more dreams. From a class of 65 students, 12 dreamers volunteered dreams about the unknown target. In Experiment 2, 66 students were asked to dream about the life problems of a second individual, simply by looking at the photo (experimental group). Another 56 students were exposed to this same paradigm, but the photo that they examined was computer generated and the target individual was fictitious (control group). The dream elements were objectively scored with categories devised using the Hall-Van de Castle system as a model. Data were ordinal, and the nonparametric Wilcoxon signed rank test was used to examine preincubation (baseline) versus postincubation (photo examination and incubation) dream content in Experiment 1. In Experiment 2, a Z score for proportions was used to compare differences in frequency of devised categories between experimental and control groups. In Experiment 1, the comparison of postincubation dreams (all categories combined) was significant compared with the preincubation dreams (Z = 2.09, P = .036). The postincubation dreams reflected the health problem of the target. In Experiment 2, the proportions of scored categories in the experimental and control groups were compared at the preincubation and postincubation conditions. The proportion of "Combined" (all categories) was very significantly larger at the postincubation condition (Z = 6.27, P < .00001). The groups did not differ at the preincubation condition (Z = -1.12, not significant). Individual postincubation condition comparisons of the experimental versus control groups revealed significant differences in three of the devised scoring categories, ranging from P < .002 to P < .05. The postincubation dreams of the experimental group were related to the problems of the target individual. Young, healthy adults are capable of dreaming details about the personal problems of an unknown individual simply by examining a picture of the target and then planning to dream about that individual's problems. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Large Area Solid Radiochemistry (LASR) collector at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Waltz, Cory; Gharibyan, Narek; Hardy, Mike; Shaughnessy, Dawn; Jedlovec, Don; Smith, Cal

    2017-08-01

    The flux of neutrons and charged particles produced from inertial confinement fusion experiments at the National Ignition Facility (NIF) induces measurable concentrations of nuclear reaction products in various target materials. The collection and radiochemical analysis of the post-shot debris can be utilized as an implosion diagnostic to obtain information regarding fuel areal density and ablator-fuel mixing. Furthermore, assessment of the debris from specially designed targets, material doped in capsules or mounted on the external surface of the target assembly, can support experiments relevant to nuclear forensic research. To collect the shot debris, we have deployed the Large Area Solid Radiochemistry Collector (LASR) at NIF. LASR uses a main collector plate that contains a large collection foil with an exposed 20 cm diameter surface located ~50 cm from the NIF target. This covers ~0.12 steradians, or about 1% of the total solid angle. We will describe the design, analysis, and operation of this experimental platform as well as the initial results. To speed up the design process, 3-dimensional printing was utilized. Design analysis includes the dynamic loading of the NIF target vaporized mass, which was modeled using LS-DYNA.
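    The quoted coverage of ~0.12 sr (about 1% of 4π) follows directly from the foil radius and standoff distance. A quick check, treating the NIF target as a point source on the axis of a flat circular foil:

    ```python
    # Sketch: solid angle of the 20 cm diameter collection foil at ~50 cm standoff,
    # treating the NIF target as a point source on the foil axis.
    import math

    radius_cm, distance_cm = 10.0, 50.0
    half_angle = math.atan(radius_cm / distance_cm)
    omega = 2.0 * math.pi * (1.0 - math.cos(half_angle))   # steradians
    print(f"solid angle = {omega:.3f} sr "
          f"({100.0 * omega / (4.0 * math.pi):.2f}% of 4*pi)")
    ```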

  12. Plasma and Shock Generation by Indirect Laser Pulse Action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasperczuk, A.; Borodziuk, S.; Pisarczyk, T.

    2006-01-15

    In this paper the results of our experiment with flyer disks, accelerated to high velocities by the PALS iodine laser and subsequently creating craters when hitting massive targets, are presented. We have carried out experiments with double targets consisting of a disk placed in front of a massive target part at distances of either 200 or 500 μm. Both elements of the targets were made of Al. The following disk irradiation conditions were used: laser energy of 130 J, laser wavelength of 1.315 μm, pulse duration of 0.4 ns, and laser spot diameter of 250 μm. To measure some plasma parameters and the accelerated disk velocity, a three-frame interferometric system was used. The efficiency of crater creation by a disk impact was determined from the crater parameters, which were obtained by means of a crater replica technique. The experimental results concern two main stages: (a) ablative plasma generation and disk acceleration and (b) disk impact and crater creation. Spatial density distributions at different moments of plasma generation and expansion are shown. A discussion of the experimental results on the basis of a 2-D theoretical model of the laser-solid target interaction is carried out.

  13. Evidence for simultaneous syntactic processing of multiple words during reading.

    PubMed

    Snell, Joshua; Meeter, Martijn; Grainger, Jonathan

    2017-01-01

    A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were flanked by parafoveal words. In Experiment 1 we found a syntactic parafoveal preview benefit in sentence reading, meaning that fixation durations on target words were decreased when there was a syntactically congruent preview word at the target location (n) during the fixation on the pre-target (n-1). In Experiment 2 we used a flanker paradigm in which participants had to classify foveal target words as either noun or verb, when those targets were flanked by syntactically congruent or incongruent words (stimulus on-time 170 ms). Lower response times and error rates in the congruent condition suggested that higher-order (syntactic) information can be integrated across foveal and parafoveal words. Although higher-order parafoveal-on-foveal effects have been elusive in sentence reading, results from our flanker paradigm show that the reading system can extract higher-order information from multiple words in a single glance. We propose a model of reading to account for the present findings.

  14. A combined theoretical and in vitro modeling approach for predicting the magnetic capture and retention of magnetic nanoparticles in vivo

    PubMed Central

    David, Allan E.; Cole, Adam J.; Chertok, Beata; Park, Yoon Shin; Yang, Victor C.

    2011-01-01

    Magnetic nanoparticles (MNP) continue to draw considerable attention as potential diagnostic and therapeutic tools in the fight against cancer. Although many interacting forces present themselves during magnetic targeting of MNP to tumors, most theoretical considerations of this process ignore all except for the magnetic and drag forces. Our validation of a simple in vitro model against in vivo data, and subsequent reproduction of the in vitro results with a theoretical model indicated that these two forces do indeed dominate the magnetic capture of MNP. However, because nanoparticles can be subject to aggregation, and large MNP experience an increased magnetic force, the effects of surface forces on MNP stability cannot be ignored. We accounted for the aggregating surface forces simply by measuring the size of MNP retained from flow by magnetic fields, and utilized this size in the mathematical model. This presumably accounted for all particle-particle interactions, including those between magnetic dipoles. Thus, our “corrected” mathematical model provided a reasonable estimate of not only fractional MNP retention, but also predicted the regions of accumulation in a simulated capillary. Furthermore, the model was also utilized to calculate the effects of MNP size and spatial location, relative to the magnet, on targeting of MNPs to tumors. This combination of an in vitro model with a theoretical model could potentially assist with parametric evaluations of magnetic targeting, and enable rapid enhancement and optimization of magnetic targeting methodologies. PMID:21295085
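    The dominant-force argument above balances the magnetic force on a particle (or aggregate) against Stokes drag. The sketch below is the usual first-order magnetophoresis estimate rather than the authors' full model; the susceptibility, field-gradient term, viscosity, and particle sizes are illustrative values.

    ```python
    # Sketch: magnetophoretic drift velocity from a magnetic force / Stokes drag balance.
    # v = F_mag / (6*pi*eta*r); parameter values are illustrative only.
    import math

    MU0 = 4.0e-7 * math.pi                # vacuum permeability (T m / A)

    def drift_velocity(radius_m, chi, grad_B2_T2_per_m, eta_pa_s=1e-3):
        """Drift velocity of a weakly magnetic sphere in a field gradient.
        F_mag = (V * chi / mu0) * grad(B^2 / 2)."""
        volume = 4.0 / 3.0 * math.pi * radius_m**3
        f_mag = volume * chi / MU0 * 0.5 * grad_B2_T2_per_m
        return f_mag / (6.0 * math.pi * eta_pa_s * radius_m)

    # A single ~100 nm particle versus a ~1 um aggregate in the same field gradient:
    # the drift velocity grows as r**2, which is why aggregation aids capture.
    for r in (50e-9, 500e-9):
        v = drift_velocity(r, chi=1.0, grad_B2_T2_per_m=50.0)
        print(f"r = {r*1e9:6.0f} nm -> v = {v:.2e} m/s")
    ```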

  15. The Hohlraum Drive Campaign on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Moody, John D.

    2013-10-01

    The Hohlraum drive effort on the National Ignition Facility (NIF) laser has three primary goals: 1) improve hohlraum performance by improving laser beam propagation, reducing backscatter from laser plasma interactions (LPI), controlling x-ray and electron preheat, and modifying the x-ray drive spectrum; 2) improve understanding of crossbeam energy transfer physics to better evaluate this as a symmetry tuning method; and 3) improve modeling in order to find optimum designs. Our experimental strategy for improving performance explores the impact of significant changes to the hohlraum shape, wall material, gasfill composition, and gasfill density on integrated implosion experiments. We are investigating the performance of a rugby-shaped design that has a significantly larger diameter (7 mm) at the waist than our standard 5.75 mm diameter cylindrical-shaped hohlraum but maintains approximately the same wall area. We are also exploring changes to the gasfill composition in cylindrical hohlraums by using neopentane at room temperature to compare with our standard helium gasfill. In addition, we are investigating higher He gasfill density (1.6 mg/cc vs nominal 0.96 mg/cc) and increased x-ray drive very early in the pulse. Besides these integrated experiments, our strategy includes experiments testing separate aspects of the hohlraum physics. These include time-resolved and time-integrated measurements of cross-beam transfer rates and laser-beam spatial power distribution at early and late times using modified targets. Non-local thermal equilibrium modeling and heat transport relevant to ignition experiments are being studied using sphere targets on the Omega laser system. These simpler targets provide benchmarks for improving our modeling tools. This talk will summarize the results of the Hohlraum Drive campaign and discuss future directions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Identification of Direct Target Genes Using Joint Sequence and Expression Likelihood with Application to DAF-16

    PubMed Central

    Yu, Ron X.; Liu, Jie; True, Nick; Wang, Wei

    2008-01-01

    A major challenge in the post-genome era is to reconstruct regulatory networks from the biological knowledge accumulated up to date. The development of tools for identifying direct target genes of transcription factors (TFs) is critical to this endeavor. Given a set of microarray experiments, a probabilistic model called TRANSMODIS has been developed which can infer the direct targets of a TF by integrating sequence motif, gene expression and ChIP-chip data. The performance of TRANSMODIS was first validated on a set of transcription factor perturbation experiments (TFPEs) involving Pho4p, a well studied TF in Saccharomyces cerevisiae. TRANSMODIS removed elements of arbitrariness in manual target gene selection process and produced results that concur with one's intuition. TRANSMODIS was further validated on a genome-wide scale by comparing it with two other methods in Saccharomyces cerevisiae. The usefulness of TRANSMODIS was then demonstrated by applying it to the identification of direct targets of DAF-16, a critical TF regulating ageing in Caenorhabditis elegans. We found that 189 genes were tightly regulated by DAF-16. In addition, DAF-16 has differential preference for motifs when acting as an activator or repressor, which awaits experimental verification. TRANSMODIS is computationally efficient and robust, making it a useful probabilistic framework for finding immediate targets. PMID:18350157

  17. Performance and strategy comparisons of human listeners and logistic regression in discriminating underwater targets.

    PubMed

    Yang, Lixue; Chen, Kean

    2015-11-01

    To improve the design of underwater target recognition systems based on auditory perception, this study compared human listeners with automatic classifiers. Performance measures and strategies in three discrimination experiments, including discriminations between man-made and natural targets, between ships and submarines, and among three types of ships, were used. In the experiments, the subjects were asked to assign a score to each sound based on how confident they were about the category to which it belonged, and logistic regression, which represents linear discriminative models, also completed three similar tasks by utilizing many auditory features. The results indicated that the performance of logistic regression improved as the ratio between inter- and intra-class differences became larger, whereas the performance of the human subjects was limited by their unfamiliarity with the targets. Logistic regression performed better than the human subjects in all tasks but the discrimination between man-made and natural targets, and the strategies employed by excellent human subjects were similar to that of logistic regression. Logistic regression and several human subjects demonstrated similar performances when discriminating man-made and natural targets, but in this case, their strategies were not similar. An appropriate fusion of their strategies led to further improvement in recognition accuracy.
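    A hedged sketch of the machine-classifier side of such a comparison is given below: a logistic-regression discriminator with cross-validation over a feature matrix. The feature matrix and labels are synthetic placeholders, not the study's recordings or auditory features.

    ```python
    # Sketch: logistic-regression discrimination of target classes from auditory features.
    # X (features such as spectral centroid, loudness, modulation depth) and y are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 12))               # 200 recordings x 12 auditory features
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=200) > 0).astype(int)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
    ```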

  18. Trade-offs of Solar Geoengineering and Mitigation under Climate Targets

    NASA Astrophysics Data System (ADS)

    Mohammadi Khabbazan, M.; Stankoweit, M.; Roshan, E.; Schmidt, H.; Held, H.

    2016-12-01

    Scientific analyses have hitherto focused on the pros and cons of solar-radiation management (SRM) as a climate-policy option mainly in isolation. Here we put SRM into the context of mitigation by a strictly temperature-target-based approach. To the best of our knowledge, for the first time, we introduce a concept for a regional integrated analysis of SRM and mitigation in line with the '2°C target'. We explicitly account for a risk-risk comparison of SRM and global warming, extending the applicability regime of temperature targets from mitigation-only to joint SRM-mitigation analysis while minimizing the economic costs required for complying with the 2°C target. We employ the integrated energy-economy-climate model MIND, upgraded to include SRM. We utilize the two-box climate model of DICE and calibrate its short and long time scales to the GeoMIP G3 experiment and the quadrupled atmospheric CO2 concentration experiment from the CMIP5 suite, respectively. Our results show that without risk-risk accounting SRM will displace mitigation. However, our analysis highlights that the value system enshrined in the 2°C target can almost preclude SRM; this is exemplified by a single regional climate variable, here precipitation, which is confined to regional bounds compatible with 2°C of global warming. Although about half of the policy costs can be saved, the results indicate that the additional amount of CO2 that could be released to the atmosphere corresponds to only 0.2°C of further global warming. Hence, society might debate whether the risks of SRM should be taken for that rather small amount of additional carbon emissions. Nonetheless, our results point to a significantly larger role for SRM implementation if the guardrails of some regions are relaxed.
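    The climate module referred to above is a two-box (surface plus deep-ocean) temperature model of the DICE type, whose fast and slow response time scales are calibrated against the GeoMIP G3 and quadrupled-CO2 experiments. The sketch below shows a generic two-box update of that kind; the coefficients are illustrative placeholders, not the study's calibration.

    ```python
    # Sketch: generic two-box (atmosphere/upper ocean + deep ocean) temperature model.
    # Coefficient values are illustrative placeholders, not the study's calibration.
    def two_box_step(T_up, T_lo, forcing_wm2,
                     c1=0.1, lam=1.2, c3=0.09, c4=0.025):
        """One annual update of upper-layer and deep-ocean temperature anomalies (K)."""
        T_up_next = T_up + c1 * (forcing_wm2 - lam * T_up - c3 * (T_up - T_lo))
        T_lo_next = T_lo + c4 * (T_up - T_lo)
        return T_up_next, T_lo_next

    T_up, T_lo = 0.0, 0.0
    for year in range(200):                      # constant forcing of roughly doubled CO2
        T_up, T_lo = two_box_step(T_up, T_lo, forcing_wm2=3.7)
    print(round(T_up, 2), round(T_lo, 2))        # approaches ~3.7/1.2 ~ 3.1 K at equilibrium
    ```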

  19. Increasing the persistence of a heterogeneous behavior chain: Studies of extinction in a rat model of search behavior of working dogs.

    PubMed

    Thrailkill, Eric A; Kacelnik, Alex; Porritt, Fay; Bouton, Mark E

    2016-08-01

    Dogs trained to search for contraband perform a chain of behavior in which they first search for a target and then make a separate response that indicates to the trainer that they have found one. The dogs often conduct multiple searches without encountering a target and receiving the reinforcer (i.e., no contraband is present). Understanding extinction (i.e., the decline in work rate when reinforcers are no longer encountered) may assist in training dogs to work in conditions where targets are rare. We therefore trained rats on a search-target behavior chain modeled on the search behavior of working dogs. A discriminative stimulus signaled that a search response (e.g., chain pull) led to a second stimulus that set the occasion for a target response (e.g., lever press) that was reinforced by a food pellet. In Experiment 1 training with longer search durations and intermittent (partial) reinforcement of searching (i.e. some trials had no target present) both led to more persistent search responding in extinction. The loss of search behavior in extinction was primarily dependent on the number of non-reinforced searches rather than time searching without reinforcement. In Experiments 2 and 3, delivery of non-contingent reinforcers during extinction increased search persistence provided they had also been presented during training. Thus, results with rats suggest that the persistence of working dog performance (or chained behavior generally) may be improved by training with partial reinforcement of searching and non-contingent reinforcement during both training and work (extinction). Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).

    PubMed

    Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M

    2007-02-21

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq^-1 s^-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
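    The S-factor tabulation follows the MIRD bookkeeping: energy emitted per decay, weighted by the absorbed fraction, divided by the target-region mass. The sketch below illustrates that sum; the yields, energies, absorbed fractions, and organ mass are hypothetical placeholders, not values from the voxel mouse.

    ```python
    # Sketch: MIRD-style S-factor (Gy per Bq*s) from absorbed fractions and emission data.
    # S(target <- source) = sum_i [ y_i * E_i * AF_i(target <- source) ] / m_target
    # Yields, energies, absorbed fractions, and mass below are illustrative placeholders.
    MEV_TO_J = 1.602e-13

    def s_factor(emissions, absorbed_fractions, target_mass_kg):
        """emissions: list of (yield per decay, mean energy in MeV) per radiation type."""
        energy_j = sum(y * e * MEV_TO_J * af
                       for (y, e), af in zip(emissions, absorbed_fractions))
        return energy_j / target_mass_kg          # Gy Bq^-1 s^-1

    # Hypothetical beta + gamma emitter, liver-to-liver in a ~30 g mouse
    emissions = [(1.0, 0.19), (0.8, 0.36)]        # (yield, mean MeV): one beta, one photon
    absorbed_fractions = [0.95, 0.05]             # self-absorbed fractions (placeholders)
    print(f"{s_factor(emissions, absorbed_fractions, target_mass_kg=1.5e-3):.2e} Gy/(Bq s)")
    ```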

  1. Search for α-Cluster Structure in Exotic Nuclei with the Prototype Active-Target Time-Projection Chamber

    NASA Astrophysics Data System (ADS)

    Fritsch, A.; Ayyad, Y.; Bazin, D.; Beceiro-Novo, S.; Bradt, J.; Carpenter, L.; Cortesi, M.; Mittig, W.; Suzuki, D.; Ahn, T.; Kolata, J. J.; Becchetti, F. D.; Howard, A. M.

    2016-03-01

    Some exotic nuclei appear to exhibit α-cluster structure. While various theoretical models currently describe such clustering, more experimental data are needed to constrain model predictions. The Prototype Active-Target Time-Projection Chamber (PAT-TPC) has low-energy thresholds for charged-particle decay and a high luminosity due to its thick gaseous active target volume, making it well-suited to search for low-energy α-cluster reactions. Radioactive-ion beams produced by the TwinSol facility at the University of Notre Dame were delivered to the PAT-TPC to study nuclei including 14C and 14O via α-resonant scattering. Differential cross sections and excitation functions were measured. Preliminary results from our recent experiments will be presented. This work is supported by the U.S. National Science Foundation.

  2. An Impact Ejecta Behavior Model for Small, Irregular Bodies

    NASA Technical Reports Server (NTRS)

    Richardson, J. E.; Melosh, H. J.; Greenberg, R.

    2003-01-01

    In recent years, spacecraft observations of asteroids 951 Gaspra, 243 Ida, 253 Mathilde, and 433 Eros have shown the overriding dominance of impact processes with regard to the structure and surface morphology of these small, irregular bodies. In particular, impact ejecta play an important role in regolith formation, ranging from small particles to large blocks, as well as surface feature modification and obscuration. To investigate these processes, a numerical model has been developed based upon the impact ejecta scaling laws provided by Housen, Schmidt, and Holsapple, and modified to more properly simulate the late-stage ejection velocities and ejecta plume shape changes (ejection angle variations) shown in impact cratering experiments. A target strength parameter has also been added to allow the simulation of strength-dominated cratering events in addition to the more familiar gravity-dominated cratering events. The result is a dynamical simulation which models, via tracer particles, the ejecta plume behavior, ejecta blanket placement, and impact crater area resulting from a specified impact on an irregularly shaped target body, which is modeled in 3-dimensional polygon fashion. This target body can be placed in a simple rotation state about one of its principal axes, with the impact site and projectile/target parameters selected by the user. The gravitational force from the irregular target body (on each tracer particle) is determined using the polygonized surface (polyhedron) gravity technique developed by Werner.

  3. A Biophysical Model of CRISPR/Cas9 Activity for Rational Design of Genome Editing and Gene Regulation

    PubMed Central

    Farasat, Iman; Salis, Howard M.

    2016-01-01

    The ability to precisely modify genomes and regulate specific genes will greatly accelerate several medical and engineering applications. The CRISPR/Cas9 (Type II) system binds and cuts DNA using guide RNAs, though the variables that control its on-target and off-target activity remain poorly characterized. Here, we develop and parameterize a system-wide biophysical model of Cas9-based genome editing and gene regulation to predict how changing guide RNA sequences, DNA superhelical densities, Cas9 and crRNA expression levels, organisms and growth conditions, and experimental conditions collectively control the dynamics of dCas9-based binding and Cas9-based cleavage at all DNA sites with both canonical and non-canonical PAMs. We combine statistical thermodynamics and kinetics to model Cas9:crRNA complex formation, diffusion, site selection, reversible R-loop formation, and cleavage, using large amounts of structural, biochemical, expression, and next-generation sequencing data to determine kinetic parameters and develop free energy models. Our results identify DNA supercoiling as a novel mechanism controlling Cas9 binding. Using the model, we predict Cas9 off-target binding frequencies across the lambda phage and human genomes, and explain why Cas9’s off-target activity can be so high. With this improved understanding, we propose several rules for designing experiments for minimizing off-target activity. We also discuss the implications for engineering dCas9-based genetic circuits. PMID:26824432
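
    As a loose illustration of the statistical-thermodynamic ingredient described above, the sketch below computes Boltzmann-weighted occupancies over a handful of candidate binding sites from hypothetical free energies; the ΔG values, temperature factor, and site list are placeholders, not the authors' parameterization.

        # Minimal sketch of Boltzmann-weighted site selection (hypothetical free energies).
        # Occupancy of site j is exp(-dG_j / RT) / sum_k exp(-dG_k / RT).
        import math

        RT = 0.593  # kcal/mol at roughly 298 K

        def site_occupancies(delta_g_kcal):
            weights = [math.exp(-dg / RT) for dg in delta_g_kcal]
            z = sum(weights)
            return [w / z for w in weights]

        # Hypothetical values: an on-target site and two off-target sites with mismatch/PAM penalties.
        print(site_occupancies([-12.0, -8.5, -6.0]))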

  4. The role of empathy in experiencing vicarious anxiety.

    PubMed

    Shu, Jocelyn; Hassell, Samuel; Weber, Jochen; Ochsner, Kevin N; Mobbs, Dean

    2017-08-01

    With depictions of others facing threats common in the media, the experience of vicarious anxiety may be prevalent in the general population. However, the phenomenon of vicarious anxiety (the experience of anxiety in response to observing others expressing anxiety) and the interpersonal mechanisms underlying it have not been fully investigated in prior research. In 4 studies, we investigate the role of empathy in experiencing vicarious anxiety, using film clips depicting target victims facing threats. In Studies 1 and 2, trait emotional empathy was associated with greater self-reported anxiety when observing target victims, and with perceiving greater anxiety to be experienced by the targets. Study 3 extended these findings by demonstrating that trait empathic concern (the tendency to feel concern and compassion for others) was associated with experiencing vicarious anxiety, whereas trait personal distress (the tendency to experience distress in stressful situations) was not. Study 4 manipulated state empathy to establish a causal relationship between empathy and experience of vicarious anxiety. Participants who took an empathic perspective when observing target victims, as compared to those who took an objective perspective using reappraisal-based strategies, reported experiencing greater anxiety, risk-aversion, and sleep disruption the following night. These results highlight the impact of one's social environment on experiencing anxiety, particularly for those who are highly empathic. In addition, these findings have implications for extending basic models of anxiety to incorporate interpersonal processes, understanding the role of empathy in social learning, and potential applications for therapeutic contexts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. A model of service and training: threat assessment on a community college campus.

    PubMed

    Mrad, David F; Hanigan, Antoni J S; Bateman, Joyce R

    2015-02-01

    Forensic psychological assessment for targeted violence is a growing area of practice and community need. These threat assessments are particularly challenging on community college campuses given the broad range of students and the limited internal resources. A collaborative model of partnership between a community college and the training clinic of a doctoral program in clinical psychology has been developed and implemented. The model provides needed service to the community college and rich training experiences for doctoral students in psychology. Implementation of similar partnerships in other settings may be limited by the training and experience of doctoral faculty and the openness of behavioral intervention teams to external participants.

  6. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.
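
    A rough sketch of the distance-based identification idea is given below: unknown parameters (reduced here to a single hand-eye translation, with invented robot poses and sphere readings) are estimated by least squares so that sphere centers seen from different poses reproduce certified center-to-center distances. The full error model in the paper also identifies the distal link parameters, which are omitted in this sketch.

        # Sketch of sphere center-to-center distance calibration (simplified, hypothetical data).
        import numpy as np
        from scipy.optimize import least_squares

        def world_center(pose_R, pose_p, center_in_sensor, t):
            # world = R @ (c_sensor + t) + p: a reduced hand-eye chain with translation t only.
            return pose_R @ (center_in_sensor + t) + pose_p

        def residuals(t, poses, sensor_centers, true_distances):
            pts = [world_center(R, p, c, t) for (R, p), c in zip(poses, sensor_centers)]
            d = [np.linalg.norm(pts[i] - pts[j])
                 for i in range(len(pts)) for j in range(i + 1, len(pts))]
            return np.asarray(d) - true_distances

        def Rz(a):
            return np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a), np.cos(a), 0.0],
                             [0.0, 0.0, 1.0]])

        poses = [(Rz(0.0), np.array([0.0, 0.0, 0.5])),
                 (Rz(0.6), np.array([0.1, 0.0, 0.5])),
                 (Rz(1.2), np.array([0.2, 0.1, 0.5]))]           # invented robot poses
        sensor_centers = [np.array([0.30, 0.02, 0.40]),
                          np.array([0.28, -0.05, 0.41]),
                          np.array([0.31, 0.04, 0.39])]          # invented sphere readings (m)
        true_distances = np.array([0.15, 0.14, 0.13])            # certified spacings (m), invented
        fit = least_squares(residuals, x0=np.zeros(3),
                            args=(poses, sensor_centers, true_distances))
        print("estimated hand-eye translation:", fit.x)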

  7. Indexing strategic retrieval of colour information with event-related potentials.

    PubMed

    Wilding, E L; Fraser, C S; Herron, J E

    2005-09-01

    Event-related potentials (ERPs) were acquired during two experiments in order to determine boundary conditions for when recollection of colour information can be controlled strategically. In initial encoding phases, participants saw an equal number of words presented in red or green. In subsequent retrieval phases, all words were shown in white. Participants were asked to endorse old words that had been shown at encoding in one colour (targets), and to reject new test words as well as old words shown in the alternate colour (non-targets). Study and test lists were longer in Experiment 1, and as a result, the accuracy of memory judgments was superior in Experiment 2. The left-parietal ERP old/new effect, the electrophysiological signature of recollection, was reliable for targets in both experiments, and reliable for non-targets in Experiment 1 only. These findings are consistent with the view that participants were able to restrict recollection to targets in Experiment 2, while recollecting information about targets as well as non-targets in Experiment 1. The fact that this selective strategy was implemented in Experiment 2 despite the close correspondence between the kinds of information associated with targets and non-targets indicates that participants were able to exert considerable control over the conditions under which recollection of task-relevant information occurred.

  8. Control-based continuation: Bifurcation and stability analysis for physical experiments

    NASA Astrophysics Data System (ADS)

    Barton, David A. W.

    2017-02-01

    Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. This method is demonstrated using a physical nonlinear tuned mass damper.
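
    The sketch below shows, in much reduced form, how stability can be read off from an ARX fit: a time-invariant ARX model is fitted by least squares to invented input/output data and its poles are checked against the unit circle. The paper itself fits a periodic (time-varying) ARX model around the periodic orbit, which this sketch does not reproduce.

        # Time-invariant ARX fit and pole check (invented data; simplified relative to the paper).
        import numpy as np

        def fit_arx(y, u, na=2, nb=1):
            """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares; return (a, b)."""
            rows, rhs = [], []
            for k in range(max(na, nb), len(y)):
                rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
                rhs.append(y[k])
            theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
            return theta[:na], theta[na:]

        def is_stable(a):
            # Poles are roots of z^na - a_1 z^(na-1) - ... - a_na; stable if all inside the unit circle.
            return bool(np.all(np.abs(np.roots(np.concatenate([[1.0], -np.asarray(a)]))) < 1.0))

        rng = np.random.default_rng(0)
        u = rng.normal(size=500)                     # invented control perturbation
        y = np.zeros(500)
        for k in range(2, 500):                      # lightly damped second-order response
            y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.1 * u[k - 1] + 0.01 * rng.normal()
        a, b = fit_arx(y, u)
        print("AR coefficients:", a, "stable:", is_stable(a))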

  9. Is attention based on spatial contextual memory preferentially guided by low spatial frequency signals?

    PubMed

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

  10. Is Attention Based on Spatial Contextual Memory Preferentially Guided by Low Spatial Frequency Signals?

    PubMed Central

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception. PMID:23776509

  11. How does consumer knowledge affect environmentally sustainable choices? Evidence from a cross-country latent class analysis of food labels.

    PubMed

    Peschel, Anne O; Grebitus, Carola; Steiner, Bodo; Veeman, Michele

    2016-11-01

    This paper examines consumers' knowledge and lifestyle profiles and preferences regarding two environmentally labeled food staples, potatoes and ground beef. Data from online choice experiments conducted in Canada and Germany are analyzed through latent class choice modeling to identify the influence of consumer knowledge (subjective and objective knowledge as well as usage experience) on environmentally sustainable choices. We find that irrespective of product or country under investigation, high subjective and objective knowledge levels drive environmentally sustainable food choices. Subjective knowledge was found to be more important in this context. Usage experience had relatively little impact on environmentally sustainable choices. Our results suggest that about 20% of consumers in both countries are ready to adopt footprint labels in their food choices. Another 10-20% could be targeted by enhancing subjective knowledge, for example through targeted marketing campaigns. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Design and performance of the spin asymmetries of the nucleon experiment

    NASA Astrophysics Data System (ADS)

    Maxwell, J. D.; Armstrong, W. R.; Choi, S.; Jones, M. K.; Kang, H.; Liyanage, A.; Meziani, Z.-E.; Mulholland, J.; Ndukum, L.; Rondón, O. A.; Ahmidouch, A.; Albayrak, I.; Asaturyan, A.; Ates, O.; Baghdasaryan, H.; Boeglin, W.; Bosted, P.; Brash, E.; Brock, J.; Butuceanu, C.; Bychkov, M.; Carlin, C.; Carter, P.; Chen, C.; Chen, J.-P.; Christy, M. E.; Covrig, S.; Crabb, D.; Danagoulian, S.; Daniel, A.; Davidenko, A. M.; Davis, B.; Day, D.; Deconinck, W.; Deur, A.; Dunne, J.; Dutta, D.; El Fassi, L.; Elaasar, M.; Ellis, C.; Ent, R.; Flay, D.; Frlez, E.; Gaskell, D.; Geagla, O.; German, J.; Gilman, R.; Gogami, T.; Gomez, J.; Goncharenko, Y. M.; Hashimoto, O.; Higinbotham, D. W.; Horn, T.; Huber, G. M.; Jones, M.; Kalantarians, N.; Kang, H. K.; Kawama, D.; Keith, C.; Keppel, C.; Khandaker, M.; Kim, Y.; King, P. M.; Kohl, M.; Kovacs, K.; Kubarovsky, V.; Li, Y.; Liyanage, N.; Luo, W.; Mamyan, V.; Markowitz, P.; Maruta, T.; Meekins, D.; Melnik, Y. M.; Mkrtchyan, A.; Mkrtchyan, H.; Mochalov, V. V.; Monaghan, P.; Narayan, A.; Nakamura, S. N.; Nuruzzaman; Pentchev, L.; Pocanic, D.; Posik, M.; Puckett, A.; Qiu, X.; Reinhold, J.; Riordan, S.; Roche, J.; Sawatzky, B.; Shabestari, M.; Slifer, K.; Smith, G.; Soloviev, L.; Solvignon, P.; Tadevosyan, V.; Tang, L.; Vasiliev, A. N.; Veilleux, M.; Walton, T.; Wesselmann, F.; Wood, S. A.; Yao, H.; Ye, Z.; Zhu, L.

    2018-03-01

    The Spin Asymmetries of the Nucleon Experiment (SANE) performed inclusive, double-polarized electron scattering measurements of the proton at the Continuous Electron Beam Accelerator Facility at Jefferson Lab. A novel detector array observed scattered electrons of four-momentum transfer 2.5

  13. a Landmark Extraction Method Associated with Geometric Features and Location Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.

    2018-04-01

    Landmarks play an important role in spatial cognition and in the organization of spatial knowledge. Significance-measuring models are the main method of landmark extraction, but they have difficulty accounting for the spatial distribution pattern of landmarks because landmark significance is defined in a one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap, and field of view is proposed. Using the influence regions of a Voronoi diagram, the target gap is described as a geometric representation of the distribution of adjacent targets. A segmentation of the visual domain based on Voronoi K-order adjacency is then used to establish the target view under multiple viewpoints; finally, landmarks are identified through the three kinds of weighted geometric features. Comparative experiments show that the method largely agrees with the results of a traditional significance-measuring model, which verifies its effectiveness and reliability, while reducing the complexity of the landmark extraction process without losing the reference value of the landmarks.

  14. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. Visual target tracking is an effective method for analyzing human motor function. In a previous simulation study, we examined the possibility of improving hand movement during visual target tracking by means of an additional assistant force. In this study, a method for compensating human hand movement during visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated in experiments with four healthy adults. The proposed compensator clearly improved the reaction time, the position error, and the variability of the velocity of the hand. The model-based compensator is constructed from visual target tracking measurement data for each subject, so the properties of each subject's hand movement are reflected in the structure of the compensator. The proposed method therefore has the potential to accommodate the individual characteristics of patients with movement disorders caused by brain dysfunction.

  15. Pivots for pointing: visually-monitored pointing has higher arm elevations than pointing blindfolded.

    PubMed

    Wnuczko, Marta; Kennedy, John M

    2011-10-01

    Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored than blindfolded pointing. In Experiment 2, pointing to a visible target they elevated a short pointer more than a long one, raising its tip to the line of sight. In Experiment 3, the Experimenter aligned the participant's arm with the target. Participants judged they were pointing below a visually monitored target. In Experiment 4, participants viewing another person pointing, eyes-open or eyes-closed, judged the target was aligned with the pointing arm. In Experiment 5, participants viewed their arm and the target via a mirror and posed their arm so that it was aligned with the target. Arm elevation was higher in pointing directly.

  16. Economic communication model set

    NASA Astrophysics Data System (ADS)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research that investigates economic communications using agent-based models. A set of agent-based models was engineered to simulate economic communications. Money, in the form of internal and external currencies, was introduced into the models to support exchanges. Although all models are based on the same general concept, each has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Data sets of different origins were used in the experiments: theoretical sets were estimated from the static Leontief equilibrium equation, and a real set was constructed from statistical data. During the simulation experiments, the communication process was observed dynamically and system macroparameters were estimated. This research confirmed that combining agent-based and mathematical models can produce a synergetic effect.

  17. Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    PubMed Central

    Supèr, Hans; Romeo, August

    2012-01-01

    A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370

  18. A Car-Steering Model Based on an Adaptive Neuro-Fuzzy Controller

    NASA Astrophysics Data System (ADS)

    Amor, Mohamed Anis Ben; Oda, Takeshi; Watanabe, Shigeyoshi

    This paper is concerned with the development of a car-steering model for traffic simulation. Our focus is to propose a model of the steering behavior of a human driver across different driving scenarios, which are modeled in a unified framework using the idea of a target position. The proposed approach captures the driver’s approximation and decision-making mechanisms in tracking a target position by means of fuzzy set theory. The main novelty lies in a learning algorithm that imitates the driver’s self-learning from driving experience and mimics the driver’s maneuvers on the steering wheel, using linear networks as local approximators in the corresponding fuzzy regions. Results from the simulation of an obstacle-avoidance scenario show the model’s capability to produce human-like behavior with an emphasis on learned skills.

  19. Surface characteristics modeling and performance evaluation of urban building materials using LiDAR data.

    PubMed

    Li, Xiaolu; Liang, Yu

    2015-05-20

    Analysis of light detection and ranging (LiDAR) intensity data to extract surface features is of great interest in remote sensing research. One potential application of LiDAR intensity data is target classification. A new bidirectional reflectance distribution function (BRDF) model is derived for target characterization of rough and smooth surfaces. Based on the geometry of our coaxial full-waveform LiDAR system, the integration method is improved through coordinate transformation to establish the relationship between the BRDF model and intensity data of LiDAR. A series of experiments using typical urban building materials are implemented to validate the proposed BRDF model and integration method. The fitting results show that three parameters extracted from the proposed BRDF model can distinguish the urban building materials from perspectives of roughness, specular reflectance, and diffuse reflectance. A comprehensive analysis of these parameters will help characterize surface features in a physically rigorous manner.
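
    As a generic illustration (not the specific BRDF derived in the paper), the sketch below fits a simple diffuse-plus-specular reflectance curve to invented monostatic intensity measurements as a function of incidence angle, recovering three parameters analogous to the roughness, specular, and diffuse descriptors mentioned above.

        # Generic diffuse + specular fit to LiDAR-style intensity data (invented values).
        import numpy as np
        from scipy.optimize import curve_fit

        def reflectance(theta, kd, ks, roughness):
            """kd: diffuse albedo, ks: specular strength, roughness: specular lobe width (rad)."""
            return kd * np.cos(theta) + ks * np.exp(-(theta / roughness) ** 2)

        theta = np.deg2rad(np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0]))
        intensity = np.array([1.00, 0.88, 0.72, 0.61, 0.50, 0.40, 0.30])   # invented returns
        params, _ = curve_fit(reflectance, theta, intensity, p0=[0.5, 0.5, 0.3])
        print("kd, ks, roughness =", params)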

  20. Reexamining the heavy-ion reactions 238U+238U and 238U+248Cm and actinide production close to the barrier

    NASA Astrophysics Data System (ADS)

    Kratz, J. V.; Schädel, M.; Gäggeler, H. W.

    2013-11-01

    Recent theoretical work has renewed interest in radiochemically determined isotope distributions in reactions of 238U projectiles with heavy targets that had previously been published only in parts. These data are being reexamined. The cross sections σ(Z) below the uranium target have been determined as a function of incident energy in thick-target bombardments. These are compared to predictions by a diffusion model whereby consistency with the experimental data is found in the energy intervals 7.65-8.30 MeV/u and 6.06-7.50 MeV/u. In the energy interval 6.06-6.49 MeV/u, the experimental data are lower by a factor of 5 compared to the diffusion model prediction indicating a threshold behavior for massive charge and mass transfer close to the barrier. For the intermediate energy interval, the missing mass between the primary fragment masses deduced from the generalized Qgg systematics including neutron pair-breaking corrections and the centroid of the experimental isotope distributions as a function of Z have been used to determine the average excitation energy as a function of Z. From this, the Z dependence of the average total kinetic-energy loss (TKEL¯) has been determined. This is compared to that measured in a thin-target counter experiment at 7.42 MeV/u. For small charge transfers, the values of TKEL¯ of this work are typically about 30 MeV lower than in the thin-target experiment. This difference is decreasing with increasing charge transfer developing into even slightly larger values in the thick-target experiment for the largest charge transfers. This is the expected behavior which is also found in a comparison of the partial cross sections for quasielastic and deep-inelastic reactions in both experiments. The cross sections for surviving heavy actinides, e.g., 98Cf, 99Es, and 100Fm indicate that these are produced in the low-energy tails of the dissipated energy distributions, however, with a low-energy cutoff at about 35 MeV. Excitation functions show that identical isotope distributions are populated independent of the bombarding energy indicating that the same bins of excitation energy are responsible for the production of these fissile isotopes. A comparison of the survival probabilities of the residues of equal charge and neutron transfers in the reactions of 238U projectiles with either 238U or 248Cm targets is consistent with such a cutoff as evaporation calculations assign the surviving heavy actinides to the 3n and/or 4n evaporation channels.

  1. MODELING THE EFFECTS OF SENSORY REINFORCERS ON BEHAVIORAL PERSISTENCE WITH ALTERNATIVE REINFORCEMENT

    PubMed Central

    Sweeney, Mary M.; Moore, Keira; Shahan, Timothy A.; Ahearn, William H.; Dube, William V.; Nevin, John A.

    2014-01-01

    Problem behavior often has sensory consequences that cannot be separated from the target response, even if external, social reinforcers are removed during treatment. Because sensory reinforcers that accompany socially mediated problem behavior may contribute to persistence and relapse, research must develop analog sensory reinforcers that can be experimentally manipulated. In this research, we devised analogs to sensory reinforcers in order to control for their presence and determine how sensory reinforcers may impact treatment efficacy. Experiments 1 and 2 compared the efficacy of differential reinforcement of alternative behavior (DRA) versus noncontingent reinforcement (NCR) with and without analog sensory reinforcers in a multiple schedule. Experiment 1 measured the persistence of key pecking in pigeons, whereas Experiment 2 measured the persistence of touchscreen responses in children with intellectual and developmental disabilities. Across both experiments, the presence of analog sensory reinforcers increased the levels, persistence, and variability of responding relative to when analog sensory reinforcers were absent. Also in both experiments, target responding was less persistent under conditions of DRA compared to NCR regardless of the presence or absence of analog sensory reinforcers. PMID:25130416

  2. Transport simulations of linear plasma generators with the B2.5-Eirene and EMC3-Eirene codes

    DOE PAGES

    Rapp, Juergen; Owen, Larry W.; Bonnin, X.; ...

    2014-12-20

    Linear plasma generators are cost effective facilities to simulate divertor plasma conditions of present and future fusion reactors. For this research, the codes B2.5-Eirene and EMC3-Eirene were extensively used for design studies of the planned Material Plasma Exposure eXperiment (MPEX). Effects on the target plasma of the gas fueling and pumping locations, heating power, device length, magnetic configuration and transport model were studied with B2.5-Eirene. Effects of tilted or vertical targets were calculated with EMC3-Eirene and showed that spreading the incident flux over a larger area leads to lower density, higher temperature and off-axis profile peaking in front of the target. In conclusion, the simulations indicate that with sufficient heating power MPEX can reach target plasma conditions that are similar to those expected in the ITER divertor. B2.5-Eirene simulations of the MAGPIE experiment have been carried out in order to establish an additional benchmark with experimental data from a linear device with helicon wave heating.

  3. A novel 3-dimensional electromagnetic guidance system increases intraoperative microwave antenna placement accuracy.

    PubMed

    Sastry, Amit V; Swet, Jacob H; Murphy, Keith J; Baker, Erin H; Vrochides, Dionisios; Martinie, John B; McKillop, Iain H; Iannitti, David A

    2017-12-01

    Failure to locate lesions and accurately place microwave antennas can lead to incomplete tumor ablation. The Emprint™ SX Ablation Platform employs real-time 3D-electromagnetic spatial antenna tracking to generate intraoperative laparoscopic antenna guidance. We sought to determine whether Emprint™ SX affected time/accuracy of antenna-placement in a laparoscopic training model. Targets (7-10 mm) were set in agar within a laparoscopic training device. Novices (no surgical experience), intermediates (surgical residents), and experts (HPB-surgeons) were asked to locate and hit targets using a MWA antenna (10-ultrasound only, 10-Emprint™ SX). Time to locate target, number of attempts to hit the target, first-time hit rate, and time from initiating antenna advance to hitting the target were measured. Participants located 100% of targets using ultrasound, with experts taking significantly less time than novices and intermediates. Using ultrasound only, successful hit-rates were 70% for novices and 90% for intermediates and experts. Using Emprint™ SX, successful hit rates for all 3-groups were 100%, with significantly increased first-time hit-rates and reduced time required to hit targets compared to ultrasound only. Emprint™ SX significantly improved accuracy and speed of antenna-placement independent of experience, and was particularly beneficial for novice users. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

  4. Signal Enhancement and Suppression During Visual-Spatial Selective Attention

    PubMed Central

    Couperus, J. W.; Mangun, G.R.

    2010-01-01

    Selective attention involves the relative enhancement of relevant versus irrelevant stimuli. However, whether this relative enhancement involves primarily enhancement of attended stimuli, or suppression of irrelevant stimuli, remains controversial. Moreover, if both enhancement and suppression are involved, whether they result from a single mechanism or separate mechanisms during attentional control or selection is not known. In two experiments using a spatial cuing paradigm with task-relevant targets and irrelevant distractors, target and distractor processing was examined as a function of distractor expectancy. Additionally, in the second study the interaction of perceptual load and distractor expectancy was explored. In both experiments, distractors were either validly cued (70%) or invalidly cued (30%) in order to examine the effects of distractor expectancy on attentional control as well as target and distractor processing. The effects of distractor expectancy were assessed using event-related potentials recorded during the cue-to-target period (preparatory attention) and in response to the task-relevant target stimuli (selective stimulus processing). Analyses of distractor-present displays (anticipated versus unanticipated) showed modulations in brain activity during both the preparatory period and during target processing. The pattern of brain responses suggests both facilitation of attended targets and suppression of unattended distractors. These findings provide evidence for a two-process model of visual spatial selective attention, where one mechanism (facilitation) influences relevant stimuli and another (suppression) acts to filter distracting stimuli. PMID:20807513

  5. Application of plug-plug technique to ACE experiments for discovery of peptides binding to a larger target protein: a model study of calmodulin-binding fragments selected from a digested mixture of reduced BSA.

    PubMed

    Saito, Kazuki; Nakato, Mamiko; Mizuguchi, Takaaki; Wada, Shinji; Uchimura, Hiromasa; Kataoka, Hiroshi; Yokoyama, Shigeyuki; Hirota, Hiroshi; Kiso, Yoshiaki

    2014-03-01

    To discover peptide ligands that bind to a target protein with a higher molecular mass, a concise screening methodology has been established, by applying a "plug-plug" technique to ACE experiments. Exploratory experiments using three mixed peptides, mastoparan-X, β-endorphin, and oxytocin, as candidates for calmodulin-binding ligands, revealed that the technique not only reduces the consumption of the protein sample, but also increases the flexibility of the experimental conditions, by allowing the use of MS detection in the ACE experiments. With the plug-plug technique, the ACE-MS screening methodology successfully selected calmodulin-binding peptides from a random library with diverse constituents, such as protease digests of BSA. Three peptides with Kd values between 8-147 μM for calmodulin were obtained from a Glu-C endoprotease digest of reduced BSA, although the digest showed more than 70 peaks in its ACE-MS electropherogram. The method established here will be quite useful for the screening of peptide ligands, which have only low affinities due to their flexible chain structures but could potentially provide primary information for designing inhibitors against the target protein. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
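
    For the IPM case, the "minimal spread containing all observations" idea can be posed as a small linear program, as in the generic sketch below (invented data, a linear basis, and mean spread as the objective; the paper's formulations are more general).

        # Generic interval predictor model: minimal mean-spread linear bounds containing all data.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 30)
        y = 2.0 * x + 0.5 + rng.uniform(-0.3, 0.3, size=x.size)   # invented observations

        A = np.column_stack([np.ones_like(x), x])   # basis [1, x] for both bounds
        n = A.shape[1]
        # Variables: lower-bound coefficients cl (n values) then upper-bound coefficients cu (n values).
        # Minimize the mean spread subject to A@cl <= y <= A@cu at every observation.
        c = np.concatenate([-A.mean(axis=0), A.mean(axis=0)])
        A_ub = np.block([[A, np.zeros_like(A)],     # A@cl <= y
                         [np.zeros_like(A), -A]])   # -A@cu <= -y, i.e. y <= A@cu
        b_ub = np.concatenate([y, -y])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
        cl, cu = res.x[:n], res.x[n:]
        print("lower bound coefficients:", cl, "upper bound coefficients:", cu)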

  7. Design and Fabrication of DebriSat - A Representative LEO Satellite for Improvements to Standard Satellite Breakup Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Fitz-Coy, N.; Weremeyer, M.; Liou, J.-C.

    2012-01-01

    This paper discusses the design and fabrication of DebriSat, a 50 kg satellite developed to be representative of a modern low Earth orbit satellite in terms of its components, materials used, and fabrication procedures. DebriSat will be the target of a future hypervelocity impact experiment to determine the physical characteristics of debris generated after an on-orbit collision of a modern LEO satellite. The major ground-based satellite impact experiment used by DoD and NASA in their development of satellite breakup models was SOCIT, conducted in 1992. The target used for that experiment was a Navy transit satellite (40 cm, 35 kg) fabricated in the 1960's. Modern satellites are very different in materials and construction techniques than those built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. To ensure that DebriSat is truly representative of typical LEO missions, a comprehensive study of historical LEO satellite designs and missions within the past 15 years for satellites ranging from 1 kg to 5000 kg was conducted. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions. Although DebriSat is an engineering model, specific attention is placed on the quality, type, and quantity of the materials used in its fabrication to ensure the integrity of the outcome. With the exception of software, all other aspects of the satellite's design, fabrication, and assembly integration and testing will be as rigorous as that of an actual flight vehicle. For example, to simulate survivability of launch loads, DebriSat will be subjected to a vibration test. As well, the satellite will undergo thermal vacuum tests to verify that the components and overall systems meet typical environmental standards. Proper assembly and integration techniques will involve comprehensive joint analysis, including the precise torqueing of fasteners and thread locking. Finally, the implementation of process documentation and verification procedures is discussed to provide a comprehensive overview of the design and fabrication of this representative LEO satellite.

  8. A model of the formation of illusory conjunctions in the time domain.

    PubMed

    Botella, J; Suero, M; Barriopedro, M I

    2001-12-01

    The authors present a model to account for the miscombination of features when stimuli are presented using the rapid serial visual presentation (RSVP) technique (illusory conjunctions in the time domain). It explains the distributions of responses through a mixture of trial outcomes. In some trials, attention is successfully focused on the target, whereas in others, the responses are based on partial information. Two experiments are presented that manipulated the mean processing time of the target-defining dimension and of the to-be-reported dimension, respectively. As predicted, the average origin of the responses is delayed when lengthening the target-defining dimension, whereas it is earlier when lengthening the to-be-reported dimension; in the first case the number of correct responses is dramatically reduced, whereas in the second it does not change. The results, a review of other research, and simulations carried out with a formal version of the model are all in close accordance with the predictions.
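
    A toy version of the mixture account can be written down directly: with probability p attention is focused and the target item (lag 0) is reported, otherwise the report is drawn from a distribution over neighbouring serial positions. The probabilities below are invented for illustration and are not the fitted values from the formal model.

        # Toy mixture of focused-attention and partial-information responses (invented parameters).
        import numpy as np

        rng = np.random.default_rng(2)
        p_focused = 0.6
        lags = np.array([-2, -1, 0, 1, 2])                        # reported item relative to target
        partial_probs = np.array([0.05, 0.20, 0.35, 0.30, 0.10])  # centred slightly after the target
        trials = 10_000
        focused = rng.random(trials) < p_focused
        reports = np.where(focused, 0, rng.choice(lags, size=trials, p=partial_probs))
        for lag in lags:
            print(f"lag {int(lag):+d}: {np.mean(reports == lag):.3f}")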

  9. Modeling target normal sheath acceleration using handoffs between multiple simulations

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard

    2013-10-01

    We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, full size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported to a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.

  10. Attention Modulates Spatial Precision in Multiple-Object Tracking.

    PubMed

    Srivastava, Nisheeth; Vul, Ed

    2016-01-01

    We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing it at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
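
    A very small sketch of the resource-coupling idea is given below: attention is allocated across targets in proportion to a bottom-up proxy for error likelihood (here simply target speed), and more attention buys a finer spatial representation. All numbers are invented; this is not the authors' model.

        # Toy attention allocation: more attention -> lower localization noise (invented numbers).
        import numpy as np

        speeds = np.array([2.0, 5.0, 1.0, 8.0])          # proxy for per-target error likelihood
        base_noise = 1.0                                  # localization sd under uniform attention
        attention = speeds / speeds.sum()                 # allocate attention toward risky targets
        noise_sd = base_noise / np.sqrt(attention * speeds.size)  # finer resolution where attended
        for name, sd in zip(["T1", "T2", "T3", "T4"], noise_sd):
            print(name, round(float(sd), 2))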

  11. Addressing recent docking challenges: A hybrid strategy to integrate template-based and free protein-protein docking.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Wang, Xinxiang; Huang, Sheng-You

    2017-03-01

    Protein-protein docking is an important computational tool for predicting protein-protein interactions. With the rapid development of proteomics projects, more and more experimental binding information ranging from mutagenesis data to three-dimensional structures of protein complexes is becoming available. Therefore, how to appropriately incorporate the biological information into traditional ab initio docking has been an important issue and challenge in the field of protein-protein docking. To address these challenges, we have developed a Hybrid DOCKing protocol of template-based and template-free approaches, referred to as HDOCK. The basic procedure of HDOCK is to model the structures of individual components based on the template complex by a template-based method if a template is available; otherwise, the component structures will be modeled based on monomer proteins by regular homology modeling. Then, the complex structure of the component models is predicted by traditional protein-protein docking. With the HDOCK protocol, we have participated in the CAPRI experiment for rounds 28-35. Out of the 25 CASP-CAPRI targets for oligomer modeling, our HDOCK protocol predicted correct models for 16 targets, ranking as one of the top algorithms in this challenge. Our docking method also made correct predictions on other CAPRI challenges such as protein-peptide binding for 6 out of 8 targets and water predictions for 2 out of 2 targets. The advantage of our hybrid docking approach over pure template-based docking was further confirmed by a comparative evaluation on 20 CASP-CAPRI targets. Proteins 2017; 85:497-512. © 2016 Wiley Periodicals, Inc.

  12. Steering a virtual blowfly: simulation of visual pursuit.

    PubMed

    Boeddeker, Norbert; Egelhaaf, Martin

    2003-09-22

    The behavioural repertoire of male flies includes visually guided chasing after moving targets. The visuomotor control system for these pursuits belongs to the fastest found in the animal kingdom. We simulated a virtual fly, to test whether or not experimentally established hypotheses on the underlying control system are sufficient to explain chasing behaviour. Two operating instructions for steering the chasing virtual fly were derived from behavioural experiments: (i) the retinal size of the target controls the fly's forward speed and, thus, indirectly its distance to the target; and (ii) a smooth pursuit system uses the retinal position of the target to regulate the fly's flight direction. Low-pass filters implement neuronal processing time. Treating the virtual fly as a point mass, its kinematics are modelled in consideration of the effects of translatory inertia and air friction. Despite its simplicity, the model shows behaviour similar to that of real flies. Depending on its starting position and orientation as well as on target size and speed, the virtual fly either catches the target or follows it indefinitely without capture. These two behavioural modes of the virtual fly emerge from the control system for flight steering without implementation of an explicit decision maker.
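
    The two operating instructions lend themselves to a compact point-mass simulation; the sketch below uses invented gains, time constants, and target motion (the authors' fitted parameters are not reproduced) and shows how the same controller can end in capture or in indefinite following.

        # Simplified 2-D pursuit with the two control rules plus low-pass "neural" filtering.
        import math

        dt, tau = 0.002, 0.03                 # time step and low-pass time constant (s), invented
        k_turn, k_speed, drag = 8.0, 40.0, 4.0
        target_radius = 0.005                 # retinal size ~ 2 * radius / distance

        x, y, heading, speed = 0.0, 0.0, 0.0, 0.0
        err_f, size_f = 0.0, 0.0              # filtered retinal error angle and retinal size
        for step in range(5000):
            t = step * dt
            tx, ty = 0.3 + 0.5 * t, 0.2 * math.sin(3.0 * t)        # invented target trajectory
            dist = math.hypot(tx - x, ty - y)
            err = math.atan2(ty - y, tx - x) - heading
            err = (err + math.pi) % (2.0 * math.pi) - math.pi       # wrap to (-pi, pi]
            size = 2.0 * target_radius / max(dist, 1e-6)
            err_f += dt / tau * (err - err_f)                       # low-pass: neuronal processing time
            size_f += dt / tau * (size - size_f)
            heading += k_turn * err_f * dt                          # rule (ii): position steers heading
            thrust = k_speed * (1.0 - min(size_f / 0.5, 1.0))       # rule (i): large image -> slow down
            speed += (thrust - drag * speed) * dt                   # translatory inertia, air friction
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            if dist < 0.01:
                print(f"target caught at t = {t:.3f} s")
                break
        else:
            print(f"no capture; following at distance {dist:.3f} m")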

  13. A ranking method for the concurrent learning of compounds with various activity profiles.

    PubMed

    Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas

    2015-01-01

    In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set) containing three different biological targets each. The experiments show that ranking-based algorithms yield increased performance for single- and multi-target virtual screening. Moreover, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile, compared to other multi-target SVM methods. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and the finding of many ligands with an already perfectly matching activity profile is not to be expected.

  14. Unified anomaly suppression and boundary extraction in laser radar range imagery based on a joint curve-evolution and expectation-maximization algorithm.

    PubMed

    Feng, Haihua; Karl, William Clem; Castañon, David A

    2008-05-01

    In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.

  15. Must analysis of meaning follow analysis of form? A time course analysis

    PubMed Central

    Feldman, Laurie B.; Milin, Petar; Cho, Kit W.; Moscoso del Prado Martín, Fermín; O’Connor, Patrick A.

    2015-01-01

    Many models of word recognition assume that processing proceeds sequentially from analysis of form to analysis of meaning. In the context of morphological processing, this implies that morphemes are processed as units of form prior to any influence of their meanings. Some interpret the apparent absence of differences in recognition latencies to targets (SNEAK) in form and semantically similar (sneaky-SNEAK) and in form similar and semantically dissimilar (sneaker-SNEAK) prime contexts at a stimulus onset asynchrony (SOA) of 48 ms as consistent with this claim. To determine the time course over which degree of semantic similarity between morphologically structured primes and their targets influences recognition in the forward masked priming variant of the lexical decision paradigm, we compared facilitation for the same targets after semantically similar and dissimilar primes across a range of SOAs (34–100 ms). The effect of shared semantics on recognition latency increased linearly with SOA when long SOAs were intermixed (Experiments 1A and 1B) and latencies were significantly faster after semantically similar than dissimilar primes at homogeneous SOAs of 48 ms (Experiment 2) and 34 ms (Experiment 3). Results limit the scope of form-then-semantics models of recognition and demonstrate that semantics influences even the very early stages of recognition. Finally, once general performance across trials has been accounted for, we fail to provide evidence for individual differences in morphological processing that can be linked to measures of reading proficiency. PMID:25852512

  16. Must analysis of meaning follow analysis of form? A time course analysis.

    PubMed

    Feldman, Laurie B; Milin, Petar; Cho, Kit W; Moscoso Del Prado Martín, Fermín; O'Connor, Patrick A

    2015-01-01

    Many models of word recognition assume that processing proceeds sequentially from analysis of form to analysis of meaning. In the context of morphological processing, this implies that morphemes are processed as units of form prior to any influence of their meanings. Some interpret the apparent absence of differences in recognition latencies to targets (SNEAK) in form and semantically similar (sneaky-SNEAK) and in form similar and semantically dissimilar (sneaker-SNEAK) prime contexts at a stimulus onset asynchrony (SOA) of 48 ms as consistent with this claim. To determine the time course over which degree of semantic similarity between morphologically structured primes and their targets influences recognition in the forward masked priming variant of the lexical decision paradigm, we compared facilitation for the same targets after semantically similar and dissimilar primes across a range of SOAs (34-100 ms). The effect of shared semantics on recognition latency increased linearly with SOA when long SOAs were intermixed (Experiments 1A and 1B) and latencies were significantly faster after semantically similar than dissimilar primes at homogeneous SOAs of 48 ms (Experiment 2) and 34 ms (Experiment 3). Results limit the scope of form-then-semantics models of recognition and demonstrate that semantics influences even the very early stages of recognition. Finally, once general performance across trials has been accounted for, we fail to provide evidence for individual differences in morphological processing that can be linked to measures of reading proficiency.

  17. Target Highlights in CASP9: Experimental Target Structures for the Critical Assessment of Techniques for Protein Structure Prediction

    PubMed Central

    Kryshtafovych, Andriy; Moult, John; Bartual, Sergio G.; Bazan, J. Fernando; Berman, Helen; Casteel, Darren E.; Christodoulou, Evangelos; Everett, John K.; Hausmann, Jens; Heidebrecht, Tatjana; Hills, Tanya; Hui, Raymond; Hunt, John F.; Jayaraman, Seetharaman; Joachimiak, Andrzej; Kennedy, Michael A.; Kim, Choel; Lingel, Andreas; Michalska, Karolina; Montelione, Gaetano T.; Otero, José M.; Perrakis, Anastassis; Pizarro, Juan C.; van Raaij, Mark J.; Ramelot, Theresa A.; Rousseau, Francois; Tong, Liang; Wernimont, Amy K.; Young, Jasmine; Schwede, Torsten

    2011-01-01

    One goal of the CASP Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction is to identify the current state of the art in protein structure prediction and modeling. A fundamental principle of CASP is blind prediction on a set of relevant protein targets, i.e. the participating computational methods are tested on a common set of experimental target proteins, for which the experimental structures are not known at the time of modeling. Therefore, the CASP experiment would not have been possible without broad support of the experimental protein structural biology community. In this manuscript, several experimental groups discuss the structures of the proteins which they provided as prediction targets for CASP9, highlighting structural and functional peculiarities of these structures: the long tail fibre protein gp37 from bacteriophage T4, the cyclic GMP-dependent protein kinase Iβ (PKGIβ) dimerization/docking domain, the ectodomain of the JTB (Jumping Translocation Breakpoint) transmembrane receptor, Autotaxin (ATX) in complex with an inhibitor, the DNA-Binding J-Binding Protein 1 (JBP1) domain essential for biosynthesis and maintenance of DNA base-J (β-D-glucosyl-hydroxymethyluracil) in Trypanosoma and Leishmania, a so far uncharacterized 73-residue domain from Ruminococcus gnavus with a fold typical of PDZ-like domains, a domain from the Phycobilisome (PBS) core-membrane linker (LCM) phycobiliprotein ApcE from Synechocystis, the Heat shock protein 90 (Hsp90) activators PFC0360w and PFC0270w from Plasmodium falciparum, and 2-oxo-3-deoxygalactonate kinase from Klebsiella pneumoniae. PMID:22020785

  18. Visual pop-out in barn owls: Human-like behavior in the avian brain.

    PubMed

    Orlowski, Julius; Beissel, Christian; Rohn, Friederike; Adato, Yair; Wagner, Hermann; Ben-Shahar, Ohad

    2015-01-01

    Visual pop-out is a phenomenon by which the latency to detect a target in a scene is independent of the number of other elements, the distractors. Pop-out is an effective form of visual-search guidance that typically occurs when the target is distinct in one feature from the distractors, thus facilitating fast detection of predators or prey. However, apart from studies on primates, pop-out has been examined in few species and demonstrated thus far only in rats, archer fish, and pigeons. To fill this gap, here we study pop-out in barn owls. These birds are a unique model system for such exploration because their lack of eye movements dictates visual behavior dominated by head movements. Head saccades and interspersed fixation periods can therefore be tracked and analyzed with a head-mounted wireless microcamera, the OwlCam. Using this methodology, we confronted two owls with scenes containing search arrays of one target among varying numbers (15-63) of similar-looking distractors. We tested targets distinct either by orientation (Experiment 1) or luminance contrast (Experiment 2). Search time and the number of saccades until the target was fixated remained largely independent of the number of distractors in both experiments. This suggests that barn owls can exhibit pop-out during visual search, thus expanding the group of species and brain structures that can cope with this fundamental visual behavior. The utility of our automatic analysis method for other species and scientific questions is also discussed.

  19. Extrapolation of vertical target motion through a brief visual occlusion.

    PubMed

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal are generally extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). The probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled, could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at its destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
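    The threshold-model account mentioned above can be illustrated with a minimal sketch: during occlusion the target state is extrapolated with an assumed acceleration, and the response is triggered once the predicted time-to-arrival falls below a threshold time, landing a central processing delay later. Everything below (parameter values, the first-order extrapolation rule) is an assumption for illustration, not the authors' fitted model.

```python
def predicted_press_time(v_at_occlusion, distance_to_arrival, a_assumed,
                         threshold_time=0.10, central_delay=0.05, dt=0.001):
    """Hypothetical threshold model: extrapolate the occluded target with an
    assumed acceleration and trigger the response when the predicted
    time-to-arrival drops below `threshold_time`; the button press lands
    `central_delay` seconds later."""
    t, z, v = 0.0, 0.0, v_at_occlusion
    while t < 5.0:                                   # safety cap on the search
        remaining = distance_to_arrival - z
        time_to_arrival = remaining / v if v > 0 else float("inf")
        if time_to_arrival <= threshold_time:
            return t + central_delay
        z += v * dt + 0.5 * a_assumed * dt * dt      # internal extrapolation
        v += a_assumed * dt
        t += dt
    return float("nan")

# For the same occlusion, an internal model assuming 1g triggers earlier
# than one assuming 0g (constant speed).
print(predicted_press_time(2.0, 0.5, 9.81))
print(predicted_press_time(2.0, 0.5, 0.0))
```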

  20. Visual search for conjunctions of physical and numerical size shows that they are processed independently.

    PubMed

    Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D

    2017-03-01

    The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Simulation study of neutron production in thick beryllium targets by 35 MeV and 50.5 MeV proton beams

    NASA Astrophysics Data System (ADS)

    Shin, Jae Won; Park, Tae-Sun

    2017-09-01

    A data-driven nuclear model dedicated to an accurate description of neutron production in beryllium targets bombarded by proton beams is developed as a custom add-on to the GEANT4 code. The model, G4Data(Endf7.1), takes as inputs the total and differential cross-section data of ENDF/B-VII.1 for not only the charge-exchange 9Be(p,n)9B reaction, which produces discrete neutrons, but also the nuclear reactions relevant for the production of continuum neutrons, such as 9Be(p,pn)8Be and 9Be(p,nα)5Li. In benchmarking simulations of two experiments with 35 MeV and 50.5 MeV proton beams impinging on 1.16 and 1.05 cm thick beryllium targets, respectively, we find that the G4Data(Endf7.1) model reproduces both the total amounts and the spectral shapes of the measured neutron yield data in a satisfactory manner, whereas all the considered hadronic models of GEANT4 do not.
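    As a rough illustration of the data-driven idea (not the actual G4Data(Endf7.1) implementation), the sketch below draws outgoing neutron energies from a tabulated differential cross section by inverse-transform sampling. The table values are placeholders, not ENDF/B-VII.1 data.

```python
import bisect
import random

# Placeholder table: outgoing-neutron energy grid (MeV) and relative
# differential cross-section values (arbitrary units; not real ENDF data).
energies = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0]
dsigma   = [0.2, 0.6, 1.0, 0.8, 0.5, 0.2, 0.05]

def build_cdf(e, w):
    """Trapezoid-rule cumulative distribution over the tabulated spectrum."""
    cdf = [0.0]
    for i in range(1, len(e)):
        cdf.append(cdf[-1] + 0.5 * (w[i] + w[i - 1]) * (e[i] - e[i - 1]))
    total = cdf[-1]
    return [c / total for c in cdf]

def sample_energy(e, w):
    """Inverse-transform sample of one neutron energy from the spectrum."""
    cdf = build_cdf(e, w)
    u = random.random()
    i = min(max(bisect.bisect_left(cdf, u), 1), len(e) - 1)
    frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return e[i - 1] + frac * (e[i] - e[i - 1])

print([round(sample_energy(energies, dsigma), 2) for _ in range(5)])
```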

  2. Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning.

    PubMed

    Yang, Yehui; Hu, Wenrui; Xie, Yuan; Zhang, Wensheng; Zhang, Tianzhu

    2017-02-01

    An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits robustness when facing severe challenges such as abrupt motion. To avoid this limitation, we propose a temporal restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of the target at a global level; 2) because the appearance consistency may be broken when the target suffers sudden changes, we propose a local constraint via the l1,2 mixed norm, which not only ensures the local consistency of target appearance but also tolerates sudden changes between two adjacent frames; and 3) to alleviate the influence of unreasonable representation values due to outlier candidates, an adaptive weighting scheme is designed to improve the robustness of the tracker. In evaluations on 26 challenging video sequences, the experiments show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.
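    For readers unfamiliar with the mixed norm used in advantage 2), a minimal sketch of one common convention (the sum of column-wise l2 norms) and its proximal operator is given below; this is illustrative only and does not reproduce the tracker's optimization.

```python
import numpy as np

def mixed_norm_12(X):
    """One common convention for the l1,2 mixed norm: the sum of the
    l2 norms of the columns of X."""
    return float(np.sum(np.linalg.norm(X, axis=0)))

def prox_mixed_norm_12(X, lam):
    """Column-wise group soft-thresholding, the proximal operator of
    lam * mixed_norm_12: low-energy columns are zeroed out, the rest are
    shrunk while keeping their direction (a group-consistency effect)."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return X * scale

X = np.array([[3.0, 0.1, -1.0],
              [4.0, 0.2,  1.0]])
print(mixed_norm_12(X))            # 5.0 + ~0.22 + ~1.41
print(prox_mixed_norm_12(X, 0.5))  # the weak middle column is zeroed
```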

  3. Urban Street Gang Enforcement.

    ERIC Educational Resources Information Center

    Institute for Law and Justice, Inc., Alexandria, VA.

    Strategies to enhance prosecution of gang-related crimes are presented, with a focus on enforcement and prosecution targeting urban street gangs. The model programs introduced offer strategies largely based on the practical experiences of agencies that participated in a demonstration program, the Urban Street Gang Drug Trafficking Enforcement…

  4. Adaptive Focusing For Ultrasonic Transcranial Brain Therapy: First In Vivo Investigation On 22 Sheep

    NASA Astrophysics Data System (ADS)

    Pernot, Mathieu; Aubry, Jean-François; Tanter, Mickael; Boch, Anne Laure; Kujas, Michelle; Fink, Mathias

    2005-03-01

    A high-power prototype dedicated to trans-skull therapy has been tested in vivo on 22 sheep. The array is made of 300 high-power transducers working at a 1 MHz central frequency and is able to achieve 400 bars at the focus in water for five seconds with a 50% duty cycle. In the first series of experiments, 10 sheep were treated and sacrificed immediately after treatment. A complete craniotomy was performed on half of the treated animals in order to obtain a reference model. On the other half, minimally invasive surgery was performed: a hydrophone was inserted at a given target location inside the brain through a craniotomy of a few mm2. A time-reversal experiment was then conducted through the skull bone with the therapeutic array to treat the targeted point. Thanks to the high-power technology of the prototype, trans-skull adaptive treatment could be achieved. In a second series of experiments, 12 animals were divided into three groups and sacrificed one, two, or three weeks after treatment, respectively. Finally, magnetic resonance imaging and histological examination were performed to confirm tissue damage.

  5. Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane

    PubMed Central

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874

  6. Neural extrapolation of motion for a ball rolling down an inclined plane.

    PubMed

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  7. Nuclear fragmentation studies for microelectronic application

    NASA Technical Reports Server (NTRS)

    Ngo, Duc M.; Wilson, John W.; Buck, Warren W.; Fogarty, Thomas N.

    1989-01-01

    A formalism for target fragment transport is presented with application to energy loss spectra in thin silicon devices. Predicted results are compared to experiments with the surface barrier detectors of McNulty et al. The intranuclear cascade nuclear reaction model does not predict the McNulty experimental data for the highest energy events. A semiempirical nuclear cross section gives an adequate explanation of McNulty's experiments. Application of the formalism to specific electronic devices is discussed.

  8. Are All Letters Really Processed Equally and in Parallel? Further Evidence of a Robust First Letter Advantage

    PubMed Central

    Scaltritti, Michele; Balota, David A.

    2013-01-01

    The present study examined accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of 4 experiments, target strings were briefly displayed (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of the mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across different lengths, there was a robust advantage in performance when the alternative response differed for the letter occurring at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first-position advantage with legal nonwords and strings of consonants, but did not provide any first-position advantage for non-alphabetic symbols. The lack of a first-position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together, these results suggest that the first-position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the commonly held assumption in visual word recognition models that letters are processed equally and in parallel independent of letter position. PMID:24012723

  9. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

    A novel infrared small target detection method based on sparse representation of sky clutter and target is proposed in this paper to cope with the uncertainty in representing clutter and target. The sky scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation on a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model, and it is expressed by the generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed on the union of the FBOD and GGTOD, and the sparse representation energies of the target signal and the background clutter on the GGTOD differ so distinctly that this energy is adopted to distinguish target from clutter. Experiments were conducted, and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, because the background clutter can be efficiently perceived and suppressed by the FBOD and the changing target can be represented accurately by the GGTOD.
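    The separation idea, decomposing an image patch over the union of a background dictionary and a target dictionary and comparing the energies of the two coefficient groups, can be sketched as follows. The random dictionaries are placeholders for the FBOD/GGTOD constructions, and orthogonal matching pursuit stands in for whatever sparse solver the authors used.

```python
import numpy as np

def omp(D, y, n_nonzero=8):
    """Plain orthogonal matching pursuit: greedy sparse coding of y over D."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

rng = np.random.default_rng(0)
patch_dim, n_bg, n_tg = 64, 128, 32
D_bg = rng.standard_normal((patch_dim, n_bg))   # placeholder for the FBOD
D_tg = rng.standard_normal((patch_dim, n_tg))   # placeholder for the GGTOD
D = np.hstack([D_bg, D_tg])
D /= np.linalg.norm(D, axis=0)

patch = rng.standard_normal(patch_dim)          # an image patch (placeholder)
x = omp(D, patch)

# The energy split between the two coefficient groups decides target vs. clutter.
bg_energy, tg_energy = np.sum(x[:n_bg] ** 2), np.sum(x[n_bg:] ** 2)
print("target-like" if tg_energy > bg_energy else "clutter-like")
```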

  10. Coherent and Noncoherent Joint Processing of Sonar for Detection of Small Targets in Shallow Water.

    PubMed

    Pan, Xiang; Jiang, Jingning; Li, Si; Ding, Zhenping; Pan, Chen; Gong, Xianyi

    2018-04-10

    A coherent-noncoherent joint processing framework is proposed for active sonar to combine diversity gain and beamforming gain for detection of a small target in shallow-water environments. The sonar uses widely spaced arrays to sense the environment and illuminate a target of interest from multiple angles. Meanwhile, it exploits spatial diversity for time-reversal focusing to suppress reverberation, mainly strong bottom reverberation. To enhance the robustness of time-reversal focusing, an adaptive iterative strategy is employed in the processing framework: a probing signal is first transmitted, and echoes of a likely target are used as steering vectors for the second transmission. With spatial diversity, target bearing and range are estimated using a broadband signal model. Numerical simulations show that the novel sonar outperforms traditional phased-array sonar owing to the benefits of spatial diversity. The effectiveness of the proposed framework has been validated by localization of a small target in lake experiments.
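    Time-reversal focusing itself is straightforward to sketch: record the echo of the probe transmission on each array element, play it backwards, and retransmit it, so that the per-element delays cancel and energy refocuses on the scatterer. The geometry, sampling, and idealized impulse echoes below are invented for illustration; they are not the authors' signal model.

```python
import numpy as np

c, dt, n = 1500.0, 1e-5, 4096                  # sound speed (m/s), sample period (s)
elements = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, 4.0]])
scatterer = np.array([20.0, 1.0])
d = np.linalg.norm(elements - scatterer, axis=1)

# Probe ping from element 0; idealized impulse echoes recorded on every element.
echoes = np.zeros((len(elements), n))
for i in range(len(elements)):
    echoes[i, int(round((d[0] + d[i]) / c / dt))] = 1.0

# Time reversal: each element retransmits its own recording played backwards.
tx = echoes[:, ::-1]

# The retransmitted pulses reach the scatterer at (nearly) the same instant,
# which is the refocusing property exploited to suppress reverberation.
arrivals = [np.argmax(tx[i]) * dt + d[i] / c for i in range(len(elements))]
print(np.allclose(arrivals, arrivals[0], atol=2 * dt))
```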

  11. Fabrication of 121Sb isotopic targets for the study of nuclear high spin features

    NASA Astrophysics Data System (ADS)

    Devi, K. Rojeeta; Kumar, Suresh; Kumar, Neeraj; Abhilash, S. R.; Kabiraj, D.

    2018-06-01

    Isotopic 121Sb targets with 197Au backing have been prepared by the physical vapor deposition (PVD) method using the diffusion-pump-based coating unit at the target laboratory of the Inter University Accelerator Centre (IUAC), New Delhi, India. The target thickness was measured with a stylus profilometer, and the purity of the targets was investigated by Energy Dispersive X-ray Analysis (EDXA). One of these targets has been used in an experiment performed at IUAC for a nuclear structure study through a fusion-evaporation reaction. The excitation function of the 121Sb(12C, yxnγ) reaction has been measured for beam energies from 58 to 70 MeV in steps of 4 MeV, and the experimental results were compared with calculations from the statistical models PACE4 and CASCADE. The methods adopted to achieve the best-quality foils and good deposition efficiency are reported in this paper.

  12. Optical Radiation from Shock-Compressed Materials. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Svendsen, Robert F., Jr.

    1987-01-01

    Recent observations of shock-induced radiation from oxides, silicates, and metals of geophysical interest constrain the shock-compressed temperature of these materials. The relationships between the temperature inferred from the observed radiation and the temperature of the shock-compressed film or foil and/or window were investigated. Changes of the temperature field in each target component away from that of their respective shock-compressed states occur because of: shock-impedance mismatch between target components; thermal mismatch between target components; surface roughness at target interfaces; and conduction within and between target components. In particular, conduction may affect the temperature of the film/foil window interface on the time scale of the experiments, and so control the intensity and history of the dominant thermal radiation sources in the target. This type of model was used to interpret the radiation emitted by a variety of shock-compressed materials and interfaces.

  13. Congruency effects in the remote distractor paradigm: evidence for top-down modulation.

    PubMed

    Born, Sabine; Kerzel, Dirk

    2009-08-10

    In three experiments, we examined effects of target-distractor similarity in the remote distractor effect (RDE). Observers made saccades to peripheral targets that were either gray or green. Foveal or peripheral distractors were presented at the same time. The distractors could either share the target's defining property (congruent) or be different from the target (incongruent). Congruent distractors slowed down saccadic reaction times more than incongruent distractors. The increase of the RDE with target-distractor congruency depended on task demands. The more participants had to rely on the target property to locate the target, the larger the congruency effect. We conclude that the RDE can be modulated in a top-down manner. Alternative explanations such as persisting memory traces for the target property or differences in stimulus arrangement were considered but discarded. Our claim is in line with models of saccade generation which assume that the structures underlying the RDE (e.g. the superior colliculus) receive bottom-up as well as top-down information.

  14. Monte Carlo Solution to Find Input Parameters in Systems Design Problems

    NASA Astrophysics Data System (ADS)

    Arsham, Hossein

    2013-06-01

    Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
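    The goal-seeking problem described, finding the controllable input that drives a noisy performance function to a target value, can be illustrated with a basic Robbins-Monro stochastic approximation. The simulated performance function below is a stand-in; the paper's single-run polynomial response-surface construction is not reproduced.

```python
import random

def noisy_performance(x):
    """Stand-in stochastic simulation: true mean response is 3*x + 2."""
    return 3.0 * x + 2.0 + random.gauss(0.0, 0.5)

def goal_seek(target, x0=0.0, iterations=2000, a=1.0):
    """Robbins-Monro iteration x_{k+1} = x_k + (a/k) * (target - J(x_k)).
    With diminishing step sizes (sum of a/k diverges, sum of (a/k)^2 converges)
    the iterate approaches the input whose mean response equals `target`."""
    x = x0
    for k in range(1, iterations + 1):
        x += (a / k) * (target - noisy_performance(x))
    return x

random.seed(1)
print(goal_seek(target=11.0))   # approaches x = 3, since 3*3 + 2 = 11
```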

  15. Open and closed cortico-subcortical loops: A neuro-computational account of access to consciousness in the distractor-induced blindness paradigm.

    PubMed

    Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H

    2015-09-01

    How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm for investigating the gating of information into consciousness with little or no visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' ability to perceive and report intervals of target dot motion decreases, particularly with short delays between the intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Enhanced reproducibility of L-mode plasma discharges via physics-model-based q-profile feedback control in DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, Eugenio J.; Wehner, William P.; Barton, Joseph E.

    Recent experiments on DIII-D demonstrate the potential of physics-model-based q-profile control to improve reproducibility of plasma discharges. A combined feedforward + feedback control scheme is employed to optimize the current ramp-up phase by consistently achieving target q profiles (Target 1: q_min = 1.3, q95 = 4.4; Target 2: q_min = 1.65, q95 = 5.0; Target 3: q_min = 2.1, q95 = 6.2) at prescribed times during the plasma formation phase (Target 1: t = 1.5 s; Target 2: t = 1.3 s; Target 3: t = 1.0 s). At the core of the control scheme is a nonlinear, first-principles-driven, physics-based, control-oriented model of the plasma dynamics valid for low-confinement (L-mode) scenarios. To prevent undesired L-H transitions, a constraint on the maximum allowable total auxiliary power is imposed in addition to the maximum powers for the individual heating and current-drive sources. Experimental results are presented to demonstrate the effectiveness of the combined feedforward + feedback control scheme in consistently achieving the desired target profiles at the predefined times. These results also show how the addition of feedback control significantly improves upon the feedforward-only control solution by reducing the matching error, and how the feedback controller is able to reduce the matching error further as the constraint on the maximum allowable total auxiliary power is relaxed while keeping the plasma in L-mode.

  17. Enhanced reproducibility of L-mode plasma discharges via physics-model-based q-profile feedback control in DIII-D

    DOE PAGES

    Schuster, Eugenio J.; Wehner, William P.; Barton, Joseph E.; ...

    2017-08-09

    Recent experiments on DIII-D demonstrate the potential of physics-model-based q-profile control to improve reproducibility of plasma discharges. A combined feedforward + feedback control scheme is employed to optimize the current ramp-up phase by consistently achieving target q profiles (Target 1: q_min = 1.3, q95 = 4.4; Target 2: q_min = 1.65, q95 = 5.0; Target 3: q_min = 2.1, q95 = 6.2) at prescribed times during the plasma formation phase (Target 1: t = 1.5 s; Target 2: t = 1.3 s; Target 3: t = 1.0 s). At the core of the control scheme is a nonlinear, first-principles-driven, physics-based, control-oriented model of the plasma dynamics valid for low-confinement (L-mode) scenarios. To prevent undesired L-H transitions, a constraint on the maximum allowable total auxiliary power is imposed in addition to the maximum powers for the individual heating and current-drive sources. Experimental results are presented to demonstrate the effectiveness of the combined feedforward + feedback control scheme in consistently achieving the desired target profiles at the predefined times. These results also show how the addition of feedback control significantly improves upon the feedforward-only control solution by reducing the matching error, and how the feedback controller is able to reduce the matching error further as the constraint on the maximum allowable total auxiliary power is relaxed while keeping the plasma in L-mode.
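    The combined control architecture described in the two records above can be sketched generically: a feedforward actuator trajectory computed offline from the model is corrected online by a feedback term driven by the q-profile matching error, subject to the power constraint. The gains, dimensions, and toy numbers below are invented placeholders, not the DIII-D controller.

```python
import numpy as np

def feedforward_plus_feedback(u_ff, q_target, q_measured, K, u_max):
    """One control step: feedforward command plus proportional feedback on the
    q-profile matching error, saturated at the allowed actuator powers."""
    error = q_target - q_measured        # profile matching error (vector)
    u = u_ff + K @ error                 # actuator commands
    return np.clip(u, 0.0, u_max)

# Toy usage with invented dimensions: 3 actuators, q sampled at 5 radial points.
u_ff = np.array([1.0, 0.5, 2.0])
K = 0.1 * np.ones((3, 5))
q_target = np.array([1.3, 1.8, 2.5, 3.3, 4.4])
q_measured = np.array([1.4, 1.9, 2.4, 3.2, 4.6])
print(feedforward_plus_feedback(u_ff, q_target, q_measured, K, u_max=3.0))
```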

  18. The Underlying Physics in Wetted Particle Collisions

    NASA Astrophysics Data System (ADS)

    Donahue, Carly; Hrenya, Christine; Davis, Robert

    2008-11-01

    Wetted granular particles are relevant in many industries, including the pharmaceutical and chemical industries, with applications to granulation, filtration, coagulation, spray coating, drying, and pneumatic transport. In our current focus, we investigate the dynamics of a three-body normal wetted-particle collision. To conduct collisions we use an apparatus called a "Stokes cradle," similar to a Newton's cradle (the desktop toy) except that the target particles are covered with oil. Here, we are able to vary the oil thickness, oil viscosity, and material properties. With a three-particle collision there are four possible outcomes: fully agglomerated (FA); Newton's cradle (NC), in which the striker and the first target ball agglomerate and the last target ball separates; reverse Newton's cradle (RNC), in which the striker separates and the two targets agglomerate; and fully separated (FS). By varying the properties of the collisions, we have observed all four outcomes. We use elastohydrodynamics as a theoretical basis for modeling the system. We have also considered the glass transition of the oil as the pressure increases upon impact and the cavitation of the oil as the pressure drops below the vapor pressure upon rebound. A toy model has been developed in which the collision is modeled as a series of two-body collisions. A qualitative agreement between the toy model and experiments gives insight into the underlying physics.
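    A toy model of the kind mentioned in the final sentences, treating the three-ball impact as two sequential two-body collisions with a wetness-dependent restitution, might look like the sketch below. The Stokes-number-style restitution law and every parameter value are assumptions for illustration, not the authors' calibrated model; comparing the resulting final velocities (which neighbours end up moving together or apart) is what maps onto the FA/NC/RNC/FS classification.

```python
def wet_restitution(v_impact, e_dry=0.9, st_crit=10.0, st_per_speed=5.0):
    """Hypothetical wet-collision law: below a critical Stokes-like number the
    oil layer dissipates the impact and the pair sticks (e = 0); above it the
    effective restitution approaches the dry value."""
    st = st_per_speed * v_impact
    return 0.0 if st <= st_crit else e_dry * (1.0 - st_crit / st)

def two_body(v1, v2):
    """Equal-mass two-body impact with the wetness-dependent restitution."""
    e = wet_restitution(abs(v1 - v2))
    vc = 0.5 * (v1 + v2)
    return vc - 0.5 * e * (v1 - v2), vc + 0.5 * e * (v1 - v2)

def three_ball_toy(v_striker):
    """Striker hits target 1, then target 1 hits target 2, treated sequentially."""
    v_s, v_t1 = two_body(v_striker, 0.0)
    v_t1, v_t2 = two_body(v_t1, 0.0)
    return v_s, v_t1, v_t2

for v in (0.5, 1.5, 3.0, 8.0):
    print(v, three_ball_toy(v))
```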

  19. Three-dimensional particle simulation of back-sputtered carbon in electric propulsion test facility

    NASA Astrophysics Data System (ADS)

    Zheng, Hongru; Cai, Guobiao; Liu, Lihui; Shang, Shengfei; He, Bijiao

    2017-03-01

    Back-sputtering deposition on the thruster surface, caused by ion bombardment of the chamber wall material, affects thruster performance during ground-based electric propulsion endurance tests. To decrease the back-sputtering deposition, most vacuum chambers used in electric propulsion experiments are equipped with anti-sputtering targets. In this paper, a three-dimensional model of the plume experimental system (PES), including a double-layer anti-sputtering target, is established. Simulation cases are run to model the plasma environment and sputtering effects while an ion thruster is working. The particle-in-cell (PIC) method and the direct simulation Monte Carlo (DSMC) method are used to calculate the velocities and positions of particles. Yamamura's model is used to simulate the sputtering process. The distribution of sputtered anti-sputtering target material is presented. The results show that the double-layer anti-sputtering target can significantly reduce the deposition on the thruster surface. The back-sputtering deposition rates on the thruster exit surface are compared for different cases. The chevrons on the secondary target are rearranged to improve its performance. The position of the secondary target depends on the ion beam divergence angle and the radius of the vacuum chamber, and the back-sputtering deposition rate is lower when the secondary target covers the entire ion beam.

  20. Distancing, not embracing, the Distancing-Embracing model of art reception.

    PubMed

    Davies, Stephen

    2017-01-01

    Despite denials in the target article, the Distancing-Embracing model appeals to compensatory ideas in explaining the appeal of artworks that elicit negative affect. The model also appeals to the deflationary effects of psychological distancing. Having pointed to the famous rejection in the 1960s of the view that aesthetic experience involves psychological distancing, I suggest that "distance" functions here as a weak metaphor that cannot sustain the explanatory burden the theory demands of it.

  1. Kicking the (barking) dog effect: the moderating role of target attributes on triggered displaced aggression.

    PubMed

    Pedersen, William C; Bushman, Brad J; Vasquez, Eduardo A; Miller, Norman

    2008-10-01

    Sometimes aggression is displaced onto a target who is not totally innocent but emits a mildly irritating behavior called a triggering event. In three experiments, the authors examine stable personal attributes of targets that can impact such triggered displaced aggression (TDA). Lower levels of TDA were directed to targets whose attitudes were similar as compared to dissimilar to those of the actor (Experiment 1) and to targets who were ingroup as compared to out-group members (Experiment 2). Conceptually replicating the findings of Experiments 1 and 2, the manipulated valence of the target (viz., liked, neutral, and disliked) functioned in a similar manner, with positive valence serving a buffering function against a triggering action that followed an initial provocation (Experiment 3). The results from all three experiments are consistent with cognitive neoassociationist theory.

  2. Spatial Language and the Embedded Listener Model in Parents’ Input to Children

    PubMed Central

    Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara

    2015-01-01

    Language is a collaborative act: in order to communicate successfully, speakers must generate utterances that are not only semantically valid, but also sensitive to the knowledge state of the listener. Such sensitivity could reflect use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listeners’ conceptual and linguistic knowledge. In this paper, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children’s understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3–4 year-old children encoded relationships in ways that were well-matched to their children’s level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). By contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children’s level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language, and indicate conditions under which the ability to model listener knowledge may be more challenging. PMID:26717804

  3. Spatial Language and the Embedded Listener Model in Parents' Input to Children.

    PubMed

    Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara

    2016-11-01

    Language is a collaborative act: To communicate successfully, speakers must generate utterances that are not only semantically valid but also sensitive to the knowledge state of the listener. Such sensitivity could reflect the use of an "embedded listener model," where speakers choose utterances on the basis of an internal model of the listener's conceptual and linguistic knowledge. In this study, we ask whether parents' spatial descriptions incorporate an embedded listener model that reflects their children's understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3-4-year-old children encoded relationships in ways that were well-matched to their children's level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). In contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children's level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language and indicate conditions under which the ability to model listener knowledge may be more challenging. Copyright © 2015 Cognitive Science Society, Inc.

  4. Early Results from the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Androic, D.; Armstrong, D. S.; Asaturyan, A.; Averett, T.; Balewski, J.; Beaufait, J.; Beminiwattha, R. S.; Benesch, J.; Benmokhtar, F.; Birchall, J.; Carlini, R. D.; Cates, G. D.; Cornejo, J. C.; Covrig, S.; Dalton, M. M.; Davis, C. A.; Deconinck, W.; Diefenbach, J.; Dowd, J. F.; Dunne, J. A.; Dutta, D.; Duvall, W. S.; Elaasar, M.; Falk, W. R.; Finn, J. M.; Forest, T.; Gaskell, D.; Gericke, M. T. W.; Grames, J.; Gray, V. M.; Grimm, K.; Guo, F.; Hoskins, J. R.; Johnston, K.; Jones, D.; Jones, M.; Jones, R.; Kargiantoulakis, M.; King, P. M.; Korkmaz, E.; Kowalski, S.; Leacock, J.; Leckey, J.; Lee, A. R.; Lee, J. H.; Lee, L.; MacEwan, S.; Mack, D.; Magee, J. A.; Mahurin, R.; Mammei, J.; Martin, J.; McHugh, M. J.; Meekins, D.; Mei, J.; Michaels, R.; Micherdzinska, A.; Mkrtchyan, A.; Mkrtchyan, H.; Morgan, N.; Myers, K. E.; Narayan, A.; Ndukum, L. Z.; Nelyubin, V.; Nuruzzaman; van Oers, W. T. H.; Opper, A. K.; Page, S. A.; Pan, J.; Paschke, K.; Phillips, S. K.; Pitt, M. L.; Poelker, M.; Rajotte, J. F.; Ramsay, W. D.; Roche, J.; Sawatzky, B.; Seva, T.; Shabestari, M. H.; Silwal, R.; Simicevic, N.; Smith, G. R.; Solvignon, P.; Spayde, D. T.; Subedi, A.; Subedi, R.; Suleiman, R.; Tadevosyan, V.; Tobias, W. A.; Tvaskis, V.; Waidyawansa, B.; Wang, P.; Wells, S. P.; Wood, S. A.; Yang, S.; Young, R. D.; Zhamkochyan, S.

    2014-03-01

    A subset of results from the recently completed Jefferson Lab Qweak experiment is reported. This experiment, sensitive to physics beyond the Standard Model, exploits the small parity-violating asymmetry in elastic ep scattering to provide the first determination of the proton's weak charge Q_w^p. The experiment employed a 180 μA longitudinally polarized 1.16 GeV electron beam on a 35 cm long liquid hydrogen target. Scattered electrons in the angular range 6° < θ < 12°, corresponding to Q² = 0.025 GeV², were detected in eight Cerenkov detectors arrayed symmetrically around the beam axis. The goals of the experiment were to provide a measure of Q_w^p to 4.2% (combined statistical and systematic error), which implies a measure of sin²(θ_W) at the level of 0.3%, and to help constrain the vector weak quark charges C1u and C1d. The experimental method is described, with particular focus on the challenges associated with the world's highest-power LH2 target. The new constraints on C1u and C1d provided by the subset of the experiment's data analyzed to date will also be shown, together with the extracted weak charge of the neutron.

  5. Affective divergence: automatic responses to others' emotions depend on group membership.

    PubMed

    Weisbuch, Max; Ambady, Nalini

    2008-11-01

    Extant research suggests that targets' emotion expressions automatically evoke similar affect in perceivers. The authors hypothesized that the automatic impact of emotion expressions depends on group membership. In Experiments 1 and 2, an affective priming paradigm was used to measure immediate and preconscious affective responses to same-race or other-race emotion expressions. In Experiment 3, spontaneous vocal affect was measured as participants described the emotions of an ingroup or outgroup sports team fan. In these experiments, immediate and spontaneous affective responses depended on whether the emotional target was ingroup or outgroup. Positive responses to fear expressions and negative responses to joy expressions were observed in outgroup perceivers, relative to ingroup perceivers. In Experiments 4 and 5, discrete emotional responses were examined. In a lexical decision task (Experiment 4), facial expressions of joy elicited fear in outgroup perceivers, relative to ingroup perceivers. In contrast, facial expressions of fear elicited less fear in outgroup than in ingroup perceivers. In Experiment 5, felt dominance mediated emotional responses to ingroup and outgroup vocal emotion. These data support a signal-value model in which emotion expressions signal environmental conditions. (c) 2008 APA, all rights reserved.

  6. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell-length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how well is regional climate represented overall, including errors inherited from the global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data over the period 1979-2008 to eliminate global climate model errors. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded, spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far. The results clearly indicate that for several aspects the downscaling skill varies considerably between different methods, so that for specific purposes some methods can clearly be excluded.

  7. Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.

    2012-01-01

    This paper summarizes the process and methodologies used in the design of a small satellite, DebriSat, that represents the materials and construction methods used in modern-day low Earth orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test with the overall purpose of investigating the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. The design of DebriSat focuses on building a next-generation satellite that more accurately portrays modern satellites. The design process included a comprehensive study of historical LEO satellite designs and missions within the past 15 years, for satellites ranging from 10 kg to 5000 kg. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.

  8. The muon component in extensive air showers and new p+C data in fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meurer, C.; Bluemer, J.; Engel, R.

    2007-03-19

    One of the most promising approaches to determining the energy spectrum and composition of cosmic rays with energies above 10^15 eV is the measurement of the number of electrons and muons produced in extensive air showers (EAS). This requires simulations of air showers using electromagnetic and hadronic interaction models, and these simulations carry uncertainties that come mainly from the hadronic interaction models. One aim of this work is to specify the low-energy hadronic interactions that are important for muon production in EAS. We therefore simulate extensive air showers with a modified version of the simulation package CORSIKA. In particular, we investigate in detail the energy and phase-space regions of secondary particle production which are most important for muon production. This phase-space region is covered by fixed-target experiments at CERN. In the second part of this work we present preliminary momentum spectra of secondary π+ and π− in p+C collisions at 12 GeV/c measured with the HARP spectrometer at the PS accelerator at CERN. In addition, we use the new p+C NA49 data at 158 GeV/c to check the reliability of hadronic interaction models for muon production in EAS. Finally, possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.

  9. Microsputterer with integrated ion-drag focusing for additive manufacturing of thin, narrow conductive lines

    NASA Astrophysics Data System (ADS)

    Kornbluth, Y. S.; Mathews, R. H.; Parameswaran, L.; Racz, L. M.; Velásquez-García, L. F.

    2018-04-01

    We report the design, modelling, and proof-of-concept demonstration of a continuously fed, atmospheric-pressure microplasma metal sputterer that is capable of printing conductive lines narrower than the width of the target without the need for post-processing or lithographic patterning. Ion drag-induced focusing is harnessed to print narrow lines; the focusing mechanism is modelled via COMSOL Multiphysics simulations and validated with experiments. A microplasma sputter head with gold target is constructed and used to deposit imprints with minimum feature sizes as narrow as 9 µm, roughness as small as 55 nm, and electrical resistivity as low as 1.1 µΩ · m.

  10. Policy Transfer via Markov Logic Networks

    NASA Astrophysics Data System (ADS)

    Torrey, Lisa; Shavlik, Jude

    We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
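    The "transfer via demonstration" idea can be sketched without an actual MLN engine: weighted relational rules extracted from the source task score the candidate actions during an initial demonstration period in the target task, after which the target-task learner takes over. The rules, weights, and state features below are invented placeholders, not the relational knowledge learned in the RoboCup experiments, and a plain weighted-rule sum stands in for MLN inference.

```python
# Stand-in for a learned Markov Logic Network: weighted relational rules that
# score (state, action) pairs. Rules and weights are invented examples.
transfer_rules = [
    (lambda s, a: a == "pass" and s["teammate_open"], 1.5),
    (lambda s, a: a == "shoot" and s["distance_to_goal"] < 10, 2.0),
    (lambda s, a: a == "hold" and s["opponent_close"], -1.0),
]

def rule_score(state, action):
    """Sum of weights of the satisfied rules; a real MLN would perform
    relational probabilistic inference here rather than a plain sum."""
    return sum(w for rule, w in transfer_rules if rule(state, action))

def choose_action(state, actions, episode, q_values, demo_episodes=100):
    """Transfer via demonstration: source-task knowledge picks the action for
    the first `demo_episodes` episodes, then the target-task Q-values do."""
    if episode < demo_episodes:
        return max(actions, key=lambda a: rule_score(state, a))
    return max(actions, key=lambda a: q_values.get(a, 0.0))

state = {"teammate_open": True, "distance_to_goal": 25, "opponent_close": False}
print(choose_action(state, ["pass", "shoot", "hold"], episode=3, q_values={}))
```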

  11. Increased subjective experience of non-target emotions in patients with frontotemporal dementia and Alzheimer’s disease

    PubMed Central

    Chen, Kuan-Hua; Lwi, Sandy J.; Hua, Alice Y.; Haase, Claudia M.; Miller, Bruce L.; Levenson, Robert W.

    2017-01-01

    Although laboratory procedures are designed to produce specific emotions, participants often experience mixed emotions (i.e., target and non-target emotions). We examined non-target emotions in patients with frontotemporal dementia (FTD), Alzheimer’s disease (AD), other neurodegenerative diseases, and healthy controls. Participants watched film clips designed to produce three target emotions. Subjective experience of non-target emotions was assessed and emotional facial expressions were coded. Compared to patients with other neurodegenerative diseases and healthy controls, FTD patients reported more positive and negative non-target emotions, whereas AD patients reported more positive non-target emotions. There were no group differences in facial expressions of non-target emotions. We interpret these findings as reflecting deficits in processing interoceptive and contextual information resulting from neurodegeneration in brain regions critical for creating subjective emotional experience. PMID:29457053

  12. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
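    A minimal sketch of the kind of data generator described, zero-inflated counts with block effects in a randomized block comparison of a GM variety and its comparator, is given below. The distributions and every parameter value are illustrative stand-ins; the full simulation model is provided as Supplementary Material to the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(n_blocks=8, mu_gm=5.0, mu_comparator=5.0,
                    block_sd=0.3, dispersion=1.5, p_zero=0.2):
    """Simulate non-target-organism counts in a randomized block trial:
    zero-inflated negative binomial counts (via a gamma-Poisson mixture)
    with log-normal block effects."""
    rows = []
    for b in range(n_blocks):
        block_effect = rng.normal(0.0, block_sd)
        for variety, mu in (("GM", mu_gm), ("comparator", mu_comparator)):
            mean = mu * np.exp(block_effect)
            lam = rng.gamma(shape=dispersion, scale=mean / dispersion)
            count = 0 if rng.random() < p_zero else int(rng.poisson(lam))
            rows.append((b, variety, count))
    return rows

for row in simulate_counts()[:6]:
    print(row)
```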

  13. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325

  14. Methods for Tumor Targeting with Salmonella typhimurium A1-R.

    PubMed

    Hoffman, Robert M; Zhao, Ming

    2016-01-01

    Salmonella typhimurium A1-R (S. typhimurium A1-R) has shown great preclinical promise as a broad-based anti-cancer therapeutic (please see Chapter 1). The present chapter describes materials and methods for the preclinical study of S. typhimurium A1-R in clinically relevant mouse models. Establishment of orthotopic metastatic mouse models of the major cancer types, as well as other useful models, is described for efficacy studies of S. typhimurium A1-R or other tumor-targeting bacteria. Imaging methods are described to visualize GFP-labeled S. typhimurium A1-R, as well as the GFP- and/or RFP-labeled cancer cells that S. typhimurium A1-R targets, in vitro and in vivo. The mouse models include metastasis to major organs that are life-threatening to cancer patients, including the liver, lung, bone, and brain, and how to target these metastases with S. typhimurium A1-R. Various routes of administration of S. typhimurium A1-R are described, with the advantages and disadvantages of each. Basic experiments to determine toxic effects of S. typhimurium A1-R are also described, as are methodologies for combining S. typhimurium A1-R and chemotherapy. The testing of S. typhimurium A1-R on patient tumors in patient-derived orthotopic xenograft (PDOX) mouse models is also described. The major methodologies described in this chapter should be translatable to clinical studies.

  15. A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.

    PubMed

    Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen

    2012-11-01

    This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
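    The reactive obstacle-avoidance component can be sketched independently of the DIRECT/Fuzzy-ARTMAP machinery as a simple direction-blending rule: head toward the target, but add repulsive contributions from obstacles inside an influence radius. The gains, the inverse-square repulsion, and the toy geometry are assumptions for illustration, not the DIRECT-ROAD formulation.

```python
import numpy as np

def avoidance_direction(position, target, obstacles, k_rep=0.05, influence=0.4):
    """Blend a unit vector toward the target with repulsive contributions from
    nearby obstacles, then renormalize to get the commanded movement direction."""
    to_target = target - position
    direction = to_target / np.linalg.norm(to_target)
    for obs in obstacles:
        away = position - obs
        dist = np.linalg.norm(away)
        if dist < influence:
            direction += k_rep * away / dist**3   # inverse-square repulsion
    return direction / np.linalg.norm(direction)

pos = np.array([0.0, 0.0, 0.0])
target = np.array([0.5, 0.0, 0.0])
obstacles = [np.array([0.25, 0.05, 0.0])]
print(avoidance_direction(pos, target, obstacles))  # veers below the obstacle
```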

  16. An exemplar-familiarity model predicts short-term and long-term probe recognition across diverse forms of memory search.

    PubMed

    Nosofsky, Robert M; Cox, Gregory E; Cao, Rui; Shiffrin, Richard M

    2014-11-01

    Experiments were conducted to test a modern exemplar-familiarity model on its ability to account for both short-term and long-term probe recognition within the same memory-search paradigm. Making connections to the literature on attention and visual search, the model was also used to interpret differences in probe-recognition performance across diverse conditions that manipulated the relations between targets and foils across trials. Subjects saw lists of 1 to 16 items followed by a single recognition probe. In a varied-mapping condition, targets and foils could switch roles across trials; in a consistent-mapping condition, targets and foils never switched roles; and in an all-new condition, a completely new set of items formed the memory set on each trial. In the varied-mapping and all-new conditions, mean correct response times (RTs) and error proportions were curvilinear increasing functions of memory set size, with the RT results closely resembling those from the hybrid visual-memory search experiments reported by Wolfe (2012). In the consistent-mapping condition, new-probe RTs were invariant with set size, whereas old-probe RTs increased slightly with increasing study-test lag. With an appropriate choice of psychologically interpretable free parameters, the model accounted well for the complete set of results. The work supports the hypothesis that a common set of processes involving exemplar-based familiarity may govern long-term and short-term probe recognition across a wide variety of memory-search conditions.
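
    For readers unfamiliar with this model class, the following is a minimal summed-similarity sketch of exemplar-based familiarity: each studied item contributes similarity that decays with its distance from the probe, and an "old" response is given when the summed familiarity exceeds a criterion. The published model additionally maps familiarity onto response times through a decision process not reproduced here; the feature representation, decay rate, and criterion below are illustrative assumptions.

    ```python
    import numpy as np

    def familiarity(probe, memory_set, c=2.0):
        """Summed exemplar similarity: every studied item contributes
        exp(-c * distance) to the probe's overall familiarity."""
        distances = np.linalg.norm(memory_set - probe, axis=1)
        return float(np.sum(np.exp(-c * distances)))

    def recognize(probe, memory_set, criterion=0.5):
        """Respond 'old' if summed familiarity exceeds the response criterion."""
        f = familiarity(probe, memory_set)
        return ("old" if f > criterion else "new"), f

    # Illustrative only: items are points in an arbitrary 4-dimensional feature space.
    rng = np.random.default_rng(0)
    memory_set = rng.normal(size=(8, 4))                        # a studied list of 8 items
    old_probe = memory_set[3] + rng.normal(scale=0.05, size=4)  # noisy copy of a studied item
    new_probe = rng.normal(size=4)                              # unstudied foil
    print(recognize(old_probe, memory_set))   # summed familiarity should be high -> "old"
    print(recognize(new_probe, memory_set))   # summed familiarity should be low  -> "new"
    ```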

  17. NMR data-driven structure determination using NMR-I-TASSER in the CASD-NMR experiment

    PubMed Central

    Jang, Richard; Wang, Yan

    2015-01-01

    NMR-I-TASSER, an adaptation of the I-TASSER algorithm that incorporates NMR data for protein structure determination, recently joined the second round of the CASD-NMR experiment. Unlike many molecular dynamics-based methods, NMR-I-TASSER takes a molecular replacement-like approach to the problem by first threading the target through the PDB to identify structural templates, which are then used for iterative NOE assignment and fragment structure assembly refinement. The use of multiple templates allows NMR-I-TASSER to sample different topologies, while convergence to a single structure is not required. Retroactive and blind tests on the CASD-NMR targets from Rounds 1 and 2 demonstrate that, even without NOE peak lists, I-TASSER can generate the correct structural topology, with 15 of 20 targets having a TM-score above 0.5. With the addition of NOE-based distance restraints, NMR-I-TASSER significantly improved the I-TASSER models, with all models having a TM-score above 0.5. The average RMSD was reduced from 5.29 to 2.14 Å in Round 1 and from 3.18 to 1.71 Å in Round 2. There is no obvious difference between the modeling results obtained with raw and refined peak lists, indicating the robustness of the pipeline to NOE assignment errors. Overall, despite its low-resolution modeling, the current NMR-I-TASSER pipeline provides a coarse-grained structure-folding approach complementary to traditional molecular dynamics simulations, quickly producing near-native frameworks for atomic-level structural refinement. PMID:25737244
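
    The RMSD values quoted above are a standard structure-comparison metric rather than part of the NMR-I-TASSER pipeline itself. As a generic, assumed illustration of how such a number is conventionally computed, the snippet below evaluates coordinate RMSD after optimal superposition with the Kabsch algorithm.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two conformations (N x 3 coordinate arrays)
        after optimal superposition via the Kabsch algorithm."""
        P = P - P.mean(axis=0)                 # center both coordinate sets
        Q = Q - Q.mean(axis=0)
        H = P.T @ Q                            # covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against improper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        P_rot = P @ R.T                        # rotate P onto Q
        return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

    # Illustrative check: a rotated copy of the same structure gives RMSD ~ 0.
    rng = np.random.default_rng(1)
    coords = rng.normal(size=(50, 3))
    theta = 0.7
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    print(kabsch_rmsd(coords, coords @ Rz.T))  # ~0
    ```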

  18. Random-walk enzymes.

    PubMed

    Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that would be absent if the substrate DNA were homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
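
    The Monte Carlo sketch below illustrates the loosely coupled ("passive") picture described above: an enzyme lands at a random position on a single strand, scans by an unbiased random walk, and converts unmodified C sites to U with a fixed probability per visit, accumulating a site-specific conversion profile over many molecules. The intrusive model would additionally let each conversion reshape subsequent scanning, and the authors' diagrammatic/path-integral analysis is not reproduced; the sequence, rates, and trajectory lengths are illustrative assumptions.

    ```python
    import random

    def simulate_profile(sequence, n_enzymes=2000, n_steps=500, p_catalysis=0.05, seed=0):
        """Each enzyme performs an unbiased 1D random walk along the strand and,
        whenever it sits on an unconverted C, converts it to U with probability
        p_catalysis per visit (scanning loosely coupled to catalysis).
        Returns per-site conversion counts across all simulated molecules."""
        rng = random.Random(seed)
        n = len(sequence)
        counts = [0] * n
        for _ in range(n_enzymes):
            converted = set()
            pos = rng.randrange(n)                  # random landing site
            for _ in range(n_steps):
                if sequence[pos] == "C" and pos not in converted:
                    if rng.random() < p_catalysis:
                        converted.add(pos)
                pos += rng.choice((-1, 1))          # unbiased scanning step
                if pos < 0 or pos >= n:             # enzyme falls off the strand end
                    break
            for site in converted:
                counts[site] += 1
        return counts

    ssDNA = "ATCCGCTTACGCCATTCGC"
    n_molecules = 2000
    profile = simulate_profile(ssDNA, n_enzymes=n_molecules)
    for i, (base, count) in enumerate(zip(ssDNA, profile)):
        if base == "C":
            print(f"site {i} ({base}): fraction converted = {count / n_molecules:.3f}")
    ```

    Even in this simplified setting, C sites near the strand ends and C sites clustered together show different conversion fractions, giving a flavor of the context dependence the paper quantifies.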

  19. Random-walk enzymes

    PubMed Central

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-01-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that would be absent if the substrate DNA were homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508

  20. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot image both the target celestial body and stars with proper exposure simultaneously because their irradiance levels differ greatly. Multi-sensor integration or complex image-processing algorithms are commonly used to address this problem. This study analyzes and demonstrates the feasibility of imaging the target celestial body and stars, both well exposed, within a single exposure of a single field-of-view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial-body edge model and star-spot imaging model are established for the case in which the WCA scheme is applied. Furthermore, the effect of the exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are then derived via Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and the optimal exposure parameters.
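
    As a generic illustration of the star-centroiding step whose accuracy the exposure parameters influence (the WCA celestial-body edge model and the exposure optimization are not reproduced here), the sketch below computes an intensity-weighted center-of-mass centroid of a star spot within a small pixel window; the synthetic image, window size, and background level are assumptions made for the example.

    ```python
    import numpy as np

    def star_centroid(image, window_center, half_size=3, background=0.0):
        """Intensity-weighted center-of-mass centroid of a star spot inside
        a small window around an approximate peak location (row, column)."""
        r0, c0 = window_center
        rows = slice(r0 - half_size, r0 + half_size + 1)
        cols = slice(c0 - half_size, c0 + half_size + 1)
        patch = np.clip(image[rows, cols].astype(float) - background, 0.0, None)
        total = patch.sum()
        if total == 0:
            return float(r0), float(c0)
        ridx, cidx = np.mgrid[rows, cols]
        return (ridx * patch).sum() / total, (cidx * patch).sum() / total

    # Illustrative: synthetic Gaussian star spot with a known sub-pixel center.
    yy, xx = np.mgrid[0:32, 0:32]
    true_center = (15.3, 16.7)
    spot = 1000.0 * np.exp(-((yy - true_center[0])**2 + (xx - true_center[1])**2) / (2 * 1.2**2))
    image = spot + 5.0                                       # uniform background level
    print(star_centroid(image, (15, 17), background=5.0))    # approx. (15.3, 16.7)
    ```

    In practice the centroiding error grows when the spot is under- or over-exposed, which is why the choice of exposure parameters studied in the paper matters for navigation accuracy.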
