Sample records for targets modeling experiments

  1. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
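    As a rough illustration of the kind of guideline such an analysis produces, the sketch below estimates how many observers are needed to distinguish two task-performance proportions using a standard normal-approximation power calculation; the effect size, significance level, and power are placeholder values, not figures from the paper.

        import math
        from scipy.stats import norm

        def observers_per_group(p1, p2, alpha=0.05, power=0.80):
            """Sample size per group for comparing two proportions
            (normal approximation, two-sided test)."""
            z_a = norm.ppf(1 - alpha / 2)        # significance quantile
            z_b = norm.ppf(power)                # power quantile
            p_bar = (p1 + p2) / 2
            num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                   + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
            return math.ceil(num / (p1 - p2) ** 2)

        # e.g. resolving a 0.70 vs. 0.80 probability of identification
        print(observers_per_group(0.70, 0.80))   # -> 294 per group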

  2. Computational modeling of joint U.S.-Russian experiments relevant to magnetic compression/magnetized target fusion (MAGO/MTF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.

    1997-12-31

    Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.

  3. Numerical Modeling of Complex Targets for High-Energy- Density Experiments with Ion Beams and other Drivers

    DOE PAGES

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; ...

    2016-04-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  4. Validating An Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments

    NASA Astrophysics Data System (ADS)

    Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates. The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF), and the one-sigma depth function (OSDF) (Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data. We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics. Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior and refine the ACM accordingly. In this presentation, we discuss progress in validating and refining the ACM, and we
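    A schematic sketch of how the ACM components described above can be combined into a per-star completeness estimate; the placeholder window function, noise function, and detection-efficiency curve below are illustrative stand-ins, not the mission's calibrated products.

        import numpy as np

        def completeness(period_d, radius_re, rstar_re, window_fn, osdf_ppm, det_eff):
            """Schematic ACM-style completeness: WF(P) x detection efficiency(SNR).

            window_fn(P) -> probability that enough transits fit in the window
            osdf_ppm(P)  -> one-sigma depth function (ppm) at period P
            det_eff(snr) -> detection efficiency versus full-train SNR
            """
            depth_ppm = 1.0e6 * (radius_re / rstar_re) ** 2   # box-transit depth
            snr = depth_ppm / osdf_ppm(period_d)
            return window_fn(period_d) * det_eff(snr)

        # toy placeholder components, for shape only
        wf  = lambda p: np.clip(1.2 - p / 700.0, 0.0, 1.0)    # long-period falloff
        osd = lambda p: 20.0 * np.sqrt(p / 10.0)              # noise grows with period
        eff = lambda s: 1.0 / (1.0 + np.exp(-(s - 7.1)))      # smoothed step near SNR ~ 7.1

        print(completeness(100.0, 2.0, 109.0, wf, osd, eff))  # radii in Earth units; R_sun ~ 109 R_earth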

  5. Target charging in short-pulse-laser-plasma experiments.

    PubMed

    Dubois, J-L; Lubrano-Lavaderci, F; Raffestin, D; Ribolzi, J; Gazave, J; Compant La Fontaine, A; d'Humières, E; Hulin, S; Nicolaï, Ph; Poyé, A; Tikhonchuk, V T

    2014-01-01

    Interaction of high-intensity laser pulses with solid targets results in generation of large quantities of energetic electrons that are the origin of various effects such as intense x-ray emission, ion acceleration, and so on. Some of these electrons escape the target, leaving behind a significant positive electric charge and creating a strong electromagnetic pulse long after the end of the laser pulse. We propose here a detailed model of the target electric polarization induced by a short and intense laser pulse and an escaping electron bunch. A specially designed experiment provides direct measurements of the target polarization and the discharge current as a function of the laser energy, pulse duration, and target size. Large-scale numerical simulations describe the energetic electron generation and their emission from the target. The model, experiment, and numerical simulations demonstrate that the hot-electron ejection may continue long after the laser pulse ends, enhancing significantly the polarization charge.

  6. 3-D Modeling of Planar Target-Mount Perturbation Experiments on OMEGA

    NASA Astrophysics Data System (ADS)

    Collins, T. J. B.; Marshall, F. J.; Marozas, J. A.; Bonino, M. J.; Forties, R.; Goncharov, V. N.; Igumenshchev, I. V.; McKenty, P. W.; Smalyuk, V. A.

    2008-11-01

    OMEGA cryogenic targets are suspended in the target chamber using four spider silks attached to a C-shaped mount. The spider silks are typically composed of two entwined protein strands comparable to 1 μm in diameter. The silks and mount refract the incident laser light and cast shadows on the target surface. Experiments to measure the effects of the silks on target illumination have been performed in planar geometry using silks suspended parallel to a 20-μm-thick laser-driven target. The evolution of the surface perturbations introduced by the silks was measured using x-ray backlighting. The results of these experiments will be compared to simulations performed with DRACO, employing three-dimensional (3-D) planar hydrodynamics and a new 3-D refractive ray-trace package written specifically for this geometry. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.

  7. Fixed-target hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2015-08-01

    Results from fixed-target hadroproduction experiments (HARP, MIPP, NA49 and NA61/SHINE) as well as their implications for cosmic ray and neutrino physics are reviewed. HARP measurements have been used for predictions of neutrino beams in the K2K and MiniBooNE/SciBooNE experiments and are also being used to improve predictions of the muon yields in EAS and of the atmospheric neutrino fluxes, as well as to help in the optimization of neutrino factory and super-beam designs. Recent measurements released by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment and for interpretation of EAS data. These hadroproduction experiments also provide a large amount of input for validation and tuning of hadron production models in Monte-Carlo generators.

  8. Spatial frequency dependence of target signature for infrared performance modeling

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd; Olson, Jeffrey

    2011-05-01

    The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency-dependent and frequency-independent target signature definitions. Finally, task difficulty and its relationship to a target set are discussed.
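    For context, the targeting task performance metric referenced above is usually written as an integral over spatial frequency, which makes explicit where a frequency-dependent (rather than spatially white) target contrast enters; the expression below is the commonly published form, quoted for orientation rather than taken from this paper.

        V = \frac{\sqrt{A_{\mathrm{TGT}}}}{R}\,\int_{\xi_{\mathrm{low}}}^{\xi_{\mathrm{cut}}} \sqrt{\frac{C_{\mathrm{TGT}}(\xi)}{\mathrm{CTF}_{\mathrm{sys}}(\xi)}}\, d\xi

    Here A_TGT is the target area, R the range, CTF_sys the system contrast threshold function, and C_TGT the apparent target contrast, which the standard model treats as a constant but which becomes frequency dependent in the treatment studied above; the integral runs over the frequencies where the target contrast exceeds the threshold.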

  9. Modeling pressure rise in gas targets

    NASA Astrophysics Data System (ADS)

    Jahangiri, P.; Lapi, S. E.; Publicover, J.; Buckley, K.; Martinez, D. M.; Ruth, T. J.; Hoehr, C.

    2017-05-01

    The purpose of this work is to introduce a universal mathematical model to describe gas target behaviour on the steady-state time scale. To this end, an analytical model is proposed to study the pressure rise in the targets used to produce medical isotopes on low-energy cyclotrons. The model is developed based on the assumption that during irradiation the system reaches steady state. The model is verified by experiments performed with different beam currents, gas types, and initial pressures at the 13 MeV cyclotron at TRIUMF. Excellent agreement is achieved.
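    As a hedged illustration of the steady-state idea only (the paper's actual model is more detailed), at fixed target volume and gas inventory the pressure scales with the equilibrium gas temperature obtained by balancing the deposited beam power against heat loss to the target walls; the lumped heat-transfer coefficient below is a placeholder, not a measured value.

        def steady_state_pressure(p0_bar, t0_K, beam_W, hA_W_per_K, t_wall_K=300.0):
            """Ideal-gas, constant-volume estimate of the steady-state target pressure.

            Energy balance:  beam_W = hA * (T_ss - T_wall)  ->  T_ss
            Ideal gas (fixed volume and inventory):  P_ss / P0 = T_ss / T0
            """
            t_ss = t_wall_K + beam_W / hA_W_per_K
            return p0_bar * t_ss / t0_K

        # e.g. 25 bar fill at 300 K, 200 W deposited, assumed hA = 1 W/K
        print(steady_state_pressure(25.0, 300.0, 200.0, 1.0))   # ~ 41.7 bar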

  10. Testing light dark matter coannihilation with fixed-target experiments

    DOE PAGES

    Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan; ...

    2017-09-01

    In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.

  13. An Experiment Quantifying The Effect Of Clutter On Target Detection

    NASA Astrophysics Data System (ADS)

    Weathersby, Marshall R.; Schmieder, David E.

    1985-01-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal-to-clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
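    The clutter definition described above translates directly into a short computation; the sketch below implements it with NumPy. The cell size and the target-contrast convention are illustrative choices (the paper only states that the cell size was comparable to the target size), and some later formulations use the root-mean-square rather than the mean of the cell standard deviations.

        import numpy as np

        def scene_clutter(scene, cell):
            """Clutter = average of the standard deviations of contiguous cells,
            with the cell edge length chosen comparable to the target size."""
            h, w = scene.shape
            stds = [scene[r:r + cell, c:c + cell].std()
                    for r in range(0, h - cell + 1, cell)
                    for c in range(0, w - cell + 1, cell)]
            return float(np.mean(stds))

        def signal_to_clutter(scene, target_mask, cell):
            """SCR = |mean target - mean background| / clutter."""
            contrast = abs(scene[target_mask].mean() - scene[~target_mask].mean())
            return contrast / scene_clutter(scene, cell)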

  14. A Model for the Application of Target-Controlled Intravenous Infusion for a Prolonged Immersive DMT Psychedelic Experience

    PubMed Central

    Gallimore, Andrew R.; Strassman, Rick J.

    2016-01-01

    The state of consciousness induced by N,N-dimethyltryptamine (DMT) is one of the most extraordinary of any naturally-occurring psychedelic substance. Users consistently report the complete replacement of normal subjective experience with a novel “alternate universe,” often densely populated with a variety of strange objects and other highly complex visual content, including what appear to be sentient “beings.” The phenomenology of the DMT state is of great interest to psychology and calls for rigorous academic enquiry. The extremely short duration of DMT effects—less than 20 min—militates against single dose administration as the ideal model for such enquiry. Using pharmacokinetic modeling and DMT blood sampling data, we demonstrate that the unique pharmacological characteristics of DMT, which also include a rapid onset and lack of acute tolerance to its subjective effects, make it amenable to administration by target-controlled intravenous infusion. This is a technology developed to maintain a stable brain concentration of anesthetic drugs during surgery. Simulations of our model demonstrate that this approach will allow research subjects to be induced into a stable and prolonged DMT experience, making it possible to carefully observe its psychological contents, and provide more extensive accounts for subsequent analyses. This model would also be valuable in performing functional neuroimaging, where subjects are required to remain under the influence of the drug for extended periods. Finally, target-controlled intravenous infusion of DMT may aid the development of unique psychotherapeutic applications of this psychedelic agent. PMID:27471468
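    A minimal sketch of the target-controlled infusion idea discussed above: hold the plasma concentration at a set point by pairing a loading bolus with a maintenance rate computed from a compartmental model. A real TCI engine, and the authors' DMT model, use multi-compartment kinetics and drug-specific parameters; the one-compartment form and numbers below are placeholders.

        def tci_one_compartment(target_conc_mg_L, vd_L, clearance_L_min):
            """Loading bolus (mg) and maintenance rate (mg/min) that hold a target
            plasma concentration at steady state in a one-compartment model."""
            bolus_mg = target_conc_mg_L * vd_L                # fill the compartment
            rate_mg_min = target_conc_mg_L * clearance_L_min  # replace what is cleared
            return bolus_mg, rate_mg_min

        # placeholder parameters only -- not DMT pharmacokinetic data
        print(tci_one_compartment(target_conc_mg_L=0.1, vd_L=40.0, clearance_L_min=1.5))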

  16. Ballistic Experiments with Titanium and Aluminum Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gogolewski, R.; Morgan, B.R.

    1999-11-23

    During the course of the project we conducted two sets of fundamental experiments in penetration mechanics in the LLNL Terminal Ballistics Laboratory of the Physics Directorate. The first set of full-scale experiments was conducted with a 14.5 mm air propelled launcher. The object of the experiments was to determine the ballistic limit speed of 6Al-4V-alloy titanium, low fineness ratio projectiles centrally impacting 2024-T3 alloy aluminum flat plates and the failure modes of the projectiles and the targets. The second set of one-third scale experiments was conducted with a 14.5 mm powder launcher. The object of these experiments was to determine the ballistic limit speed of 6Al-4V alloy titanium high fineness ratio projectiles centrally impacting 6Al-4V alloy titanium flat plates and the failure modes of the projectiles and the target. We employed radiography to observe a projectile just before and after interaction with a target plate. Early on, we employed a non-damaging "soft-catch" technique to capture projectiles after they perforated targets. Once we realized that a projectile was not damaged during interaction with a target, we used a 4-inch thick 6061-T6-alloy aluminum witness block with a 6.0-inch x 6.0-inch cross-section to measure projectile residual penetration. We have recorded and tabulated projectile impact speed, projectile residual (post-impact) speed, projectile failure mode, target failure mode, and pertinent comments for the experiments. The ballistic techniques employed for the experiments are similar to those employed in an earlier study.
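    For readers who want to reduce tabulated impact/residual speeds to a ballistic-limit estimate, one common approach (not necessarily the one used in this project) is to fit the Lambert-Jonas form v_r = a*(v_i^p - v_bl^p)^(1/p); the sketch below does this with SciPy on made-up data.

        import numpy as np
        from scipy.optimize import curve_fit

        def lambert_jonas(v_i, a, v_bl, p):
            """Residual velocity: v_r = a * (v_i**p - v_bl**p)**(1/p) above the limit v_bl."""
            return a * np.clip(v_i ** p - v_bl ** p, 0.0, None) ** (1.0 / p)

        # made-up (impact, residual) speeds in m/s, for illustration only
        v_i = np.array([900.0, 1000.0, 1100.0, 1200.0, 1300.0])
        v_r = np.array([  0.0,  310.0,  520.0,  680.0,  820.0])

        popt, _ = curve_fit(lambert_jonas, v_i, v_r, p0=[0.9, 950.0, 2.0])
        print("fitted ballistic limit ~ %.0f m/s" % popt[1])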

  17. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral
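    For reference, the V50 parameter mentioned above enters through the target transfer probability function commonly used in NV-IPM-style models to map resolvable cycles V on target into a task probability; this is the standard published form, quoted here for context rather than extracted from the paper.

        P(V) = \frac{(V/V_{50})^{E}}{1 + (V/V_{50})^{E}}, \qquad E = 1.51 + 0.24\,(V/V_{50})

    so that P = 0.5 when V = V50, with the curve steepening as V grows; fitting V50 to the behavioral data, as done above, shifts this curve without changing its family.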

  18. Modeling the effects of contrast enhancement on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2008-04-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content through better utilization of the available gray levels, either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.

  19. Beauty and charm production at fixed-target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erik E. Gottschalk

    Fixed-target experiments continue to provide insights into the physics of particle production in strong interactions. The experiments are performed with different types of beam particles of varying energies, and many different target materials. Studies of beauty and charm production are of particular interest, since experimental results can be compared to perturbative QCD calculations. It is in this context that recent results from fixed-target experiments on beauty and charm production will be reviewed.

  20. Dynamic model of target charging by short laser pulse interactions

    NASA Astrophysics Data System (ADS)

    Poyé, A.; Dubois, J.-L.; Lubrano-Lavaderci, F.; D'Humières, E.; Bardon, M.; Hulin, S.; Bailly-Grandvaux, M.; Ribolzi, J.; Raffestin, D.; Santos, J. J.; Nicolaï, Ph.; Tikhonchuk, V.

    2015-10-01

    A model providing an accurate estimate of the charge accumulation on the surface of a metallic target irradiated by a high-intensity laser pulse of fs-ps duration is proposed. The model is confirmed by detailed comparisons with specially designed experiments. Such a model is useful for understanding the electromagnetic pulse emission and the quasistatic magnetic field generation in laser-plasma interaction experiments.

  2. Non-Targeted Effects Models Predict Significantly Higher Mars Mission Cancer Risk than Targeted Effects Models

    DOE PAGES

    Cucinotta, Francis A.; Cacao, Eliedonna

    2017-05-12

    Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data set for dose-response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data with animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggests that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth’s geomagnetic sphere.
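    To make the targeted/non-targeted distinction concrete, dose-response models of this kind are often written as a linear (targeted) term plus a non-targeted term that switches on at very low dose and then stays roughly constant, which is what drives the higher predicted low-dose risk; the functional form and numbers below are illustrative placeholders, not the authors' fitted Harderian-gland model.

        import numpy as np

        def tumor_prevalence(dose_Gy, alpha, eta, d0=0.01, background=0.0):
            """Illustrative dose response.

            targeted effects:      alpha * D                 (linear in dose)
            non-targeted effects:  eta * (1 - exp(-D / d0))  (saturates at low dose)
            """
            te = alpha * dose_Gy
            nte = eta * (1.0 - np.exp(-dose_Gy / d0))
            return background + te + nte

        doses = np.array([0.01, 0.05, 0.1, 0.5])
        print(tumor_prevalence(doses, alpha=0.2, eta=0.05))  # NTE term dominates at low dose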

  4. Using Data Independent Acquisition (DIA) to Model High-responding Peptides for Targeted Proteomics Experiments*

    PubMed Central

    Searle, Brian C.; Egertson, Jarrett D.; Bollinger, James G.; Stergachis, Andrew B.; MacCoss, Michael J.

    2015-01-01

    Targeted mass spectrometry is an essential tool for detecting quantitative changes in low abundant proteins throughout the proteome. Although selected reaction monitoring (SRM) is the preferred method for quantifying peptides in complex samples, the process of designing SRM assays is laborious. Peptides have widely varying signal responses dictated by sequence-specific physiochemical properties; one major challenge is in selecting representative peptides to target as a proxy for protein abundance. Here we present PREGO, a software tool that predicts high-responding peptides for SRM experiments. PREGO predicts peptide responses with an artificial neural network trained using 11 minimally redundant, maximally relevant properties. Crucial to its success, PREGO is trained using fragment ion intensities of equimolar synthetic peptides extracted from data independent acquisition experiments. Because of similarities in instrumentation and the nature of data collection, relative peptide responses from data independent acquisition experiments are a suitable substitute for SRM experiments because they both make quantitative measurements from integrated fragment ion chromatograms. Using an SRM experiment containing 12,973 peptides from 724 synthetic proteins, PREGO exhibits a 40–85% improvement over previously published approaches at selecting high-responding peptides. These results also represent a dramatic improvement over the rules-based peptide selection approaches commonly used in the literature. PMID:26100116
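    A minimal sketch of the kind of regressor described above (a small feed-forward network over 11 peptide features predicting relative response); the synthetic feature matrix, architecture, and training data here are placeholders, not PREGO's.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 11))       # 11 physicochemical peptide properties
        y = X @ rng.normal(size=11) + 0.1 * rng.normal(size=5000)   # synthetic responses

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                           random_state=0))
        model.fit(X, y)

        # rank candidate peptides of one protein by predicted response, keep the top few
        candidates = rng.normal(size=(20, 11))
        best = np.argsort(model.predict(candidates))[::-1][:5]
        print(best)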

  5. Modeling criterion shifts and target checking in prospective memory monitoring.

    PubMed

    Horn, Sebastian S; Bayen, Ute J

    2015-01-01

    Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the reasons for this cost effect. This study uses diffusion model analysis to decompose monitoring processes in the PM paradigm. Across 4 experiments, performing a PM task increased latencies in an ongoing lexical decision task. A large portion of this effect was explained by consistent increases in boundary separation; additional increases in nondecision time emerged in a nonfocal PM task and explained variance in PM performance (Experiment 1), likely reflecting a target-checking strategy before and after the ongoing decision (Experiment 2). However, we found that possible target-checking strategies may depend on task characteristics. That is, instructional emphasis on the importance of ongoing decisions (Experiment 3) or the use of focal targets (Experiment 4) eliminated the contribution of nondecision time to the cost of PM, but left participants in a mode of increased cautiousness. The modeling thus sheds new light on the cost effect seen in many PM studies and suggests that people approach ongoing activities more cautiously when they need to remember an intended action.
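    To illustrate why wider decision boundaries produce the latency cost described above, the sketch below simulates a simple drift-diffusion process and compares mean response times for two boundary separations; all parameter values are arbitrary.

        import numpy as np

        def mean_rt(drift=0.25, boundary=1.0, nondecision=0.3,
                    dt=0.001, noise=1.0, n_trials=2000, seed=0):
            """Mean RT (s) of a symmetric diffusion process starting midway
            between the two decision boundaries."""
            rng = np.random.default_rng(seed)
            rts = np.empty(n_trials)
            for i in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < boundary / 2:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts[i] = t + nondecision
            return rts.mean()

        print(mean_rt(boundary=1.0))   # ongoing task alone
        print(mean_rt(boundary=1.4))   # with a PM task: wider boundaries, longer RTs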

  6. Performance of a Liner-on-Target Injector for Staged Z-Pinch Experiments

    NASA Astrophysics Data System (ADS)

    Conti, F.; Valenzuela, J. C.; Narkis, J.; Krasheninnikov, I.; Beg, F.; Wessel, F. J.; Ruskov, E.; Rahman, H. U.; McGee, E.

    2016-10-01

    We present the design and characterization of a compact liner-on-target injector, used in the Staged Z-pinch experiments conducted on the UNR-NTF Zebra Facility. Previous experiments and analysis indicate that high-Z gas liners produce a uniform and efficient implosion on a low-Z target plasma. The liner gas shell is produced by an annular solenoid valve and a converging-diverging nozzle designed to achieve a collimated, supersonic, Mach-5 flow. The on-axis target is produced by a coaxial plasma gun, where a high voltage pulse is applied to ionize neutral gas and accelerate the plasma by the J × B force. Measurements of the liner and target dynamics, resolved by interferometry in space and time, fast imaging, and collection of the emitted light, are presented. The results are compared to the predictions from Computational Fluid Dynamics and MHD simulations that model the injector. Optimization of the design parameters, for upcoming Staged Z-pinch experiments, will be discussed. Advanced Research Projects Agency - Energy, DE-AR0000569.

  7. Investigating the empirical support for therapeutic targets proposed by the temporal experience of pleasure model in schizophrenia: A systematic review.

    PubMed

    Edwards, Clementine J; Cella, Matteo; Tarrier, Nicholas; Wykes, Til

    2015-10-01

    Anhedonia and amotivation are substantial predictors of poor functional outcomes in people with schizophrenia and often present a formidable barrier to returning to work or building relationships. The Temporal Experience of Pleasure Model proposes constructs which should be considered therapeutic targets for these symptoms in schizophrenia e.g. anticipatory pleasure, memory, executive functions, motivation and behaviours related to the activity. Recent reviews have highlighted the need for a clear evidence base to drive the development of targeted interventions. To review systematically the empirical evidence for each TEP model component and propose evidence-based therapeutic targets for anhedonia and amotivation in schizophrenia. Following PRISMA guidelines, PubMed and PsycInfo were searched using the terms "schizophrenia" and "anhedonia". Studies were included if they measured anhedonia and participants had a diagnosis of schizophrenia. The methodology, measures and main findings from each study were extracted and critically summarised for each TEP model construct. 80 independent studies were reviewed and executive functions, emotional memory and the translation of motivation into actions are highlighted as key deficits with a strong evidence base in people with schizophrenia. However, there are many relationships that are unclear because the empirical work is limited by over-general tasks and measures. Promising methods for research which have more ecological validity include experience sampling and behavioural tasks assessing motivation. Specific adaptations to Cognitive Remediation Therapy, Cognitive Behavioural Therapy and the utilisation of mobile technology to enhance representations and emotional memory are recommended for future development.

  8. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
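    A compact sketch of the post-experiment step described above: fit a second-order response surface to designed-experiment data, then maximize, minimize, or hit a target value by optimizing the fitted model over the design region. The design points and responses below are made up for illustration.

        import numpy as np
        from scipy.optimize import minimize

        # made-up central-composite-style design (2 coded factors) and responses
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
                      [1.4, 0], [-1.4, 0], [0, 1.4], [0, -1.4]], dtype=float)
        y = np.array([71.0, 75.0, 76.0, 79.0, 83.0, 82.0, 78.0, 72.0, 80.0, 74.0])

        def quad_terms(x):
            x1, x2 = x
            return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        B = np.array([quad_terms(row) for row in X])
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)       # fitted response surface

        def fitted(x):
            return quad_terms(x) @ beta

        target = 80.0                                      # use -fitted(x) instead to maximize
        res = minimize(lambda x: (fitted(x) - target) ** 2,
                       x0=[0.0, 0.0], bounds=[(-1.4, 1.4)] * 2)
        print("suggested settings:", res.x, "predicted response:", fitted(res.x))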

  9. Beauty and charm production in fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidonakis, Nikolaos; Vogt, Ramona

    We present calculations of NNLO threshold corrections for beauty and charm production in π⁻p and pp interactions at fixed-target experiments. Recent calculations for heavy quark hadroproduction have included next-to-next-to-leading-order (NNLO) soft-gluon corrections [1] to the double differential cross section from threshold resummation techniques [2]. These corrections are important for near-threshold beauty and charm production at fixed-target experiments, including HERA-B and some of the current and future heavy ion experiments.

  10. Comparison of hydrodynamic simulations with two-shockwave drive target experiments

    NASA Astrophysics Data System (ADS)

    Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William

    2015-11-01

    We consider hydrodynamic continuum simulations to mimic ejecta generation in two-shockwave target experiments, where a metallic surface is loaded by two successive shock waves. The time of the second shock in the simulations is chosen to match the experimental amplitudes at the arrival of the second shock. The negative Atwood number (A → −1) of the ejecta simulations leads to two successive phase inversions of the interface, corresponding to the passage of the shocks from the heavy to the light medium in each instance. The metallic phase of the ejecta (solid or liquid) depends on the shock loading pressure in the experiment, and we find that hydrodynamic simulations quantify the liquid-phase ejecta physics with a fair degree of accuracy, where the RM instability is not suppressed by the strength effect. In particular, we find that our results for free surface velocity, maximum ejecta velocity, and maximum ejecta areal density are in excellent agreement with their experimental counterparts, as well as with ejecta models. We also comment on the parametric space in which hydrodynamic simulations can be used to compare with the target experiments.
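    For reference, the Atwood number quoted above is the standard density contrast of the interface, and the early-time Richtmyer-Meshkov growth it controls is often estimated with the impulsive model; both expressions below are textbook forms, not results from this paper.

        A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}, \qquad \dot{a} \approx k\, \Delta u\, A^{+} a_0^{+}

    where ρ1 and ρ2 are the densities on either side of the interface, k is the perturbation wavenumber, Δu the velocity jump imparted by the shock, and A⁺ and a0⁺ the post-shock Atwood number and amplitude. A shock passing from the heavy metal into the light medium gives A → −1, which is why the interface phase-inverts after each shock, as described above.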

  11. Cryogenic Target-Implosion Experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, D.R.; Meyerhofer, D.D.; Sangster, T.C.

    The University of Rochester’s Laboratory for Laser Energetics has been imploding thick cryogenic targets for six years. Improvements in the Cryogenic Target Handling System and the ability to accurately design laser pulse shapes that properly time shocks and minimize electron preheat produced high fuel areal densities in deuterium cryogenic targets (202 ± 7 mg/cm²). The areal density was inferred from the energy loss of secondary protons in the fuel (D2) shell. Targets were driven on a low final adiabat (α = 2) employing techniques to radially grade the adiabat (the highest adiabat at the ablation surface). The ice layer meets the target-design roughness specification for DT ice of 1-μm rms (all modes), while D2 ice layers average 3.0-μm rms roughness. The implosion experiments and the improvements in the quality and understanding of cryogenic targets are presented.

  12. Debiasing affective forecasting errors with targeted, but not representative, experience narratives.

    PubMed

    Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J

    2016-10-01

    To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences.

  13. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based optical proximity correction has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction toward a more process-robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  14. The role of experience in location estimation: Target distributions shift location memory biases.

    PubMed

    Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P

    2010-04-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space.

  15. Injector design for liner-on-target gas-puff experiments

    NASA Astrophysics Data System (ADS)

    Valenzuela, J. C.; Krasheninnikov, I.; Conti, F.; Wessel, F.; Fadeev, V.; Narkis, J.; Ross, M. P.; Rahman, H. U.; Ruskov, E.; Beg, F. N.

    2017-11-01

    We present the design of a gas-puff injector for liner-on-target experiments. The injector is composed of an annular high atomic number (e.g., Ar and Kr) gas and an on-axis plasma gun that delivers an ionized deuterium target. The annular supersonic nozzle injector has been studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated (M > 5), ˜1 cm radius gas profile that satisfies the theoretical requirement for best performance on ˜1-MA current generators. The CFD simulations allowed us to study output density profiles as a function of the nozzle shape, gas pressure, and gas composition. We have performed line-integrated density measurements using a continuous wave (CW) He-Ne laser to characterize the liner gas density. The measurements agree well with the CFD values. We have used a simple snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector.
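    A minimal sketch of the snowplow estimate mentioned above for the coaxial plasma gun: a current sheath sweeps up the fill gas while being pushed by the magnetic pressure behind it. The geometry, fill density, and current waveform below are placeholders, not the experiment's parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        MU0 = 4e-7 * np.pi

        def snowplow(t, y, rho0, r_in, r_out, current):
            """d/dt [m(z) v] = mu0 I(t)^2 ln(r_out/r_in) / (4 pi), with m(z) = rho0 A z."""
            z, v = y
            area = np.pi * (r_out ** 2 - r_in ** 2)
            mass = rho0 * area * z                            # swept-up gas mass
            force = MU0 * current(t) ** 2 * np.log(r_out / r_in) / (4.0 * np.pi)
            dvdt = (force - rho0 * area * v ** 2) / mass      # expanded momentum equation
            return [v, dvdt]

        # placeholder drive: 200 kA quarter-sine over 2 us; 0.5 cm / 2.5 cm electrodes
        I = lambda t: 2.0e5 * np.sin(0.5 * np.pi * t / 2.0e-6)
        sol = solve_ivp(snowplow, (0.0, 2.0e-6), [0.02, 0.0],
                        args=(1.0e-4, 0.005, 0.025, I), max_step=2.0e-8)
        print("sheath speed at the muzzle ~ %.0f km/s" % (sol.y[1, -1] / 1e3))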

  17. Megajoule Dense Plasma Focus Solid Target Experiments

    NASA Astrophysics Data System (ADS)

    Podpaly, Y. A.; Falabella, S.; Link, A.; Povilus, A.; Higginson, D. P.; Shaw, B. H.; Cooper, C. M.; Chapman, S.; Bennett, N.; Sipe, N.; Olson, R.; Schmidt, A. E.

    2016-10-01

    Dense plasma focus (DPF) devices are plasma sources that can produce significant neutron yields from beam into gas interactions. Yield increases, up to approximately a factor of five, have been observed previously on DPFs using solid targets, such as CD2 and D2O ice. In this work, we report on deuterium solid-target experiments at the Gemini DPF. A rotatable target holder and baffle arrangement were installed in the Gemini device which allowed four targets to be deployed sequentially without breaking vacuum. Solid targets of titanium deuteride were installed and systematically studied at a variety of fill pressures, bias voltages, and target positions. Target holder design, experimental results, and comparison to simulations will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344.

  18. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. The number of Gaussian distributions used to learn and update the background model is chosen adaptively for each pixel according to its gray-level convergence. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases such as large gray-level changes, it performs well.
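    A minimal OpenCV sketch of the general approach described above (Gaussian-mixture background modelling followed by morphological clean-up); OpenCV's MOG2 adapts the number of Gaussian components per pixel on its own, and plain morphological opening is used here as a stand-in for the paper's morphological reconstruction. The video path is a placeholder.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("sequence.avi")            # placeholder test video
        backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                                     detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = backsub.apply(frame)                     # 255 = foreground, 127 = shadow
            fg = np.where(fg == 255, 255, 0).astype(np.uint8)   # drop detected shadows
            fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle noise
            cv2.imshow("moving targets", fg)
            if cv2.waitKey(30) == 27:                     # Esc to quit
                break

        cap.release()
        cv2.destroyAllWindows()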

  19. A Target-Aware Texture Mapping for Sculpture Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zhang, F.; Huang, X.; Li, D.; Zhu, Y.

    2017-08-01

    In this paper, we propose a target-aware image-to-model registration method using silhouettes as the matching cues. The target sculpture in a natural environment can be automatically detected in an image with a complex background with the assistance of 3D geometric data. The silhouette can then be automatically extracted and applied to image-to-model matching. Because the user does not need to deliberately outline the target area, the time required for precise image-to-model matching is greatly reduced. To enhance the method, we also improved the silhouette matching algorithm to support conditional silhouette matching. Two experiments, using a Ming Dynasty stone lion sculpture and a portable relic in a museum, are presented to evaluate the proposed method. The method has been extended and developed into mature software applied in many cultural heritage documentation projects.

  20. Algorithm research on infrared imaging target extraction based on GAC model

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Fan, Youchen; Wang, Yanqing

    2016-10-01

    Good target detection and tracking techniques are important for increasing infrared target detection distance and enhancing resolution capability. For the target detection problem in infrared imaging, the basic principles of the level-set method and the GAC model are first analyzed in detail. Second, a "convergent force" term is added to build an improved GAC model, addressing the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the fact that the approximate position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. To verify the effectiveness of the model, two groups of experiments were carried out on images with different noise levels, and a comparative analysis was conducted with the LBF and LIF models. The experimental results show that, under slight noise, the target can be locked well by the LIF and LBF algorithms, with a segmentation accuracy above 0.8. Under strong noise, however, the target and noise cannot be distinguished by the GAC, LIF, and LBF algorithms, so many non-target regions are extracted during the iterative process and the segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the target position accurately, with a segmentation accuracy above 0.8.
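    A hedged sketch of the Sobel-plus-GAC combination using scikit-image's morphological geodesic active contour; the balloon term plays the role of the "convergent force" discussed above, the Sobel-based initialization is an assumption for illustration, and the input file name is a placeholder.

        import numpy as np
        from skimage import io, img_as_float
        from skimage.filters import gaussian, sobel
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        image = img_as_float(io.imread("ir_frame.png", as_gray=True))   # placeholder path

        # Rough target position from the Sobel gradient magnitude
        edges = sobel(gaussian(image, sigma=2))
        init = edges > 0.5 * edges.max()                  # coarse initial level set

        # GAC evolution; the balloon force pushes the contour outward/inward and
        # helps it reach deep concave edges
        gimage = inverse_gaussian_gradient(image)
        mask = morphological_geodesic_active_contour(gimage, 200, init_level_set=init,
                                                     smoothing=1, balloon=1,
                                                     threshold="auto")
        print("segmented target pixels:", int(mask.sum()))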

  1. New designs of LMJ targets for early ignition experiments

    NASA Astrophysics Data System (ADS)

    C-Clérouin, C.; Bonnefille, M.; Dattolo, E.; Fremerye, P.; Galmiche, D.; Gauthier, P.; Giorla, J.; Laffite, S.; Liberatore, S.; Loiseau, P.; Malinie, G.; Masse, L.; Poggi, F.; Seytor, P.

    2008-05-01

    The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 40 laser quads, delivering up to 1.4 MJ and 380 TW. New targets that require less laser energy, with only a small decrease in robustness, are therefore being designed for this purpose. A first strategy is to use scaled-down cylindrical hohlraums and capsules, taking advantage of our improved understanding of the problem, based on theoretical modelling, simulations and experiments. Another strategy is to work specifically on the coupling efficiency parameter, i.e., the ratio of the energy absorbed by the capsule to the laser energy, which, together with parametric instabilities, is a crucial weakness of indirect drive. An alternative design is proposed, consisting of the nominal 60-quad capsule, named A1040, in a rugby-shaped hohlraum. Robustness evaluations of these different targets are in progress.

  2. Comparison of hydrodynamic simulations with two-shockwave drive target experiments

    NASA Astrophysics Data System (ADS)

    Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William

    2015-11-01

    We consider hydrodynamic continuum simulations that mimic ejecta generation in two-shockwave target experiments, in which a metallic surface is loaded by two successive shock waves. The time of the second shock in the simulations is chosen to match the experimental amplitudes at the arrival of the second shock. The negative Atwood number (A → -1) of the ejecta simulations leads to two successive phase inversions of the interface, corresponding to the passage of the shocks from the heavy to the light medium in each instance. The metallic phase of the ejecta (solid or liquid) depends on the shock loading pressure in the experiment, and we find that hydrodynamic simulations capture the liquid-phase ejecta physics with a fair degree of accuracy, where the RM instability is not suppressed by strength effects. In particular, our results for free-surface velocity, maximum ejecta velocity, and maximum ejecta areal density are in excellent agreement with their experimental counterparts, as well as with ejecta models. We also comment on the region of parameter space in which hydrodynamic simulations can be compared with the target experiments. This work was supported in part by the (U.S.) Department of Energy (DOE) under Contract No. DE-AC52-06NA25396.
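
    For context, the classical impulsive (Richtmyer) estimate of the Richtmyer-Meshkov growth rate that such simulations are often checked against is da/dt ≈ k a0 A Δu, with k the perturbation wavenumber, a0 the post-shock amplitude, A the Atwood number, and Δu the velocity jump of the interface. A quick evaluation with invented, order-of-magnitude numbers (not the experimental values):

    ```python
    import numpy as np

    def rm_growth_rate(wavelength_m, a0_m, atwood, delta_u_m_s):
        """Richtmyer impulsive estimate of the RM growth rate: k * a0 * A * du."""
        k = 2.0 * np.pi / wavelength_m
        return k * a0_m * atwood * delta_u_m_s

    # Illustrative numbers only: 50 um wavelength, 2 um post-shock amplitude,
    # |A| ~ 1 (metal/vacuum-like interface), 1 km/s velocity jump.
    print(rm_growth_rate(50e-6, 2e-6, 1.0, 1.0e3), "m/s")   # ~251 m/s
    ```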

  3. Target selection biases from recent experience transfer across effectors.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2016-02-01

    Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogeneous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent of their associated actions.

  4. NDCX-II target experiments and simulations

    DOE PAGES

    Barnard, J. J.; More, R. M.; Terry, M.; ...

    2013-06-13

    The ion accelerator NDCX-II is undergoing commissioning at Lawrence Berkeley National Laboratory (LBNL). Its principal mission is to explore ion-driven High Energy Density Physics (HEDP) relevant to Inertial Fusion Energy (IFE) especially in the Warm Dense Matter (WDM) regime. We have carried out hydrodynamic simulations of beam-heated targets for parameters expected for the initial configuration of NDCX-II. For metal foils of order one micron thick (thin targets), the beam is predicted to heat the target in a timescale comparable to the hydrodynamic expansion time for experiments that infer material properties from measurements of the resulting rarefaction wave. We have also carried out hydrodynamic simulations of beam heating of metallic foam targets several tens of microns thick (thick targets) in which the ion range is shorter than the areal density of the material. In this case shock waves will form and we derive simple scaling laws for the efficiency of conversion of ion energy into kinetic energy of fluid flow. Geometries with a tamping layer may also be used to study the merging of a tamper shock with the end-of-range shock. As a result, this process can occur in tamped, direct drive IFE targets.

  5. Modeling and Depletion Simulations for a High Flux Isotope Reactor Cycle with a Representative Experiment Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandler, David; Betzler, Ben; Hirtz, Gregory John

    2016-09-01

    The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on time-dependent isotopic composition of irradiated fuel and single cycle isotope production metrics.

  6. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition problem. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image with a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. The model can predict fixations on the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are used to train the model directly from eye movement data. The results show that our model achieves better performance than visual attention models and indicate the plausibility of utilizing visual tracking data to identify targets.
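
    The inference step above relies on Viterbi decoding over hidden states. As a minimal illustration of that principle (in one dimension rather than the paper's two-dimensional formulation, and with made-up transition and emission matrices), a numpy Viterbi decoder looks like this:

    ```python
    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden-state sequence for a discrete HMM (log domain)."""
        n_states, T = trans_p.shape[0], len(obs)
        logd = np.full((T, n_states), -np.inf)
        back = np.zeros((T, n_states), dtype=int)
        logd[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            scores = logd[t - 1][:, None] + np.log(trans_p)   # (from, to)
            back[t] = np.argmax(scores, axis=0)
            logd[t] = scores[back[t], np.arange(n_states)] + np.log(emit_p[:, obs[t]])
        path = [int(np.argmax(logd[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Two hidden states ("background", "salient"), three observation symbols.
    trans = np.array([[0.9, 0.1], [0.2, 0.8]])
    emit = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 0, 2, 2, 1], np.array([0.8, 0.2]), trans, emit))
    ```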

  7. Modeling Hohlraum-Based Laser Plasma Instability Experiments

    NASA Astrophysics Data System (ADS)

    Meezan, N. B.

    2005-10-01

    Laser fusion targets must control laser-plasma instabilities (LPI) in order to perform as designed. We present analyses of recent hohlraum LPI experiments from the Omega laser facility. The targets, gold hohlraums filled with gas or SiO2 foam, are preheated by several 3ω beams before an interaction beam (2ω or 3ω) is fired along the hohlraum axis. The experiments are simulated in 2-D and 3-D using the code HYDRA. The choice of electron thermal conduction model in HYDRA strongly affects the simulated plasma conditions. This work is part of a larger effort to systematically explore the usefulness of linear gain as a design tool for fusion targets. We find that the measured Raman and Brillouin backscatter scale monotonically with the peak linear gain calculated for the target; however, linear gain is not sufficient to explain all trends in the data. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48.

  8. The polarised internal target for the PAX experiment

    NASA Astrophysics Data System (ADS)

    Ciullo, G.; Barion, L.; Barschel, C.; Grigoriev, K.; Lenisa, P.; Nass, A.; Sarkadi, J.; Statera, M.; Steffens, E.; Tagliente, G.

    2011-05-01

    The PAX (Polarized Antiproton eXperiment) collaboration aims to polarise antiproton beams stored in a ring by means of spin filtering. The experimental setup is based on a polarised internal gas target, surrounded by a detection system for the measurement of spin observables. In this report, we present results from the commissioning of the PAX target (atomic beam source, openable cell, and polarimeter).

  9. Modelling debris and shrapnel generation in inertial confinement fusion experiments

    DOE PAGES

    Eder, D. C.; Fisher, A. C.; Koniges, A. E.; ...

    2013-10-24

    Modelling and mitigation of damage are crucial for safe and economical operation of high-power laser facilities. Experiments at the National Ignition Facility use a variety of targets with a range of laser energies spanning more than two orders of magnitude (~14 kJ to ~1.9 MJ). Low-energy inertial confinement fusion experiments are used to study early-time x-ray load symmetry on the capsule, shock timing, and other physics issues. For these experiments, a significant portion of the target is not completely vaporized and late-time (hundreds of ns) simulations are required to study the generation of debris and shrapnel from these targets. Damage to optics and diagnostics from shrapnel is a major concern for low-energy experiments. Here, we provide the first full-target simulations of entire cryogenic targets, including the Al thermal mechanical package and Si cooling rings. We use a 3D multi-physics multi-material hydrodynamics code, ALE-AMR, for these late-time simulations. The mass, velocity, and spatial distribution of shrapnel are calculated for three experiments with laser energies ranging from 14 to 250 kJ. We calculate damage risk to optics and diagnostics for these three experiments. For the lowest energy re-emit experiment, we provide a detailed analysis of the effects of shrapnel impacts on optics and diagnostics and compare with observations of damage sites.

  10. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

    Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. There were 23 subjects who participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In an urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In a rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 POs among 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by peripheral angle and entrance speed of POs. Fast moving objects were detected faster than slower objects and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
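
    A sketch of the kind of second-order mixed-effects fit described above, predicting reaction time from eccentricity angle (with a quadratic term) and entrance speed with a per-subject random intercept. It uses statsmodels on synthetic data, since the study's data are not available; all coefficients in the data generator are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: 23 subjects, reaction time grows with eccentricity
    # angle (quadratically) and shrinks with object speed.
    rng = np.random.default_rng(1)
    n_subj, n_trials = 23, 40
    rows = []
    for s in range(n_subj):
        subj_offset = rng.normal(0.0, 0.05)
        angle = rng.uniform(5, 60, n_trials)          # degrees from gaze center
        speed = rng.uniform(1, 15, n_trials)          # deg/s entrance speed
        rt = (0.4 + subj_offset + 0.004 * angle + 0.0001 * angle**2
              - 0.01 * speed + rng.normal(0, 0.05, n_trials))
        rows.append(pd.DataFrame({"subject": s, "angle": angle,
                                  "speed": speed, "rt": rt}))
    data = pd.concat(rows, ignore_index=True)

    # Second-order fixed effects in angle, linear in speed, random intercept per subject.
    model = smf.mixedlm("rt ~ angle + I(angle**2) + speed", data, groups=data["subject"])
    print(model.fit().summary())
    ```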

  11. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell-length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how good is the overall representation of regional climate, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data over the period 1979-2008 to eliminate global climate model errors. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded, spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far.
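
    As a small illustration of the kinds of validation indices listed above, the snippet below computes a marginal index (mean bias) and a temporal index (mean wet-spell length) for a daily precipitation series against observations; both series are synthetic placeholders rather than VALUE data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def mean_wet_spell_length(precip_mm, wet_threshold=1.0):
        """Average length (days) of consecutive runs of wet days (precip >= threshold)."""
        wet = precip_mm >= wet_threshold
        spells, run = [], 0
        for w in wet:
            if w:
                run += 1
            elif run:
                spells.append(run); run = 0
        if run:
            spells.append(run)
        return float(np.mean(spells)) if spells else 0.0

    # Synthetic 30-year daily series (placeholders for station obs and a downscaled run).
    n_days = 30 * 365
    obs = rng.gamma(shape=0.4, scale=6.0, size=n_days)       # mm/day
    model = rng.gamma(shape=0.4, scale=6.8, size=n_days)     # slightly too wet

    print(f"mean bias:            {model.mean() - obs.mean():+.2f} mm/day")
    print(f"wet-spell length obs: {mean_wet_spell_length(obs):.2f} days, "
          f"model: {mean_wet_spell_length(model):.2f} days")
    ```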

  12. [Model and analysis of spectropolarimetric BRDF of painted target based on GA-LM method].

    PubMed

    Chen, Chao; Zhao, Yong-Qiang; Luo, Li; Pan, Quan; Cheng, Yong-Mei; Wang, Kai

    2010-03-01

    Models based on microfacet theory were used to describe the spectropolarimetric BRDF (bidirectional reflectance distribution function) with experimental data. The spectropolarimetric BRDF values of targets were measured relative to a standard whiteboard, which was treated as Lambertian with a uniform reflectance of up to 98% at an arbitrary viewing angle. The relationships between the measured spectropolarimetric BRDF values and the viewing angles, as well as the wavelengths in the range 400-720 nm, were then analyzed in detail. The initial values required by the LM optimization method are difficult to obtain and strongly affect the results. Therefore, an optimization approach combining a genetic algorithm (GA) with Levenberg-Marquardt (LM) was used to retrieve the parameters of the nonlinear models, with the initial values obtained by the GA. Simulated experiments were used to test the efficiency of the adopted optimization method and confirmed that it performs well and can retrieve the parameters of the nonlinear model efficiently. The correctness of the models was validated with real outdoor sampled data. The parameter retrieved from the DoP model is the refractive index of the measured targets, and the refractive indices of targets painted the same color but made of different materials were also obtained. The refractive indices of these two targets are very close, and the slight difference can be attributed to differences in the condition of the painted surfaces rather than to the target materials.
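
    A minimal sketch of the two-stage fitting strategy described above, using scipy's differential evolution as a stand-in for the genetic algorithm to seed a Levenberg-Marquardt refinement. The model function and data are synthetic placeholders, not the paper's microfacet BRDF or DoP model.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, least_squares

    # Placeholder nonlinear model: y = a * exp(-b * x) + c (stands in for the BRDF model).
    def model(params, x):
        a, b, c = params
        return a * np.exp(-b * x) + c

    def residuals(params, x, y):
        return model(params, x) - y

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 5.0, 60)
    y = model([2.0, 1.3, 0.4], x) + 0.02 * rng.standard_normal(x.size)

    # Stage 1: global search (GA-like) for a good initial guess.
    bounds = [(0.1, 5.0), (0.1, 5.0), (-1.0, 1.0)]
    coarse = differential_evolution(lambda p: np.sum(residuals(p, x, y) ** 2),
                                    bounds, seed=0)

    # Stage 2: local Levenberg-Marquardt refinement from that guess.
    fit = least_squares(residuals, coarse.x, args=(x, y), method="lm")
    print("GA/DE guess:", np.round(coarse.x, 3), " LM refined:", np.round(fit.x, 3))
    ```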

  13. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, with a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially and have movement time components dependent on the difficulty of the individual component of the task. In some cases, the task time may be controlled by the Fitts task difficulty, and in others it may be dominated by the Drury task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted tracking was developed.

  14. Top-attack modeling and automatic target detection using synthetic FLIR scenery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2004-09-01

    A series of experiments has been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include: MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect embedded synthetic targets in scenes, and an ARL scoring algorithm, using Receiver-Operating-Characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that, using these tools, a detailed, physically meaningful target detection analysis is possible and that scenario-specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competing detection algorithms by providing well-defined and controllable target detection scenarios, as well as for the training and testing of expert human observers.

  15. Validating the random search model for two targets of different difficulty.

    PubMed

    Chan, Alan H S; Yu, Ruifeng

    2010-02-01

    A random visual search model was fitted to 1,788 search times obtained from a nonidentical double-target search task. 30 Hong Kong Chinese (13 men, 17 women) ages 18 to 33 years (M = 23, SD = 6.8) took part in the experiment voluntarily. The overall adequacy and prediction accuracy of the model for various search time parameters (mean and median search times and response times) for both individual and pooled data show that search strategy may reasonably be inferred from search time distributions. The results also suggested the general applicability of the random search model for describing the search behavior of a large number of participants performing the type of search used here, as well as the practical feasibility of its application for determining a stopping policy for optimization of an inspection system design. Although the data generally conformed to the model, the search for the more difficult target was faster than expected. The more difficult target was usually detected after the easier target, and it is suggested that some degree of memory-guided searching may have been used for the second target. Some abnormally long search times were observed, and it is possible that these might have been due to the characteristics of visual lobes, nonoptimum interfixation distances and inappropriate overlapping of lobes, as has been previously reported.
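
    As a reminder of what the random search model predicts: if each fixation independently detects the target with probability p, the number of fixations to detection is geometric and search times are approximately exponentially distributed. A short simulation with invented p values and fixation duration illustrates the expected difference between an easy and a difficult target:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_search_times(p_detect, n_trials=10_000, fixation_s=0.3):
        """Random-search model: fixations are independent Bernoulli trials,
        so fixations-to-detection is geometric and search time ~ exponential."""
        n_fixations = rng.geometric(p_detect, size=n_trials)
        return n_fixations * fixation_s

    easy = simulate_search_times(p_detect=0.25)   # easier target
    hard = simulate_search_times(p_detect=0.10)   # more difficult target
    print("mean search time   easy: %.2f s  hard: %.2f s" % (easy.mean(), hard.mean()))
    print("median search time easy: %.2f s  hard: %.2f s"
          % (np.median(easy), np.median(hard)))
    ```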

  16. Asymmetries in visual search for conjunctive targets.

    PubMed

    Cohen, A

    1993-08-01

    Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for two levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.

  17. GE781: a Monte Carlo package for fixed target experiments

    NASA Astrophysics Data System (ADS)

    Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.

    The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.

  18. A cryogenic target for Compton scattering experiments at HIγS

    DOE PAGES

    Kendellen, D. P.; Ahmed, M. W.; Baird, E.; ...

    2016-10-06

    We have developed a cryogenic target for use at the High Intensity γ-ray Source (HIγS). The target system is able to liquefy 4He at 4 K, hydrogen at 20 K, and deuterium at 23 K to fill a 0.3 L Kapton cell. Liquid temperatures and condenser pressures are recorded throughout each run in order to ensure that the target's areal density is known to ~1%. The target is being utilized in a series of experiments which probe the electromagnetic polarizabilities of the nucleon.

  19. Experience of targeting subsidies on insecticide-treated nets: what do we know and what are the knowledge gaps?

    PubMed

    Worrall, Eve; Hill, Jenny; Webster, Jayne; Mortimer, Julia

    2005-01-01

    Widespread coverage of vulnerable populations with insecticide-treated nets (ITNs) constitutes an important component of the Roll Back Malaria (RBM) strategy to control malaria. The Abuja Targets call for 60% coverage of children under 5 years of age and pregnant women by 2005; but current coverage in Africa is unacceptably low. The RBM 'Strategic Framework for Coordinated National Action in Scaling-up Insecticide-Treated Netting Programmes in Africa' promotes coordinated national action and advocates sustained public provision of targeted subsidies to maximise public health benefits, alongside support and stimulation of the private sector. Several countries have already planned or initiated targeted subsidy schemes either on a pilot scale or on a national scale, and have valuable experience which can inform future interventions. The WHO RBM 'Workshop on mapping models for delivering ITNs through targeted subsidies' held in Zambia in 2003 provided an opportunity to share and document these country experiences. This paper brings together experiences presented at the workshop with other information on experiences of targeting subsidies on ITNs, net treatment kits and retreatment services (ITN products) in order to describe alternative approaches, highlight their similarities and differences, outline lessons learnt, and identify gaps in knowledge. We find that while there is a growing body of knowledge on different approaches to targeting ITN subsidies, there are significant gaps in knowledge in crucial areas. Key questions regarding how best to target, how much it will cost and what outcomes (levels of coverage) to expect remain unanswered. High quality, well-funded monitoring and evaluation of alternative approaches to targeting ITN subsidies is vital to develop a knowledge base so that countries can design and implement effective strategies to target ITN subsidies.

  20. Total variation-based method for radar coincidence imaging with model mismatch for extended target

    NASA Astrophysics Data System (ADS)

    Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang

    2017-11-01

    Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as desired; unfortunately, such precision is almost impossible to achieve because of model mismatch in practical applications. Although some conventional sparse recovery algorithms have been proposed to handle the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore derive the signal model of RCI with model mismatch by replacing the sparsity constraint with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A robust and efficient algorithm called TV-TLS is proposed, in which the objective function is split into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, because TV regularization can recover sparse signals or images with sparse gradients, the TV-TLS method is also applicable to sparse recovery. Numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves good imaging performance both in suppressing noise and in adapting to model mismatch.

  1. Target acquisition modeling over the exact optical path: extending the EOSTAR TDA with the TOD sensor performance model

    NASA Astrophysics Data System (ADS)

    Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.

    2017-10-01

    The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for the sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) target acquisition model applies the sensor and signal-processing effects to a set of input triangle test-pattern images, judges their orientation using human observers or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model with EOSTAR therefore makes it possible to model target acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target, the combined effects of the environment (atmosphere, background, etc.), sensor, and signal processing on the image are calculated using EOSTAR, and the results are judged by human observers. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD model and the EOSTAR model is indeed possible: the resulting images look natural and provide insight into the possibilities of combining the two models, the TOD observation task can be performed well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera modeled in the ECOMOS project.

  2. Physics opportunities with a fixed target experiment at the LHC (AFTER@LHC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadjidakis, Cynthia; Anselmino, Mauro; Arnaldi, R.

    By extracting the beam with a bent crystal or by using an internal gas target, the multi-TeV proton and lead LHC beams allow one to perform the most energetic fixed-target experiments (AFTER@LHC) and to study p+p and p+A collisions at √s_NN = 115 GeV and Pb+p and Pb+A collisions at √s_NN = 72 GeV. Such studies would address open questions in the domain of the nucleon and nucleus partonic structure at high-x, quark-gluon plasma and, by using longitudinally or transversally polarised targets, spin physics. In this paper, we discuss the physics opportunities of a fixed-target experiment at the LHC and we report on the possible technical implementations of a high-luminosity experiment. We finally present feasibility studies for Drell-Yan, open heavy-flavour and quarkonium production, with an emphasis on high-x and spin physics.
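
    The quoted centre-of-mass energies follow from the fixed-target relation √s_NN ≈ √(2 E_beam m_N) for a beam nucleon of energy E_beam striking a nucleon at rest (nucleon masses neglected next to E_beam); a quick check:

    ```python
    import math

    M_N = 0.938  # nucleon mass, GeV

    def sqrt_s_nn(beam_energy_gev: float) -> float:
        """Per-nucleon centre-of-mass energy for a fixed-target collision,
        neglecting the nucleon masses relative to the beam energy."""
        return math.sqrt(2.0 * beam_energy_gev * M_N)

    print(f"7 TeV protons on a fixed target:       sqrt(s_NN) ~ {sqrt_s_nn(7000):.0f} GeV")  # ~115
    print(f"2.76 TeV/nucleon Pb on a fixed target: sqrt(s_NN) ~ {sqrt_s_nn(2760):.0f} GeV")  # ~72
    ```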

  3. Target modelling for SAR image simulation

    NASA Astrophysics Data System (ADS)

    Willis, Chris J.

    2014-10-01

    This paper examines target models that might be used in simulations of Synthetic Aperture Radar imagery. We examine the basis for scattering phenomena in SAR, and briefly review the Swerling target model set, before considering extensions to this set discussed in the literature. Methods for simulating and extracting parameters for the extended Swerling models are presented. It is shown that in many cases the more elaborate extended Swerling models can be represented, to a high degree of fidelity, by simpler members of the model set. Further, it is shown that it is quite unlikely that these extended models would be selected when fitting models to typical data samples.
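
    As a concrete reminder of what the classical Swerling cases assume, Swerling I/II targets have exponentially distributed RCS while Swerling III/IV follow a chi-squared distribution with four degrees of freedom. The sketch below draws samples from both and recovers the mean and normalised variance by the method of moments; the mean RCS value is purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    mean_rcs = 2.0          # true mean radar cross-section, m^2 (illustrative)
    n = 100_000

    # Swerling I/II: RCS ~ exponential(mean).
    sw1 = rng.exponential(scale=mean_rcs, size=n)

    # Swerling III/IV: RCS ~ chi-squared with 4 dof, i.e. gamma(shape=2, scale=mean/2).
    sw3 = rng.gamma(shape=2.0, scale=mean_rcs / 2.0, size=n)

    for name, samples in [("Swerling I/II", sw1), ("Swerling III/IV", sw3)]:
        est_mean = samples.mean()
        rel_var = samples.var() / est_mean**2     # ~1.0 for I/II, ~0.5 for III/IV
        print(f"{name}: estimated mean = {est_mean:.2f} m^2, "
              f"normalised variance = {rel_var:.2f}")
    ```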

  4. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

    Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), calls for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser is dependent on their slowing and stopping in the dense material and their need to draw a return current. These effects will heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target will begin to play a role as the pressure generated from isochoric heating begins to take effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle in cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, elucidate on the advantages of a robustly integrated code suite compared to a single code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.

  5. All-optical atom trap as a target for MOTRIMS-like collision experiments

    NASA Astrophysics Data System (ADS)

    Sharma, S.; Acharya, B. P.; De Silva, A. H. N. C.; Parris, N. W.; Ramsey, B. J.; Romans, K. L.; Dorn, A.; de Jesus, V. L. B.; Fischer, D.

    2018-04-01

    Momentum-resolved scattering experiments with laser-cooled atomic targets have been performed for almost two decades with magneto-optical trap recoil ion momentum spectroscopy (MOTRIMS) setups. Compared to experiments with gas-jet targets, MOTRIMS features significantly lower target temperatures, allowing for an excellent recoil-ion momentum resolution. However, the coincident and momentum-resolved detection of electrons was long rendered impossible by incompatible magnetic field requirements. Here we report on an experimental approach based on an all-optical 6Li atom trap that, in contrast to magneto-optical traps, does not require magnetic field gradients in the trapping region. Atom temperatures of about 2 mK and number densities up to 10^9 cm^-3 make this trap ideally suited for momentum-resolved electron-ion coincidence experiments. The overall configuration of the trap is very similar to that of conventional magneto-optical traps; it mainly requires small modifications of the laser beam geometries and polarization, which makes it easily implementable in other existing MOTRIMS experiments.

  6. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions.

    PubMed

    Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco

    2004-04-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mousebutton. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
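
    The two internal models contrasted above make different time-to-contact (TTC) predictions from the same initial view of the target: for a target first seen at distance d with downward speed v, constant-velocity extrapolation gives TTC = d/v, whereas the gravity model gives TTC = (-v + sqrt(v^2 + 2gd))/g. A quick comparison with illustrative numbers:

    ```python
    import math

    g = 9.81  # m/s^2

    def ttc_constant_velocity(d: float, v: float) -> float:
        """Time to contact assuming the target keeps its current speed."""
        return d / v

    def ttc_gravity(d: float, v: float) -> float:
        """Time to contact assuming the target accelerates under gravity."""
        return (-v + math.sqrt(v * v + 2.0 * g * d)) / g

    # Illustrative case: target seen 1.5 m above the interception point, moving at 2 m/s.
    d, v = 1.5, 2.0
    print(f"constant-velocity model: {ttc_constant_velocity(d, v):.3f} s")  # 0.750 s
    print(f"gravity model:           {ttc_gravity(d, v):.3f} s")            # ~0.386 s
    ```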

  7. Targeting the Poor: Evidence from a Field Experiment in Indonesia

    PubMed Central

    Alatas, Vivi; Banerjee, Abhijit; Hanna, Rema; Olken, Benjamin A.; Tobias, Julia

    2014-01-01

    This paper reports an experiment in 640 Indonesian villages on three approaches to target the poor: proxy-means tests (PMT), where assets are used to predict consumption; community targeting, where villagers rank everyone from richest to poorest; and a hybrid. Defining poverty based on PPP$2 per-capita consumption, community targeting and the hybrid perform somewhat worse in identifying the poor than PMT, though not by enough to significantly affect poverty outcomes for a typical program. Elite capture does not explain these results. Instead, communities appear to apply a different concept of poverty. Consistent with this finding, community targeting results in higher satisfaction. PMID:25197099

  8. Development And Characterization Of A Liner-On-Target Injector For Staged Z-Pinch Experiments

    NASA Astrophysics Data System (ADS)

    Valenzuela, J. C.; Conti, F.; Krasheninnikov, I.; Narkis, J.; Beg, F.; Wessel, F. J.; Rahman, H. U.

    2016-10-01

    We present the design and optimization of a liner-on-target injector for Staged Z-pinch experiments. The injector is composed of an annular high atomic number (e.g. Ar, Kr) gas-puff and an on-axis plasma gun that delivers the ionized deuterium target. The liner nozzle injector has been carefully studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated 1 cm radius gas profile that satisfies the theoretical requirement for best performance on the 1 MA Zebra current driver. The CFD simulations produce density profiles as a function of the nozzle shape and gas. These profiles are initialized in the MHD MACH2 code to find the optimal liner density for a stable, uniform implosion. We use a simple Snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector. We have performed line-integrated density measurements using a CW He-Ne laser to characterize the liner gas and the plasma gun density as a function of time. The measurements are compared with models and calculations and benchmarked accordingly. Advanced Research Projects Agency - Energy, DE-AR0000569.
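
    A minimal sketch of the snowplow picture mentioned above for a coaxial gun: a current sheath of accumulating mass is pushed by the axial magnetic force F = (mu0 I^2 / 4 pi) ln(b/a) while sweeping up fill gas of density rho0 between electrode radii a and b. Every parameter value below is invented for illustration and is not a Zebra or injector value.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    MU0 = 4e-7 * np.pi
    a, b = 0.01, 0.03        # inner/outer electrode radii, m (illustrative)
    rho0 = 1e-3              # fill-gas mass density, kg/m^3 (illustrative)
    I = 1.0e5                # constant drive current, A (illustrative)
    area = np.pi * (b**2 - a**2)
    force = MU0 * I**2 / (4.0 * np.pi) * np.log(b / a)   # axial J x B force on the sheath

    def snowplow(t, y):
        """State y = [z, v, m]: sheath position, velocity, and swept-up mass."""
        z, v, m = y
        dm_dt = rho0 * area * v              # gas swept up by the moving sheath
        dv_dt = (force - v * dm_dt) / m      # d(mv)/dt = F  =>  m dv/dt = F - v dm/dt
        return [v, dv_dt, dm_dt]

    m0 = rho0 * area * 1e-3                  # small seed mass to avoid division by zero
    sol = solve_ivp(snowplow, (0.0, 2e-6), [0.0, 0.0, m0], max_step=1e-8)
    print(f"after 2 us: position = {sol.y[0, -1] * 100:.1f} cm, "
          f"velocity = {sol.y[1, -1] / 1e3:.1f} km/s")
    ```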

  9. Implosion and heating experiments of fast ignition targets by Gekko-XII and LFEX lasers

    NASA Astrophysics Data System (ADS)

    Shiraga, H.; Fujioka, S.; Nakai, M.; Watari, T.; Nakamura, H.; Arikawa, Y.; Hosoda, H.; Nagai, T.; Koga, M.; Kikuchi, H.; Ishii, Y.; Sogo, T.; Shigemori, K.; Nishimura, H.; Zhang, Z.; Tanabe, M.; Ohira, S.; Fujii, Y.; Namimoto, T.; Sakawa, Y.; Maegawa, O.; Ozaki, T.; Tanaka, K. A.; Habara, H.; Iwawaki, T.; Shimada, K.; Key, M.; Norreys, P.; Pasley, J.; Nagatomo, H.; Johzaki, T.; Sunahara, A.; Murakami, M.; Sakagami, H.; Taguchi, T.; Norimatsu, T.; Homma, H.; Fujimoto, Y.; Iwamoto, A.; Miyanaga, N.; Kawanaka, J.; Kanabe, T.; Jitsuno, T.; Nakata, Y.; Tsubakimoto, K.; Sueda, K.; Kodama, R.; Kondo, K.; Morio, N.; Matsuo, S.; Kawasaki, T.; Sawai, K.; Tsuji, K.; Murakami, H.; Sarukura, N.; Shimizu, T.; Mima, K.; Azechi, H.

    2013-11-01

    The FIREX-1 project, whose goal is to demonstrate fuel heating up to 5 keV by the fast ignition scheme, has been carried out since 2003, including construction and tuning of the LFEX laser and integrated experiments. Implosion and heating experiments of fast ignition targets have been performed since 2009 with the Gekko-XII and LFEX lasers. A deuterated polystyrene shell target was imploded with the 0.53-μm Gekko-XII, and the 1.053-μm beam of the LFEX laser was injected through a gold cone attached to the shell to generate hot electrons that heat the imploded fuel plasma. The pulse contrast ratio of the LFEX beam was significantly improved. A variety of plasma diagnostic instruments were also developed to be compatible with the harsh environment of intense hard x-rays (γ rays) and electromagnetic pulses produced by the intense LFEX beam on the target. Large background signals around the DD neutron signal in the time-of-flight record of the neutron detector were found to consist of neutrons from (γ,n) reactions and scattered gamma rays. Enhanced neutron yield was confirmed by carefully eliminating such backgrounds, with neutron enhancement up to 3.5 × 10^7 observed. The heating efficiency was estimated to be 10-20% assuming a uniform temperature-rise model.

  10. Modeling of Dense Plasma Effects in Short-Pulse Laser Experiments

    NASA Astrophysics Data System (ADS)

    Walton, Timothy; Golovkin, Igor; Macfarlane, Joseph; Prism Computational Sciences, Madison, WI Team

    2016-10-01

    Warm and Hot Dense Matter produced in short-pulse laser experiments can be studied with new high resolving power x-ray spectrometers. Data interpretation implies accurate modeling of the early-time heating dynamics and the radiation conditions that are generated. Producing synthetic spectra requires a model that describes the major physical processes that occur inside the target, including the hot-electron generation and relaxation phases and the effect of target heating. An important issue concerns the sensitivity of the predicted K-line shifts to the continuum lowering model that is used. We will present a set of PrismSPECT spectroscopic simulations using various continuum lowering models: Hummer/Mihalas, Stewart-Pyatt, and Ecker-Kroll and discuss their effect on the formation of K-shell features. We will also discuss recently implemented models for dense plasma shifts for H-like, He-like and neutral systems.

  11. Multispectral infrared target detection: phenomenology and modeling

    NASA Astrophysics Data System (ADS)

    Cederquist, Jack N.; Rogne, Timothy J.; Schwartz, Craig R.

    1993-10-01

    Many targets of interest provide only very small signature differences from the clutter background. The ability to detect these small difference targets should be improved by using data which is diverse in space, time, wavelength or some other observable. Target materials often differ from background materials in the variation of their reflectance or emittance with wavelength. A multispectral sensor is therefore considered as a means to improve detection of small signal targets. If this sensor operates in the thermal infrared, it will not need solar illumination and will be useful at night as well as during the day. An understanding of the phenomenology of the spectral properties of materials and an ability to model and simulate target and clutter signatures is needed to understand potential target detection performance from multispectral infrared sensor data. Spectral variations in material emittance are due to vibrational energy transitions in molecular bonds. The spectral emittances of many materials of interest have been measured. Examples are vegetation, soil, construction and road materials, and paints. A multispectral infrared signature model has been developed which includes target and background temperature and emissivity, sky, sun, cloud and background irradiance, multiple reflection effects, path radiance, and atmospheric attenuation. This model can be used to predict multispectral infrared signatures for small signal targets.
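
    The signature model described above combines emitted and reflected components; a stripped-down version of that radiometric bookkeeping (one opaque diffuse surface, one background temperature, no atmospheric path term) follows directly from the Planck function. The emissivities and temperatures below are placeholders:

    ```python
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance, W / (m^2 sr m)."""
        return (2.0 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    def band_radiance(emissivity, t_surface_k, t_background_k, band_m, n=2000):
        """Emitted plus reflected in-band radiance of an opaque, diffuse surface:
        L = eps * B(T_surface) + (1 - eps) * B(T_background), integrated over the band."""
        wl = np.linspace(band_m[0], band_m[1], n)
        emitted = emissivity * planck_radiance(wl, t_surface_k)
        reflected = (1.0 - emissivity) * planck_radiance(wl, t_background_k)
        return np.sum(emitted + reflected) * (wl[1] - wl[0])   # W / (m^2 sr)

    lwir = (8e-6, 12e-6)
    # Placeholder materials: painted metal (eps ~ 0.92) vs. soil (eps ~ 0.95).
    print("target    :", band_radiance(0.92, 300.0, 290.0, lwir))
    print("background:", band_radiance(0.95, 295.0, 290.0, lwir))
    ```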

  12. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
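
    For reference, the signal detection theory benchmark that the extended Guided Search model is compared against computes the probability of correctly localizing the target among M locations from the detectability d' as Pc = ∫ φ(x - d') Φ(x)^(M-1) dx; a direct numerical evaluation:

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    def prob_correct_localization(d_prime: float, n_locations: int) -> float:
        """SDT prediction for M-alternative target localization: the target location
        wins when its (Gaussian) response exceeds all M-1 distractor responses."""
        integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
        value, _ = quad(integrand, -np.inf, np.inf)
        return value

    for d_prime in (0.5, 1.0, 2.0):
        print(f"d' = {d_prime}: P(correct, M=8) = "
              f"{prob_correct_localization(d_prime, 8):.3f}")
    ```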

  13. Target Recognition Using Neural Networks for Model Deformation Measurements

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.; Hibler, David L.

    1999-01-01

    Optical measurements provide a non-invasive method for measuring deformation of wind tunnel models. Model deformation systems use targets mounted or painted on the surface of the model to identify known positions, and photogrammetric methods are used to calculate 3-D positions of the targets on the model from digital 2-D images. Under ideal conditions, the reflective targets are placed against a dark background and provide high-contrast images, aiding in target recognition. However, glints of light reflecting from the model surface, or reduced contrast caused by light source or model smoothness constraints, can compromise accurate target determination using current algorithmic methods. This paper describes a technique using a neural network and image processing technologies which increases the reliability of target recognition systems. Unlike algorithmic methods, the neural network can be trained to identify the characteristic patterns that distinguish targets from other objects of similar size and appearance and can adapt to changes in lighting and environmental conditions.

  14. Impact cratering in viscous targets - Laboratory experiments

    NASA Technical Reports Server (NTRS)

    Greeley, R.; Fink, J.; Snyder, D. B.; Gault, D. E.; Guest, J. E.; Schultz, P. H.

    1980-01-01

    To determine the effects of target yield strength and viscosity on the formation and morphology of Martian multilobed, slosh and rampart-type impact craters, 75 experiments in which target properties and impact energies were varied were carried out for high-speed motion picture observation in keeping with the following sequence: (1) projectile initial impact; (2) crater excavation and rise of ejecta plume; (3) formation of a transient central mound which generates a surge of material upon collapse that can partly override the plume deposit; and (4) oscillation of the central mound with progressively smaller surges of material leaving the crater. A dimensional analysis of the experimental results indicates that the dimensions of the central mound are proportional to (1) the energy of the impacting projectile and (2) the inverse of both the yield strength and viscosity of the target material, and it is determined that extrapolation of these results to large Martian craters requires an effective surface layer viscosity of less than 10^10 poise. These results may also be applicable to impacts on outer planet satellites composed of ice-silicate mixtures.

  15. Increased subjective experience of non-target emotions in patients with frontotemporal dementia and Alzheimer’s disease

    PubMed Central

    Chen, Kuan-Hua; Lwi, Sandy J.; Hua, Alice Y.; Haase, Claudia M.; Miller, Bruce L.; Levenson, Robert W.

    2017-01-01

    Although laboratory procedures are designed to produce specific emotions, participants often experience mixed emotions (i.e., target and non-target emotions). We examined non-target emotions in patients with frontotemporal dementia (FTD), Alzheimer’s disease (AD), other neurodegenerative diseases, and healthy controls. Participants watched film clips designed to produce three target emotions. Subjective experience of non-target emotions was assessed and emotional facial expressions were coded. Compared to patients with other neurodegenerative diseases and healthy controls, FTD patients reported more positive and negative non-target emotions, whereas AD patients reported more positive non-target emotions. There were no group differences in facial expressions of non-target emotions. We interpret these findings as reflecting deficits in processing interoceptive and contextual information resulting from neurodegeneration in brain regions critical for creating subjective emotional experience. PMID:29457053

  16. Penetration experiments in aluminum 1100 targets using soda-lime glass projectiles

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Cintala, Mark J.; Bernhard, Ronald P.; Cardenas, Frank; Davidson, William E.; Haynes, Gerald; See, Thomas H.; Winkler, Jerry L.

    1995-01-01

    The cratering and penetration behavior of annealed aluminum 1100 targets, with thickness varied from several centimeters to ultra-thin foils less than 1 micrometer thick, were experimentally investigated using 3.2 mm diameter spherical soda-lime glass projectiles at velocities from 1 to 7 km/s. The objective was to establish quantitative, dimensional relationships between initial impact conditions (impact velocity, projectile diameter, and target thickness) and the diameter of the resulting crater or penetration hole. Such dimensional relationships and calibration experiments are needed to extract the diameters and fluxes of hypervelocity particles from space-exposed surfaces and to predict the performance of certain collisional shields. The cratering behavior of aluminum 1100 is fairly well predicted. However, crater depth is modestly deeper for our silicate impactors than the canonical value based on aluminum projectiles and aluminum 6061-T6 targets. The ballistic-limit thickness was also different. These differences attest to the great sensitivity of detailed crater geometry and penetration behavior on the physical properties of both the target and impactor. Each penetration experiment was equipped with a witness plate to monitor the nature of the debris plume emanating from the rear of the target. This plume consists of both projectile fragments and target debris. Both penetration hole and witness-plate spray patterns systematically evolve in response to projectile diameter/target thickness. The relative dimensions of the projectile and target totally dominate the experimental products documented in this report; impact velocity is an important contributor as well to the evolution of penetration holes, but is of subordinate significance for the witness-plate spray patterns.

  17. Prediction of homoprotein and heteroprotein complexes by protein docking and template‐based modeling: A CASP‐CAPRI experiment

    PubMed Central

    Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey

    2016-01-01

    ABSTRACT We present the results for CAPRI Round 30, the first joint CASP‐CAPRI experiment, which brought together experts from the protein structure prediction and protein–protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact‐sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology‐built subunit models and the smaller pair‐wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27122118

  18. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.

    PubMed

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiling of drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. It is therefore highly important to predict the DTI profiling of drugs reliably and rapidly on a genome-scale level. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models, and ensemble learning over these fingerprints was provided to improve the prediction ability. When the user submits a molecule, the server predicts the activity of the molecule across 623 human proteins using the established high-quality SAR models, generating a DTI profiling that can be used as a feature vector of chemicals for a wide range of applications. The 623 SAR models, one per human protein, were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiling to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, which demonstrates the wide application value of the DTI profiling. The TargetNet webserver is designed based on the Django framework in Python and is freely accessible at http://targetnet.scbdd.com .
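
    A minimal sketch of the fingerprint-plus-Naive-Bayes recipe the server is built on, here with RDKit Morgan fingerprints and scikit-learn's BernoulliNB on a toy dataset. The SMILES strings, labels, and the choice of RDKit are illustrative assumptions, not TargetNet's actual training pipeline.

    ```python
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.naive_bayes import BernoulliNB

    def morgan_bits(smiles: str, n_bits: int = 2048) -> np.ndarray:
        """ECFP-like Morgan fingerprint as a dense 0/1 numpy vector."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    # Toy training set: a few actives (1) and inactives (0) for one hypothetical target.
    smiles = ["CCO", "CCN", "CCOC", "c1ccccc1", "c1ccccc1O", "c1ccccc1N"]
    labels = [0, 0, 0, 1, 1, 1]
    X = np.vstack([morgan_bits(s) for s in smiles])

    clf = BernoulliNB().fit(X, labels)
    query = "c1ccccc1C"   # toluene, structurally closer to the "active" aromatics
    print("P(active) =", clf.predict_proba(morgan_bits(query).reshape(1, -1))[0, 1])
    ```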

  20. Target and Tissue Selectivity Prediction by Integrated Mechanistic Pharmacokinetic-Target Binding and Quantitative Structure Activity Modeling.

    PubMed

    Vlot, Anna H C; de Witte, Wilhelmus E A; Danhof, Meindert; van der Graaf, Piet H; van Westen, Gerard J P; de Lange, Elizabeth C M

    2017-12-04

    Selectivity is an important attribute of effective and safe drugs, and prediction of in vivo target and tissue selectivity would likely improve drug development success rates. However, a lack of understanding of the underlying (pharmacological) mechanisms and of directly applicable predictive methods complicates the prediction of selectivity. We explore the value of combining physiologically based pharmacokinetic (PBPK) modeling with quantitative structure-activity relationship (QSAR) modeling to predict the influence of the target dissociation constant (KD) and the target dissociation rate constant (koff) on target and tissue selectivity. The KD values of CB1 ligands in the ChEMBL database are predicted by QSAR random forest (RF) modeling for the CB1 receptor and known off-targets (TRPV1, mGlu5, 5-HT1a). Of these CB1 ligands, rimonabant, CP-55940, and Δ8-tetrahydrocannabinol, one of the active ingredients of cannabis, were selected for simulations of target occupancy for CB1, TRPV1, mGlu5, and 5-HT1a in three brain regions, to illustrate the principles of the combined PBPK-QSAR modeling. Our combined PBPK and target binding modeling demonstrated that the optimal values of KD and koff for target and tissue selectivity depend on target concentration and tissue distribution kinetics. Interestingly, if the target concentration is high and the perfusion of the target site is low, the optimal KD value is often not the lowest KD value, suggesting that optimization towards high drug-target affinity can decrease the benefit-risk ratio. The presented integrative structure-pharmacokinetic-pharmacodynamic modeling provides an improved understanding of tissue and target selectivity.
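
    The role of KD and koff in target occupancy can be illustrated with a generic drug-target binding kinetics model. The sketch below, which assumes NumPy and SciPy and uses purely hypothetical rate constants and concentrations, is not the authors' PBPK-QSAR model; it only shows how occupancy follows from the binding equations that such models embed.

      # Generic drug-target binding kinetics showing how KD (= koff/kon) and koff enter
      # target occupancy. A toy sketch with hypothetical values, not the authors'
      # PBPK-QSAR model; assumes NumPy and SciPy.
      import numpy as np
      from scipy.integrate import solve_ivp

      k_on = 1.0e6     # 1/(M*s), association rate constant (hypothetical)
      k_off = 1.0e-3   # 1/s, dissociation rate constant -> KD = k_off / k_on = 1 nM
      R_tot = 10.0e-9  # M, total target concentration at the site (hypothetical)
      k_el = 1.0e-4    # 1/s, first-order loss of free drug from the target site

      def rhs(t, y):
          """y = [free drug concentration, drug-target complex concentration]."""
          c, rl = y
          binding = k_on * c * (R_tot - rl) - k_off * rl
          return [-k_el * c - binding, binding]

      sol = solve_ivp(rhs, (0.0, 6 * 3600.0), [50.0e-9, 0.0], max_step=10.0)
      occupancy = sol.y[1] / R_tot
      print(f"peak target occupancy: {occupancy.max():.2f}")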

  1. Experiment research on infrared targets signature in mid and long IR spectral bands

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting

    2013-09-01

    Since infrared imaging systems play a significant role in military self-defense and fire-control systems, the radiation signature of IR targets has become an important topic in IR imaging applications. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, a practical measurement campaign was carried out under different backgrounds and conditions. An infrared radiation acquisition system based on a cooled MWIR thermal imager and a cooled LWIR thermal imager was developed to capture digital infrared images, and additional instruments were introduced to provide the other required parameters. From the original image data and the related parameters for a given scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative analysis of one experiment is presented in this paper as an example. This practical experiment validated the approach, which is useful for detection performance evaluation and further target identification research.

  2. [Passive ranging of infrared target using oxygen A-band and Elsasser model].

    PubMed

    Li, Jin-Hua; Wang, Zhao-Ba; Wang, Zhi

    2014-09-01

    A short-range, single-band passive ranging method was developed based on target radiation and the attenuation characteristics of oxygen spectral absorption. The relation between the transmittance of the oxygen A band and the range of the measured target was analyzed. The radiation intensity distribution of the measured target can be obtained from the distribution law of the absorption coefficient with environmental parameters. A passive ranging mathematical model for short ranges was established using the Elsasser model with a Lorentz line shape, based on computational methods for band-averaged transmittance and a narrowband model of high-temperature gas radiation. The range of the measured object was obtained by fitting the transmittance calculated from the test data to the theoretical model. In addition, the ranging precision was corrected for the influence of oxygen absorption under varying environmental parameters. A ranging experiment platform was established: the source was a 10 W blackbody, and a grating spectrometer with 17 cm(-1) resolution was used. To improve the light-collection efficiency, the input light was collected with a 23 mm aperture telescope. Test data were processed for different ranges within 200 m. The results show that the transmittance accuracy at short range was better than 2.18% when the predicted values were compared with test data under the same conditions.
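
    The core of the method is inverting a band-averaged transmittance model to recover range. The sketch below, which assumes NumPy and SciPy, uses a simple placeholder transmittance function with a hypothetical effective absorption coefficient rather than the Elsasser band model of the paper; it only illustrates the numerical inversion step.

      # Sketch of the ranging principle: band-averaged O2 A-band transmittance decreases
      # monotonically with path length, so a measured transmittance can be inverted for
      # range. tau_band() is a simple placeholder decay, NOT the Elsasser band model of
      # the paper, and the coefficient is hypothetical. Assumes NumPy and SciPy.
      import numpy as np
      from scipy.optimize import brentq

      k_eff = 0.012  # 1/m, hypothetical effective band absorption coefficient

      def tau_band(r_m):
          """Placeholder band-averaged transmittance over a horizontal path of r_m meters."""
          return np.exp(-k_eff * r_m)

      def range_from_transmittance(tau_meas, r_max=500.0):
          """Numerically invert tau_band to recover the target range in meters."""
          return brentq(lambda r: tau_band(r) - tau_meas, 1e-3, r_max)

      tau_measured = 0.30  # hypothetical band transmittance retrieved from the spectrum
      print(f"estimated range: {range_from_transmittance(tau_measured):.1f} m")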

  3. A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention

    PubMed Central

    Strauss, Soeren; Heinke, Dietmar

    2012-01-01

    The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827
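
    The dynamic field theory representation mentioned above can be sketched with a one-dimensional Amari-type field. The code below, assuming only NumPy and using hypothetical parameters and inputs, relaxes such a field under a stronger "target" input and a weaker "distractor" input; it is an illustrative sketch, not the authors' robot controller.

      # Minimal 1-D dynamic neural field (Amari-type) of the kind referenced above
      # (dynamic field theory) for representing a movement parameter such as reach
      # direction. Illustrative sketch only, not the authors' robot controller; all
      # parameters and inputs are hypothetical. Assumes NumPy.
      import numpy as np

      n, dt, tau, h = 181, 1.0, 20.0, -5.0      # field size, time step, time constant, resting level
      x = np.arange(n)                          # field dimension (e.g., reach direction)
      u = np.full(n, h, dtype=float)            # field activation

      def f(v):                                 # sigmoidal output nonlinearity
          return 1.0 / (1.0 + np.exp(-v))

      kern = np.exp(-np.arange(-15, 16) ** 2 / (2 * 4.0 ** 2))   # local excitatory kernel
      c_inh = 0.5                                                # global inhibition strength

      # Two localized inputs: a stronger "target" bump and a weaker "distractor" bump.
      s = 12.0 * np.exp(-(x - 120) ** 2 / 50.0) + 9.0 * np.exp(-(x - 60) ** 2 / 50.0)

      for _ in range(500):   # Euler integration of tau * du/dt = -u + h + s + interaction
          interaction = np.convolve(f(u), kern, mode="same") - c_inh * f(u).sum()
          u += dt / tau * (-u + h + s + interaction)

      print("field peak (favored movement parameter):", int(np.argmax(u)))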

  4. Targeting the link between loneliness and paranoia via an interventionist-causal model framework.

    PubMed

    Gollwitzer, Anton; Wilczynska, Magdalena; Jaya, Edo S

    2018-05-01

    Targeting the antecedents of paranoia may be one potential method to reduce or prevent paranoia. For instance, targeting a potential antecedent of paranoia - loneliness - may reduce paranoia. Our first research question was whether loneliness heightens subclinical paranoia and whether negative affect may mediate this effect. Second, we wondered whether this potential effect could be targeted via two interventionist pathways in line with an interventionist-causal model approach: (1) decreasing loneliness, and (2) intervening on the potential mediator - negative affect. In Study 1 (N = 222), recollecting an experience of companionship reduced paranoia in participants high in pre-manipulation paranoia but not in participants low in pre-manipulation paranoia. Participants recollecting an experience of loneliness, on the other hand, exhibited increased paranoia, and this effect was mediated by negative affect. In Study 2 (N = 196), participants who utilized an emotion-regulation strategy, cognitive reappraisal, to regulate the negative affect associated with loneliness successfully attenuated the effect of loneliness on paranoia. Targeting the effect of loneliness on paranoia by identifying interventionist pathways may be one promising route for reducing and preventing subclinical paranoia. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Experimental design and data analysis of Ago-RIP-Seq experiments for the identification of microRNA targets.

    PubMed

    Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger

    2017-03-31

    The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for genome-wide miRNA target identification have been developed in recent years; however, they have several limitations, including dependence on low-confidence prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms taking into account the experimental design and the inherent noise of such experiments are largely lacking. Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. The statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them in a real-data simulation study using plasmode data sets and evaluate the suitability of the approaches for detecting true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches like linear regression models on (appropriately) transformed read count data are preferable. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
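
    The two classes of approaches compared in this record can be contrasted on simulated data. The sketch below, which assumes NumPy and statsmodels and uses simulated (not plasmode) counts, fits a negative-binomial GLM to raw counts and an ordinary linear model to log-transformed counts for a single toy IP-versus-input contrast.

      # Toy contrast of the two model classes compared above: a negative-binomial GLM on
      # raw read counts versus an ordinary linear model on log-transformed counts.
      # Simulated data only (not plasmode data); assumes NumPy and statsmodels.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n_genes = 1000
      is_target = rng.random(n_genes) < 0.1                    # 10% true miRNA targets
      baseline = rng.gamma(shape=2.0, scale=50.0, size=n_genes)
      enrichment = np.where(is_target, 3.0, 1.0)               # targets enriched in the Ago-IP fraction

      def nb_counts(mean, disp=0.3):
          """Draw negative-binomial counts with the given mean and dispersion."""
          p = 1.0 / (1.0 + disp * mean)
          return rng.negative_binomial(1.0 / disp, p)

      ip_counts = nb_counts(baseline * enrichment)             # immunoprecipitate library
      input_counts = nb_counts(baseline)                       # matched input library

      # Pooled toy illustration: model counts ~ condition (IP vs input).
      y = np.concatenate([ip_counts, input_counts])
      condition = np.concatenate([np.ones(n_genes), np.zeros(n_genes)])
      X = sm.add_constant(condition)

      nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
      lm_fit = sm.OLS(np.log1p(y), X).fit()
      print("NB GLM condition coefficient:", round(nb_fit.params[1], 3))
      print("OLS on log1p counts coefficient:", round(lm_fit.params[1], 3))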

  6. On the internal target model in a tracking task

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Baron, S.

    1981-01-01

    An optimal control model for predicting an operator's dynamic responses and errors in a target tracking task is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. The gunner's perception, decision making, control, and estimation of target positions and velocities related to crossover intervals are discussed. The model provides estimates of means, standard deviations, and variances for the variables investigated and for operator estimates of future target positions and velocities.

  7. Optical model analyses of galactic cosmic ray fragmentation in hydrogen targets

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.

    1993-01-01

    Quantum-mechanical optical model methods for calculating cross sections for the fragmentation of galactic cosmic ray nuclei by hydrogen targets are presented. The fragmentation cross sections are calculated with an abrasion-ablation collision formalism. Elemental and isotopic cross sections are estimated and compared with measured values for neon, sulfur, and calcium ions at incident energies between 400A MeV and 910A MeV. Good agreement between theory and experiment is obtained.

  8. Simulated performance of the production target for the Muon g-2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratakis, D.; Convery, M.; Morgan, J. P.

    The Muon g-2 Experiment plans to use the Fermilab Recycler Ring for forming the proton bunches that hit its production target. The proposed scheme uses one RF system, 80 kV of 2.5 MHz RF. In order to avoid bunch rotations in a mismatched bucket, the 2.5 MHz RF is ramped adiabatically from 3 to 80 kV in 90 ms. In this study, the interaction of the primary proton beam with the production target for the Muon g-2 Experiment is numerically examined.

  9. Optimal de novo design of MRM experiments for rapid assay development in targeted proteomics.

    PubMed

    Bertsch, Andreas; Jung, Stephan; Zerck, Alexandra; Pfeifer, Nico; Nahnsen, Sven; Henneges, Carsten; Nordheim, Alfred; Kohlbacher, Oliver

    2010-05-07

    Targeted proteomic approaches such as multiple reaction monitoring (MRM) overcome problems associated with classical shotgun mass spectrometry experiments. Developing MRM quantitation assays can be time consuming, because relevant peptide representatives of the proteins must be found and their retention times and product ions must be determined. Given the transitions, hundreds to thousands of them can be scheduled into one experimental run; however, it is difficult to select which transitions should be included in a measurement. We present a novel algorithm that allows the construction of MRM assays from the sequence of the targeted proteins alone. This enables the rapid development of targeted MRM experiments without large libraries of transitions or peptide spectra. The approach relies on combinatorial optimization in combination with machine learning techniques to predict proteotypicity, retention time, and fragmentation of peptides. The resulting potential transitions are scheduled optimally by solving an integer linear program. We demonstrate that fully automated construction of MRM experiments from protein sequences alone is possible and that over 80% coverage of the targeted proteins can be achieved without further optimization of the assay.
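
    The scheduling step described here is an integer linear program. The toy formulation below, which assumes the PuLP package and uses hypothetical peptide scores and capacity limits, selects candidates to maximize a predicted-suitability score subject to coverage and scheduling constraints; it is a sketch of the idea, not the authors' formulation.

      # Toy integer linear program for transition/peptide selection, illustrating the
      # kind of combinatorial optimization described above (not the authors' exact
      # formulation). Assumes the PuLP package; scores and limits are hypothetical.
      import pulp

      # candidate peptides: (protein, peptide, predicted suitability score)
      candidates = [
          ("P1", "PEPTIDEA", 0.9), ("P1", "PEPTIDEB", 0.7), ("P1", "PEPTIDEC", 0.4),
          ("P2", "PEPTIDED", 0.8), ("P2", "PEPTIDEE", 0.6),
          ("P3", "PEPTIDEF", 0.5), ("P3", "PEPTIDEG", 0.3),
      ]
      max_total = 5          # schedule capacity (toy value)
      max_per_protein = 2    # limit per protein (toy value)
      proteins = {c[0] for c in candidates}

      prob = pulp.LpProblem("mrm_assay_design", pulp.LpMaximize)
      x = [pulp.LpVariable(f"use_{i}", cat=pulp.LpBinary) for i in range(len(candidates))]

      prob += pulp.lpSum(score * xi for (_, _, score), xi in zip(candidates, x))  # total score
      prob += pulp.lpSum(x) <= max_total
      for prot in proteins:
          members = [xi for (p, _, _), xi in zip(candidates, x) if p == prot]
          prob += pulp.lpSum(members) <= max_per_protein   # do not overfill one protein
          prob += pulp.lpSum(members) >= 1                 # cover every targeted protein

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      selected = [c[:2] for c, xi in zip(candidates, x) if xi.value() == 1]
      print(selected)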

  10. First PIC simulations modeling the interaction of ultra-intense lasers with sub-micron, liquid crystal targets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Poole, Patrick; Willis, Christopher; Andereck, David; Schumacher, Douglass

    2014-10-01

    We recently introduced liquid crystal films as on-demand, variable thickness (50-5000 nanometers), low cost targets for intense laser experiments. Here we present the first particle-in-cell (PIC) simulations of short pulse laser excitation of liquid crystal targets treating Scarlet (OSU) class lasers using the PIC code LSP. In order to accurately model the target evolution, a low starting temperature and field ionization model are employed. This is essential as large starting temperatures, often used to achieve large Debye lengths, lead to expansion of the target causing significant reduction of the target density before the laser pulse can interact. We also present an investigation of the modification of laser pulses by very thin targets. This work was supported by the DARPA PULSE program through a grant from ARMDEC, by the US Department of Energy under Contract No. DE-NA0001976, and allocations of computing time from the Ohio Supercomputing Center.

  11. Modeling target normal sheath acceleration using handoffs between multiple simulations

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard

    2013-10-01

    We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, a full-size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported into a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show the use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.

  12. Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.

    PubMed

    Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars

    2015-07-15

    Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Application of QSAR and shape pharmacophore modeling approaches for targeted chemical library design.

    PubMed

    Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander

    2011-01-01

    Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization could be achieved through rational selection of reagents used in combinatorial library synthesis. However, with a rapid advent of parallel synthesis methods and availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand and structure based) for virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches for virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to achieve targeted libraries enriched with experimentally confirmed hit compounds.

  14. Use of a vision model to quantify the significance of factors effecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
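
    A generic sketch of the band-by-band comparison idea is given below: each image is decomposed into spatial-frequency channels with differences of Gaussians, converted to a local-contrast measure, and the two images are differenced channel by channel. It assumes NumPy and SciPy, uses synthetic images, and illustrates only this class of vision model, not the model described in the record.

      # Generic sketch of comparing two images band by band: decompose each image into
      # spatial-frequency bands with differences of Gaussians, convert to a local
      # contrast measure, and difference the two images in each band. Illustrative only,
      # not the vision model described above; assumes NumPy and SciPy, synthetic images.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      scene = rng.normal(0.5, 0.1, size=(128, 128))      # synthetic background image
      scene_with_target = scene.copy()
      scene_with_target[60:68, 60:68] += 0.2             # synthetic "target" patch

      def band_contrast(img, sigma):
          """Local contrast in one spatial-frequency band (difference of Gaussians)."""
          band = gaussian_filter(img, sigma) - gaussian_filter(img, 2.0 * sigma)
          local_mean = gaussian_filter(img, 2.0 * sigma)
          return band / (np.abs(local_mean) + 1e-6)

      for sigma in (1.0, 2.0, 4.0, 8.0):                 # a coarse set of channels
          diff = np.abs(band_contrast(scene_with_target, sigma) - band_contrast(scene, sigma))
          print(f"sigma={sigma:>4}: max local-contrast difference = {diff.max():.3f}")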

  15. Qweak Data Analysis for Target Modeling Using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Moore, Michael; Covrig, Silviu

    2015-04-01

    The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target met the design goals of < 1% luminosity reduction and < 5% contribution to the total asymmetry width (the Qweak target achieved 2% or 55 ppm). State-of-the-art time-dependent CFD simulations are being developed to improve the predictions of target noise on the time scale of the electron beam helicity period. These predictions will be benchmarked with the Qweak target data. This work is an essential ingredient in future designs of very high power, low noise targets like MOLLER (5 kW, target noise asymmetry contribution < 25 ppm) and MESA (4.5 kW).

  16. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

    A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence, and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics, and deviations were calculated to be ~16% and ~26% of the mean for DD and D3He fusion protons, respectively. From individual fluence images, it was found that angular frequencies ≳ 50 rad^-1 contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nucl. Instrum. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲ 10% of the analytic plasma model in the example scenario.

  17. Cavitation Damage Experiments for Mercury Spallation Targets At the LANSCE WNR in 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, Bernie; Wendel, Mark W; Felde, David K

    2010-01-01

    Proton beam experiments investigating cavitation damage in short-pulse mercury spallation targets were performed at LANSCE WNR in July of 2008. They included two main areas of investigation: damage dependence on mercury velocity, using geometry more prototypic of the SNS target than previously employed, and damage dependence on incident proton beam flux intensity. The flow dependence experiment employed six test targets with mercury velocity in the channel ranging from 0 to more than 4 m/s. Each was hit with 100 WNR beam pulses with peak proton flux equivalent to that of SNS operating at 2.7 MW. Damage dependence on incident proton beam flux intensity was also investigated with three intensity levels used on simple rectangular-shaped targets without mercury flow. Intensity variation was imposed by focusing the beam differently while maintaining protons per pulse, which kept the total energy deposited in each target constant. A fourth test target was hit with various beams: constant protons and varied spot size; constant spot size and varied protons. No damage will be assessed in this case. Instead, acoustic emissions associated with cavitation collapse were measured by laser Doppler vibrometer (LDV) readings of exterior vessel motions as well as by mercury-wetted acoustic transducers. This paper provides a description of the experiment and presents available results. Damage assessment will require several months before surface analysis can be completed and was not available in time for IWSMT-9.

  18. Forming Uniform Deuterium-Ice Layers in Cryogenic Targets: Experiences Using the OMEGA Cryogenic Target Handling System

    NASA Astrophysics Data System (ADS)

    Harding, D. R.; Wittman, M. D.; Elasky, L.; Iwan, L. S.; Lund, L.

    2001-10-01

    The OMEGA Cryogenic Target Handling System (OCTHS) allows variable-thickness ice layers (nominally 100 μm) to be formed inside OMEGA-size (1-mm-diam., 3-μm-wall) plastic shells. The OCTHS design provides the most straightforward thermal environment for layering targets: permeation-filled spherical targets sit in a spherical isothermal environment. The layered target can be rotated 360° to acquire multiple views of the ice layer. However, the capability of providing cryogenic targets for implosion experiments imposes constraints that do not exist in test systems dedicated to ice-layering studies. Most affected is the ability to characterize the target: space constraints and the need for multiple sets of windows limit the viewing access to f/5 optics, which affects the image quality. With these features, the OCTHS provides the most relevant test system, to date, for layering targets and quantifying the overall ice roughness. No single layering protocol provides repeatable ice smoothness. All techniques require extensive operator interaction, and the layering process is lengthy. Typical ice rms roughness varied from 5 to 10 μm for all targets studied. Characterizing the ice layer from different views shows a ~30% variation in the ice rms roughness and a greater difference in the power spectra, depending on the view axis. This work was supported by the U.S. DOE Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460.

  19. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
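
    The IPM of minimal spread can be written, in its simplest fixed-width affine form, as a small linear program. The sketch below, which assumes NumPy and SciPy and uses synthetic data, minimizes the half-width c of a band theta^T phi(x) +/- c that contains every observation; it illustrates the optimization idea only and is far simpler than the formulations in the paper.

      # Minimal sketch of an Interval Predictor Model with a fixed half-width band:
      # choose parameters theta and half-width c to minimize c subject to every
      # observation lying inside [theta^T phi(x) - c, theta^T phi(x) + c]. This is a
      # small linear program; the data are synthetic and the paper's IPM/RPM
      # formulations are richer. Assumes NumPy and SciPy.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 1.0, 40)
      y = 1.5 * x + 0.3 + rng.uniform(-0.2, 0.2, size=x.size)   # synthetic observations

      Phi = np.column_stack([np.ones_like(x), x])               # affine basis phi(x) = [1, x]
      n, p = Phi.shape

      # decision variables z = [theta_0, theta_1, c]; minimize c
      cost = np.zeros(p + 1)
      cost[-1] = 1.0
      # constraints:  theta^T phi_i - c <= y_i   and   -theta^T phi_i - c <= -y_i
      A_ub = np.vstack([np.hstack([Phi, -np.ones((n, 1))]),
                        np.hstack([-Phi, -np.ones((n, 1))])])
      b_ub = np.concatenate([y, -y])
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * p + [(0, None)])
      theta, half_width = res.x[:p], res.x[-1]
      print("theta:", np.round(theta, 3), " minimal half-width:", round(half_width, 3))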

  20. Evaluation of free modeling targets in CASP11 and ROLL.

    PubMed

    Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V

    2016-09-01

    We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested that the use of automated scores similar to those of previous CASPs would provide a good system for evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including the successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate the overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.

  1. Scattering Models and Basic Experiments in the Microwave Regime

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Blanchard, A. J. (Principal Investigator)

    1985-01-01

    The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which account for near-field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) to compare theoretical models with measurements or numerical simulations.

  2. Statistical Modeling of Single Target Cell Encapsulation

    PubMed Central

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High throughput drop-on-demand systems for separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types is an emerging method in biotechnology that has broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
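
    The randomness of droplet encapsulation is commonly described with Poisson loading statistics. The short calculation below, plain Python only, shows how the probability of a droplet containing exactly one cell (and that cell being a target cell) varies with the mean cell loading; it is a generic textbook sketch, not necessarily the exact model developed in the paper.

      # Classic Poisson loading statistics often used for droplet encapsulation; a
      # generic textbook sketch, not necessarily the exact model developed in the paper.
      # lam is the mean number of cells per droplet; the second argument of
      # p_single_target is the fraction of target cells in the heterogeneous mixture.
      from math import exp, factorial

      def p_k_cells(k, lam):
          """Poisson probability that a droplet encapsulates exactly k cells."""
          return lam**k * exp(-lam) / factorial(k)

      def p_single_target(lam, frac_target):
          """Probability that a droplet holds exactly one cell and that cell is a target."""
          return p_k_cells(1, lam) * frac_target

      for lam in (0.1, 0.3, 1.0):
          print(f"lambda={lam:>4}: P(1 cell)={p_k_cells(1, lam):.3f}, "
                f"P(single target cell)={p_single_target(lam, 0.05):.4f}")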

  3. A Canopy Density Model for Planar Orchard Target Detection Based on Ultrasonic Sensors

    PubMed Central

    Li, Hanzhe; Zhai, Changyuan; Weckler, Paul; Wang, Ning; Yang, Shuo; Zhang, Bo

    2016-01-01

    Orchard target-oriented variable rate spraying is an effective method to reduce pesticide drift and excessive residues. To accomplish this task, the orchard targets’ characteristic information is needed to control liquid flow rate and airflow rate. One of the most important characteristics is the canopy density. In order to establish the canopy density model for a planar orchard target which is indispensable for canopy density calculation, a target density detection testing system was developed based on an ultrasonic sensor. A time-domain energy analysis method was employed to analyze the ultrasonic signal. Orthogonal regression central composite experiments were designed and conducted using man-made canopies of known density with three or four layers of leaves. Two model equations were obtained, of which the model for the canopies with four layers was found to be the most reliable. A verification test was conducted with different layers at the same density values and detecting distances. The test results showed that the relative errors of model density values and actual values of five, four, three and two layers of leaves were acceptable, while the maximum relative errors were 17.68%, 25.64%, 21.33% and 29.92%, respectively. It also suggested the model equation with four layers had a good applicability with different layers which increased with adjacent layers. PMID:28029132

  4. The application of antitumor drug-targeting models on liver cancer.

    PubMed

    Yan, Yan; Chen, Ningbo; Wang, Yunbing; Wang, Ke

    2016-06-01

    Hepatocarcinoma animal models, such as the induced tumor model, the transplanted tumor model, and the genetically engineered animal model, are important experimental tools for the evaluation of targeted drug delivery systems as well as for pre-clinical studies of liver cancer. Applying antitumor drug-targeting models not only provides biological characteristics similar to human liver cancer but also helps guarantee the pharmacokinetic indicators of liver-targeting preparations. In this article, we review several kinds of antitumor drug-targeting models of hepatoma and anticipate that research in this field will reach greater depth and achieve better results in the future.

  5. Identification of HMX1 target genes: A predictive promoter model approach

    PubMed Central

    Boulling, Arnaud; Wicht, Linda

    2013-01-01

    Purpose A homozygous mutation in the H6 family homeobox 1 (HMX1) gene is responsible for a new oculoauricular defect leading to eye and auricular developmental abnormalities as well as early retinal degeneration (MIM 612109). However, the HMX1 pathway remains poorly understood, and as a first approach to better understand the pathway's function, we sought to identify its target genes. Methods We developed a predictive promoter model (PPM) approach using a comparative transcriptomic analysis in the retina at P15 of a mouse model lacking functional Hmx1 (dmbo mouse) and its respective wild-type. This PPM was based on the hypothesis that HMX1 binding site (HMX1-BS) clusters should be overrepresented in the promoters of HMX1 target genes. The most differentially expressed genes in the microarray experiment that contained HMX1-BS clusters were used to generate the PPM, which was then statistically validated. Finally, we developed two genome-wide target prediction methods: one that focused on conserving PPM features in human and mouse, and one that was based on the co-occurrence of HMX1-BS pairs fitting the PPM, in human or in mouse, independently. Results The PPM construction revealed that the sarcoglycan, gamma (35kDa dystrophin-associated glycoprotein) (Sgcg), teashirt zinc finger homeobox 2 (Tshz2), and solute carrier family 6 (neurotransmitter transporter, glycine) (Slc6a9) genes represented Hmx1 targets in the mouse retina at P15. Moreover, the genome-wide target prediction revealed that mouse genes belonging to the retinal axon guidance pathway were targeted by Hmx1. Expression of these three genes was experimentally validated using a quantitative reverse transcription PCR approach. The inhibitory activity of Hmx1 on Sgcg, as well as on protein tyrosine phosphatase, receptor type, O (Ptpro) and Sema3f, two targets identified by the PPM, was validated with luciferase assays. Conclusions Gene expression analysis between wild-type and dmbo mice allowed us to develop a PPM

  6. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments.

    PubMed

    MacLean, Brendan; Tomazela, Daniela M; Shulman, Nicholas; Chambers, Matthew; Finney, Gregory L; Frewen, Barbara; Kern, Randall; Tabb, David L; Liebler, Daniel C; MacCoss, Michael J

    2010-04-01

    Skyline is a Windows client application for targeted proteomics method creation and quantitative data analysis. It is open source and freely available for academic and commercial use. The Skyline user interface simplifies the development of mass spectrometer methods and the analysis of data from targeted proteomics experiments performed using selected reaction monitoring (SRM). Skyline supports using and creating MS/MS spectral libraries from a wide variety of sources to choose SRM filters and verify results based on previously observed ion trap data. Skyline exports transition lists to and imports the native output files from Agilent, Applied Biosystems, Thermo Fisher Scientific and Waters triple quadrupole instruments, seamlessly connecting mass spectrometer output back to the experimental design document. The fast and compact Skyline file format is easily shared, even for experiments requiring many sample injections. A rich array of graphs displays results and provides powerful tools for inspecting data integrity as data are acquired, helping instrument operators to identify problems early. The Skyline dynamic report designer exports tabular data from the Skyline document model for in-depth analysis with common statistical tools. Single-click, self-updating web installation is available at http://proteome.gs.washington.edu/software/skyline. This web site also provides access to instructional videos, a support board, an issues list and a link to the source code project.

  7. Ejecta velocity distribution for impact cratering experiments on porous and low strength targets

    NASA Astrophysics Data System (ADS)

    Michikami, Tatsuhiro; Moriguchi, Kouichi; Hasegawa, Sunao; Fujiwara, Akira

    2007-01-01

    Impact cratering experiments on porous targets with various compressive strengths ranging from ~0.5 to ~250 MPa were carried out in order to investigate the relationship between the ejecta velocity and the material strength or porosity of the target. A spherical alumina projectile (diameter ~1 mm) was shot perpendicularly into the target surface with velocity ranging from 1.2 to 4.5 km/s (nominal 4 km/s), using a two-stage light-gas gun. The ejecta velocity was estimated from the fall point distance of the ejecta. The results show that there is in fact a large fraction of ejecta with very low velocities when the material strength of the target is small and the porosity is high. As an example, in the case of one specific target (compressive strength ~0.5 MPa and porosity 43%), the amount of ejecta with velocities lower than 1 m/s is about 40% of the total mass. The average velocity of the ejecta decreases with decreasing material strength or increasing porosity of the target. Moreover, in our experiments, the ejecta velocity distributions normalized to total ejecta mass seem to depend mainly on the material strength of the target, and not so greatly on the porosity. We also compare our experimental results with those of Gault et al. [1963. Spray ejected from the lunar surface by meteoroid impact. NASA Technical Note D-1767] and Housen [1992. Crater ejecta velocities for impacts on rocky bodies. LPSC XXIII, 555-556] for the ejecta velocity distribution using Housen's nondimensional scaling parameter. The ejecta velocity distributions of our experiments are lower than those of Gault et al. and Housen.

  8. A diamond active target for the PADME experiment

    NASA Astrophysics Data System (ADS)

    Chiodini, G.

    2017-02-01

    The PADME (Positron Annihilation into Dark Mediator Experiment) collaboration searches for dark photons produced in the annihilation e++e-→γ+A' of accelerated positrons with atomic electrons of a fixed target at the Beam Test Facility of Laboratori Nazionali di Frascati. The apparatus can detect dark photons decaying into the visible A'→e+e- and invisible A'→χχ channels, where the χ's are weakly interacting particles of a secluded sector and therefore go undetected. In order to improve the missing-mass resolution and to measure the beam flux, PADME has an active target able to reconstruct the beam spot position and the bunch multiplicity. In this work the active target is described; it is made of a detector-grade polycrystalline synthetic diamond with strip electrodes on both surfaces. The electrode segmentation allows the beam profile along X and Y to be measured and the average beam position to be evaluated bunch by bunch. The results of beam tests for the first two diamond detector prototypes are shown. One of them features innovative graphitic electrodes built with a custom process developed in the laboratory, and the other has commercially available traditional Cr-Au electrodes. The front-end electronics used in the test beam is discussed and the observed performance is presented. Finally, the design of the final target, to be realized at the beginning of 2017 and ready for data taking in 2018, is illustrated.

  9. Utilizing random Forest QSAR models with optimized parameters for target identification and its application to target-fishing server.

    PubMed

    Lee, Kyoungyeul; Lee, Minho; Kim, Dongsup

    2017-12-28

    The identification of target molecules is important for understanding the mechanism of "target deconvolution" in phenotypic screening and the "polypharmacology" of drugs. Because conventional methods of identifying targets require time and cost, in-silico target identification has been considered an alternative solution. One of the well-known in-silico methods of identifying targets involves structure-activity relationships (SARs). SARs have advantages such as low computational cost and high feasibility; however, the data dependency of the SAR approach causes an imbalance of active data and ambiguity of inactive data across targets. We developed a ligand-based virtual screening model comprising 1121 target SAR models built using a random forest algorithm. The performance of each target model was tested by employing the ROC curve and the mean score using internal five-fold cross validation. Moreover, recall rates for top-k targets were calculated to assess the performance of target ranking. A benchmark model using an optimized sampling method and parameters was examined via an external validation set. The result shows recall rates of 67.6% and 73.9% for top-11 (1% of the total targets) and top-33, respectively. We provide a website for users to search the top-k targets for query ligands, available publicly at http://rfqsar.kaist.ac.kr. The target models that we built can be used both for predicting the activity of ligands toward each target and for ranking candidate targets for a query ligand using a unified scoring scheme. The scores are additionally fitted to probabilities so that users can estimate how likely a ligand-target interaction is to be active. The user interface of our web site is user friendly and intuitive, offering useful information and cross references.
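
    The ranking idea behind such target-fishing servers can be sketched with one random forest model per target and a top-k recall check. The code below assumes scikit-learn and NumPy, uses synthetic binary vectors in place of real fingerprints, and is not the RFQSAR implementation; the "true target" of the query is likewise hypothetical.

      # Sketch of per-target random forest models used to rank candidate targets for a
      # query ligand, with a top-k recall check. Synthetic binary "fingerprints" stand in
      # for real descriptors; this is not the RFQSAR implementation.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      n_targets, n_bits = 20, 256

      # One toy SAR dataset (and model) per target.
      models = []
      for t in range(n_targets):
          X = rng.integers(0, 2, size=(200, n_bits))
          w = rng.random(n_bits) < 0.05                    # each target "cares" about a few bits
          y = (X[:, w].sum(axis=1) > w.sum() / 2).astype(int)
          models.append(RandomForestClassifier(n_estimators=50, random_state=t).fit(X, y))

      def rank_targets(fp):
          """Score the query fingerprint against every target model and rank targets."""
          scores = [m.predict_proba(fp.reshape(1, -1))[0, 1] for m in models]
          return np.argsort(scores)[::-1]

      query = rng.integers(0, 2, size=n_bits)
      true_target = 7                                      # hypothetical known target of the query
      ranking = rank_targets(query)
      k = 5
      print("top-k targets:", ranking[:k], " recall@k:", int(true_target in ranking[:k]))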

  10. Measuring the Density of Liquid Targets in the SeaQuest Experiment

    NASA Astrophysics Data System (ADS)

    Xi, Zhaojia; SeaQuest/E906 Collaboration

    2015-10-01

    The SeaQuest (E906) experiment, using the 120 GeV proton beam from the Main Injector at the Fermi National Accelerator Laboratory (FNAL), is studying the quark and antiquark structure of the nucleon using the Drell-Yan process. Based on the cross section ratio σ(p+d)/σ(p+p), SeaQuest will extract the Bjorken-x dependence of the d̄/ū ratio. The measurement will cover the large-x region (x > 0.25) with improved accuracy compared to the previous E866/NuSea experiment. Liquid deuterium (LD2) and liquid hydrogen (LH2) are the targets used in the SeaQuest experiment. The densities of the LD2 and LH2 targets are two important quantities for the determination of the d̄/ū ratio. We measure the pressure and temperature inside the flasks, from which the densities are calculated. The method, measurements, and results of this study will be presented. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  11. Scaling law deduced from impact-cratering experiments on basalt targets

    NASA Astrophysics Data System (ADS)

    Takagi, Y.; Hasegawa, S.; Suzuki, A.

    2014-07-01

    Since impact-cratering phenomena on planetary bodies are the key process that modified surface topography and formed regolith layers, many experiments on non-cohesive materials (sand, glass beads) have been performed. On the other hand, experiments on natural rocks are limited. In particular, experiments on basalt targets are rare, although basalt is the most common rocky material on planetary surfaces. The reason may be the difficulty of obtaining basalt samples suitable for cratering experiments. Recently, we obtained homogeneous and crack-free large basalt blocks and performed systematic cratering experiments using these basalt targets. Experimental Procedure: Impact experiments were performed using a two-stage light-gas (hydrogen) gun on the JAXA Sagamihara campus. Spherical projectiles of nylon, aluminum, stainless steel, and tungsten carbide were launched at velocities between 2400 and 6100 m/s. The projectiles were 1.0 to 7.1 mm in diameter and 0.004 to 0.22 g in mass. The incidence angle was fixed at 90 degrees. The targets were rectangular blocks of Ukrainian basalt. The impact plane was a square with 20-cm sides, and the thickness was 9 cm. Samples were cut from a columnar block so that the impact plane was perpendicular to the axis of the columnar joint. The mass was about 10.5 kg and the density was 2920 ± 10 kg/m^3. Twenty-eight shots were performed. Three-dimensional shapes of the craters were measured by an X-Y stage with a laser displacement sensor (Keyence LK-H150); the interval between measurement points was 200 micrometers. The volume, depth, and aperture area of each crater were calculated from the 3-D data using analysis software. Since the shapes of the formed craters are markedly asymmetrical, the diameter of the circle whose area equals the aperture area was taken as the crater diameter. Results: The diameter, depth, and volume of the formed craters are normalized by the π parameters. Experimental conditions are also
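
    The π parameters referred to here are the standard point-source scaling groups used to reduce cratering data. The sketch below, plain Python, evaluates the commonly used definitions (gravity-scaled size π2 = ga/U^2 and strength parameter π3 = Y/ρU^2) for one representative shot condition within the quoted ranges; the material strength value is an assumed order of magnitude, and the paper's fitted scaling law is not reproduced.

      # Standard point-source (pi-group) scaling parameters commonly used to reduce
      # cratering data (after Schmidt and Holsapple), evaluated for one representative
      # shot condition within the ranges quoted above. The strength value is only an
      # assumed order of magnitude, and the paper's fitted scaling law is not reproduced.
      import math

      m = 0.05e-3        # projectile mass, kg (0.05 g, within the stated 0.004-0.22 g range)
      rho_p = 7800.0     # projectile density, kg/m^3 (stainless steel)
      rho_t = 2920.0     # basalt target density, kg/m^3 (as quoted above)
      U = 4000.0         # impact velocity, m/s (within the stated 2400-6100 m/s range)
      Y = 1.0e8          # assumed basalt strength measure, Pa (order of magnitude only)
      g = 9.81           # gravity, m/s^2

      a = (3.0 * m / (4.0 * math.pi * rho_p)) ** (1.0 / 3.0)    # projectile radius, m

      pi2 = g * a / U**2             # gravity-scaled size
      pi3 = Y / (rho_t * U**2)       # strength parameter
      print(f"a = {a * 1e3:.2f} mm, pi2 = {pi2:.2e}, pi3 = {pi3:.2e}")
      # The crater volume V would then be reported as piV = rho_t * V / m and fitted
      # against pi2 and pi3 (strength regime: piV proportional to a power of pi3).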

  12. Target Fabrication Technology and New Functional Materials for Laser Fusion and Laser-Plasma Experiment

    NASA Astrophysics Data System (ADS)

    Nagai, Keiji; Norimatsu, Takayoshi; Izawa, Yasukazu

    Target fabrication technique is a key issue of laser fusion. We present a comprehensive, up-to-date compilation of laser fusion target fabrication and related new materials. To achieve highly efficient laser implosion, organic and inorganic highly spherical millimeter-sized capsules and the cryogenic hydrogen layers inside them should be uniform in diameter and thickness to within sub-micrometer to nanometer error. Porous structured targets and molecular cluster targets are required for laser-plasma experiments and applications. Various technologies and new materials for the above purposes are summarized, including fast-ignition targets, equation-of-state measurement targets, high-energy ion generation targets, etc.

  13. Site selection and directional models of deserts used for ERBE validation targets

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.

    1986-01-01

    Broadband shortwave and longwave radiance measurements obtained from the Nimbus 7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara, Gibson, and Saudi Deserts. These deserts will serve as in-flight validation targets for the Earth Radiation Budget Experiment being flown on the Earth Radiation Budget Satellite and two National Oceanic and Atmospheric Administration polar satellites. The directional reflectance model derived for the deserts was a function of the sum and product of the cosines of the solar and viewing zenith angles, and thus reciprocity existed between these zenith angles. The emittance model was related by a power law of the cosine of the viewing zenith angle.

  14. Penetration of tungsten-alloy rods into composite ceramic targets: Experiments and 2-D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Z.; Dekel, E.; Hohler, V.

    1998-07-10

    A series of terminal ballistics experiments with scaled tungsten-alloy penetrators was performed on composite targets consisting of ceramic tiles glued to thick steel backing plates. Tiles of silicon carbide, aluminum nitride, titanium diboride and boron carbide were 20-80 mm thick, and the impact velocity was 1.7 km/s. 2-D numerical simulations, using the PISCES code, were performed in order to simulate these shots. It is shown that a simplified version of the Johnson-Holmquist failure model can account for the penetration depths of the rods but is not enough to capture the effect of lateral release waves on these penetrations.

  15. HDice, Highly-Polarized Low-Background Frozen-Spin HD Targets for CLAS experiments at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Xiangdong; Bass, Christopher; D'Angelo, Annalisa

    2012-12-01

    Large, portable frozen-spin HD (deuterium hydride) targets have been developed for studying nucleon spin properties with low backgrounds. Protons and deuterons in HD are polarized at low temperatures (~10 mK) inside a vertical dilution refrigerator (Oxford Kelvinox-1000) containing a high magnetic field (up to 17 T). The targets reach a frozen-spin state within a few months, after which they can be cold-transferred to an In-Beam Cryostat (IBC). The IBC, a thin-walled dilution refrigerator operating either horizontally or vertically, is used with quasi-4π detector systems in open geometries with minimal energy loss for exiting reaction products in nucleon structure experiments. The first application of this advanced target system was for Spin Sum Rule experiments at the LEGS facility at Brookhaven National Laboratory. An improved target production and handling system has been developed at Jefferson Lab for experiments with the CEBAF Large Acceptance Spectrometer, CLAS.

  16. Prioritizing therapeutic targets using patient-derived xenograft models

    PubMed Central

    Lodhia, K.A; Hadley, A; Haluska, P; Scott, C.L

    2015-01-01

    Effective systemic treatment of cancer relies on the delivery of agents with optimal therapeutic potential. The molecular age of medicine has provided genomic tools that can identify a large number of potential therapeutic targets in individual patients, heralding the promise of personalized treatment. However, determining which potential targets actually drive tumor growth and should be prioritized for therapy is challenging. Indeed, reliable molecular matches of target and therapeutic agent have been stringently validated in the clinic for only a small number of targets. Patient-derived xenografts (PDX) are tumor models developed in immunocompromised mice using tumor procured directly from the patient. As patient surrogates, PDX models represent a powerful tool for addressing individualized therapy. Challenges include humanizing the immune system of PDX models and ensuring high quality molecular annotation, in order to maximise insights for the clinic. Importantly, PDX can be sampled repeatedly and in parallel, to reveal clonal evolution, which may predict mechanisms of drug resistance and inform therapeutic strategy design. PMID:25783201

  17. Illusions of integration are subjectively impenetrable: Phenomenological experience of Lag 1 percepts during dual-target RSVP.

    PubMed

    Simione, Luca; Akyürek, Elkan G; Vastola, Valentina; Raffone, Antonino; Bowman, Howard

    2017-05-01

    We investigated the relationship between different kinds of target reports in a rapid serial visual presentation task, and their associated perceptual experience. Participants reported the identity of two targets embedded in a stream of stimuli and their associated subjective visibility. In our task, target stimuli could be combined together to form more complex ones, thus allowing participants to report temporally integrated percepts. We found that integrated percepts were associated with high subjective visibility scores, whereas reports in which the order of targets was reversed led to a poorer perceptual experience. We also found a reciprocal relationship between the chance of the second target not being reported correctly and the perceptual experience associated with the first one. Principally, our results indicate that integrated percepts are experienced as a unique, clear perceptual event, whereas order reversals are experienced as confused, similar to cases in which an entirely wrong response was given. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Centrifuge impact cratering experiments: Scaling laws for non-porous targets

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.

    1987-01-01

    This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.

  19. Nonlinear sigma models with compact hyperbolic target spaces

    NASA Astrophysics Data System (ADS)

    Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; Stoica, Bogdan; Stokes, James

    2016-06-01

    We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [1, 2]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.

  20. Comparison of several maneuvering target tracking models

    NASA Astrophysics Data System (ADS)

    McIntyre, Gregory A.; Hintz, Kenneth J.

    1998-07-01

    The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories, both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or autocorrelated noise. The multiple-model approach is generally used with the white noise model, while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable-dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue in the tracking of maneuvering targets is whether to implement the Kalman filter in polar or Cartesian coordinates. This paper examines and compares several exponentially correlated acceleration approaches in both polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both polar and Cartesian coordinates, the Singer model in polar coordinates converted to Cartesian coordinates, Helferty's third-order rational approximation of the Singer model, and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates, and compares the computational complexity of the models.
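
    The exponentially correlated (Singer) acceleration model referenced throughout this comparison has a standard discrete-time state-transition matrix. The sketch below, assuming NumPy and using illustrative parameter values, builds that matrix for one axis and performs a single Kalman prediction step; the process-noise matrix is deliberately simplified rather than the full Singer form.

      # Discrete-time Singer model (exponentially correlated acceleration) along one
      # axis: state = [position, velocity, acceleration], with acceleration a first-order
      # Gauss-Markov process of correlation time 1/alpha. A minimal sketch of the state
      # transition used in such trackers; parameter values are illustrative, and the
      # process-noise matrix is simplified rather than the full Singer form.
      import numpy as np

      T = 0.1             # s, sampling interval
      alpha = 1.0 / 20.0  # 1/s, reciprocal of the assumed maneuver correlation time

      e = np.exp(-alpha * T)
      F = np.array([
          [1.0, T, (alpha * T - 1.0 + e) / alpha**2],
          [0.0, 1.0, (1.0 - e) / alpha],
          [0.0, 0.0, e],
      ])

      x = np.array([1000.0, -50.0, 2.0])   # position (m), velocity (m/s), acceleration (m/s^2)
      P = np.diag([100.0, 25.0, 4.0])      # state covariance
      Q = 1e-2 * np.eye(3)                 # simplified process noise

      # Kalman time-update (prediction) step; the measurement update is standard.
      x_pred = F @ x
      P_pred = F @ P @ F.T + Q
      print("predicted state:", np.round(x_pred, 3))
      print("predicted covariance diagonal:", np.round(np.diag(P_pred), 3))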

  1. Experiences of nursing home staff using the targeted interdisciplinary model for evaluation and treatment of neuropsychiatric symptoms (TIME) - a qualitative study.

    PubMed

    Lichtwarck, Bjørn; Myhre, Janne; Goyal, Alka R; Rokstad, Anne Marie Mork; Selbaek, Geir; Kirkevold, Øyvind; Bergh, Sverre

    2018-04-19

    Neuropsychiatric symptoms (NPS) in dementia pose great challenges for residents and staff in nursing homes. The Targeted Interdisciplinary Model for Evaluation and Treatment of Neuropsychiatric Symptoms (TIME) has recently demonstrated reductions in NPS in a randomized controlled trial. We explored the participating staff's experiences with the model and how it meets the challenges when dealing with the complexity of NPS. Three to six months after the end of the intervention, we interviewed 32 of the caregivers, leaders, and physicians participating in the trial, in five focus groups. We used thematic content analysis. The analysis yielded two main themes: (1) a systematic reflection method enhanced learning at work; (2) the structure of the approach helped staff to cope with NPS in residents with dementia. TIME shifts the way of learning for the staff from a traditional to a more innovative and reflection-based learning through a process of learning how to learn at work. The staff experienced increased coping in their approach to complex problems. Our results emphasise the importance of a structured and biopsychosocial approach to NPS in clinical practice. Future research should explore models for integrating situated learning in daily routines in nursing homes.

  2. Hohlraum modeling for opacity experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Dodd, E. S.; DeVolder, B. G.; Martin, M. E.; Krasheninnikova, N. S.; Tregillis, I. L.; Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Moore, A. S.; Kline, J. L.; Johns, H. M.; Liedahl, D. A.; Cardenas, T.; Olson, R. E.; Wilde, B. H.; Urbatsch, T. J.

    2018-06-01

    This paper discusses the modeling of experiments that measure iron opacity in local thermodynamic equilibrium (LTE) using laser-driven hohlraums at the National Ignition Facility (NIF). A previous set of experiments fielded at Sandia's Z facility [Bailey et al., Nature 517, 56 (2015)] showed discrepancies of up to a factor of two between theory and experiment, casting doubt on the validity of the opacity models. The purpose of the new experiments is to make corroborating measurements at the same densities and temperatures, with the initial measurements made at a temperature of 160 eV and an electron density of 0.7 × 10²² cm⁻³. The X-ray hot spots of a laser-driven hohlraum are not in LTE, and the iron must be shielded from a direct line-of-sight to obtain the data [Perry et al., Phys. Rev. B 54, 5617 (1996)]. This shielding is provided either with internal structure (e.g., baffles) or with external wall shapes that divide the hohlraum into a laser-heated portion and an LTE portion. In contrast, most inertial confinement fusion hohlraums are simple cylinders lacking complex gold walls, and the design codes are not typically applied to targets like those for the opacity experiments. We will discuss the initial basis for the modeling using LASNEX, and the subsequent modeling of five different hohlraum geometries that have been fielded on the NIF to date. This includes a comparison of calculated and measured radiation temperatures.

  3. A murine model of targeted infusion for intracranial tumors.

    PubMed

    Kim, Minhyung; Barone, Tara A; Fedtsova, Natalia; Gleiberman, Anatoli; Wilfong, Chandler D; Alosi, Julie A; Plunkett, Robert J; Gudkov, Andrei; Skitzki, Joseph J

    2016-01-01

    Historically, intra-arterial (IA) drug administration for malignant brain tumors including glioblastoma multiforme (GBM) was performed as an attempt to improve drug delivery. With the advent of percutaneous neurovascular techniques and modern microcatheters, intracranial drug delivery is readily feasible; however, the question remains whether IA administration is safe and more effective compared to other delivery modalities such as intravenous (IV) or oral administration. Preclinical large animal models allow for comparisons between treatment routes and for testing novel agents, but they can be expensive and make it difficult to generate large sample sizes and rapid results. Accordingly, we developed a murine model of IA drug delivery for GBM that is reproducible with clear readouts of tumor response and neurotoxicities. Herein, we describe a novel mouse model of IA drug delivery accessing the internal carotid artery to treat ipsilaterally implanted GBM tumors that is consistent and reproducible with minimal experience. The intent of establishing this unique platform is to efficiently interrogate targeted anti-tumor agents that may be designed to take advantage of a directed, regional therapy approach for brain tumors.

  4. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments

    PubMed Central

    MacLean, Brendan; Tomazela, Daniela M.; Shulman, Nicholas; Chambers, Matthew; Finney, Gregory L.; Frewen, Barbara; Kern, Randall; Tabb, David L.; Liebler, Daniel C.; MacCoss, Michael J.

    2010-01-01

    Summary: Skyline is a Windows client application for targeted proteomics method creation and quantitative data analysis. It is open source and freely available for academic and commercial use. The Skyline user interface simplifies the development of mass spectrometer methods and the analysis of data from targeted proteomics experiments performed using selected reaction monitoring (SRM). Skyline supports using and creating MS/MS spectral libraries from a wide variety of sources to choose SRM filters and verify results based on previously observed ion trap data. Skyline exports transition lists to and imports the native output files from Agilent, Applied Biosystems, Thermo Fisher Scientific and Waters triple quadrupole instruments, seamlessly connecting mass spectrometer output back to the experimental design document. The fast and compact Skyline file format is easily shared, even for experiments requiring many sample injections. A rich array of graphs displays results and provides powerful tools for inspecting data integrity as data are acquired, helping instrument operators to identify problems early. The Skyline dynamic report designer exports tabular data from the Skyline document model for in-depth analysis with common statistical tools. Availability: Single-click, self-updating web installation is available at http://proteome.gs.washington.edu/software/skyline. This web site also provides access to instructional videos, a support board, an issues list and a link to the source code project. Contact: brendanx@u.washington.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20147306

  5. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters that can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen, demonstrating the refined model's ability to characterize these materials.
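
    The abstract does not reproduce the six-parameter model itself, so the sketch below only illustrates the general workflow it describes: fit a simple two-lobe (diffuse plus rough-specular) BRDF to measured data with a genetic-style global optimizer (scipy's differential evolution). The BRDF form, angles, and parameter values are assumptions for illustration, not the paper's model.

      import numpy as np
      from scipy.optimize import differential_evolution

      def brdf(theta_i, theta_r, p):
          """Illustrative two-lobe (diffuse + rough-specular) BRDF, in-plane only.
          p = (kd, ks, sigma): diffuse albedo, specular strength, roughness."""
          kd, ks, sigma = p
          theta_h = 0.5 * (theta_r - theta_i)   # deviation from the specular direction
          spec = ks * np.exp(-(theta_h / sigma) ** 2) / (
              4.0 * np.cos(theta_i) * np.cos(theta_r))
          return kd / np.pi + spec

      # Synthetic "measurements" from assumed true parameters, plus noise.
      rng = np.random.default_rng(0)
      theta_i = np.full(50, np.deg2rad(30.0))
      theta_r = np.linspace(np.deg2rad(5.0), np.deg2rad(70.0), 50)
      true_p = (0.3, 0.2, 0.15)
      data = brdf(theta_i, theta_r, true_p) * (1 + 0.02 * rng.standard_normal(50))

      def cost(p):
          return np.sum((brdf(theta_i, theta_r, p) - data) ** 2)

      result = differential_evolution(cost, bounds=[(0, 1), (0, 1), (0.01, 1)], seed=1)
      print(result.x)   # recovered (kd, ks, sigma)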

  6. Liquid Hydrogen Target Experience at SLAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisend, J.G.; Boyce, R.; Candia, A.

    2005-08-29

    Liquid hydrogen targets have played a vital role in the physics program at SLAC for the past 40 years. These targets have ranged from small ''beer can'' targets to the 1.5 m long E158 target that was capable of absorbing up to 800 W without any significant density changes. Successful use of these targets has required the development of thin wall designs, liquid hydrogen pumps, remote positioning and alignment systems, safety systems, control and data acquisition systems, cryogenic cooling circuits and heat exchangers. Detailed operating procedures have been created to ensure safety and operational reliability. This paper surveys the evolution of liquid hydrogen targets at SLAC and discusses advances in several of the enabling technologies that made these targets possible.

  7. The first target experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Glenzer, S. H.; Froula, D. H.; Dewald, E. L.; Suter, L. J.; Schneider, M. B.; Hinkel, D. E.; Fernandez, J. C.; Kline, J. L.; Goldman, S. R.; Braun, D. G.; Celliers, P. M.; Moon, S. J.; Robey, H. S.; Lanier, N. E.; Glendinning, S. G.; Blue, B. E.; Wilde, B. H.; Jones, O. S.; Schein, J.; Divol, L.; Kalantar, D. H.; Campbell, K. M.; Holder, J. P.; McDonald, J. W.; Niemann, C.; MacKinnon, A. J.; Collins, G. W.; Bradley, D. K.; Eggert, J. H.; Hicks, D. G.; Gregori, G.; Kirkwood, R. K.; Young, B. K.; Foster, J. M.; Hansen, J. F.; Perry, T. S.; Munro, D. H.; Baldis, H. A.; Grim, G. P.; Heeter, R. F.; Hegelich, M. B.; Montgomery, D. S.; Rochau, G. A.; Olson, R. E.; Turner, R. E.; Workman, J. B.; Berger, R. L.; Cohen, B. I.; Kruer, W. L.; Langdon, A. B.; Langer, S. H.; Meezan, N. B.; Rose, H. A.; Still, C. H.; Williams, E. A.; Dodd, E. S.; Edwards, M. J.; Monteil, M.-C.; Stevenson, R. M.; Thomas, B. R.; Coker, R. F.; Magelssen, G. R.; Rosen, P. A.; Stry, P. E.; Woods, D.; Weber, S. V.; Young, P. E.; Alvarez, S.; Armstrong, G.; Bahr, R.; Bourgade, J.-L.; Bower, D.; Celeste, J.; Chrisp, M.; Compton, S.; Cox, J.; Constantin, C.; Costa, R.; Duncan, J.; Ellis, A.; Emig, J.; Gautier, C.; Greenwood, A.; Griffith, R.; Holdner, F.; Holtmeier, G.; Hargrove, D.; James, T.; Kamperschroer, J.; Kimbrough, J.; Landon, M.; Lee, F. D.; Malone, R.; May, M.; Montelongo, S.; Moody, J.; Ng, E.; Nikitin, A.; Pellinen, D.; Piston, K.; Poole, M.; Rekow, V.; Rhodes, M.; Shepherd, R.; Shiromizu, S.; Voloshin, D.; Warrick, A.; Watts, P.; Weber, F.; Young, P.; Arnold, P.; Atherton, L.; Bardsley, G.; Bonanno, R.; Borger, T.; Bowers, M.; Bryant, R.; Buckman, S.; Burkhart, S.; Cooper, F.; Dixit, S. N.; Erbert, G.; Eder, D. C.; Ehrlich, R. E.; Felker, B.; Fornes, J.; Frieders, G.; Gardner, S.; Gates, C.; Gonzalez, M.; Grace, S.; Hall, T.; Haynam, C. A.; Heestand, G.; Henesian, M. A.; Hermann, M.; Hermes, G.; Huber, S.; Jancaitis, K.; Johnson, S.; Kauffman, B.; Kelleher, T.; Kohut, T.; Koniges, A. E.; Labiak, T.; Latray, D.; Lee, A.; Lund, D.; Mahavandi, S.; Manes, K. R.; Marshall, C.; McBride, J.; McCarville, T.; McGrew, L.; Menapace, J.; Mertens, E.; Murray, J.; Neumann, J.; Newton, M.; Opsahl, P.; Padilla, E.; Parham, T.; Parrish, G.; Petty, C.; Polk, M.; Powell, C.; Reinbachs, I.; Rinnert, R.; Riordan, B.; Ross, G.; Robert, V.; Tobin, M.; Sailors, S.; Saunders, R.; Schmitt, M.; Shaw, M.; Singh, M.; Spaeth, M.; Stephens, A.; Tietbohl, G.; Tuck, J.; van Wonterghem, B. M.; Vidal, R.; Wegner, P. J.; Whitman, P.; Williams, K.; Winward, K.; Work, K.; Wallace, R.; Nobile, A.; Bono, M.; Day, B.; Elliott, J.; Hatch, D.; Louis, H.; Manzenares, R.; O'Brien, D.; Papin, P.; Pierce, T.; Rivera, G.; Ruppe, J.; Sandoval, D.; Schmidt, D.; Valdez, L.; Zapata, K.; MacGowan, B. J.; Eckart, M. J.; Hsing, W. W.; Springer, P. T.; Hammel, B. A.; Moses, E. I.; Miller, G. H.

    2007-08-01

    A first set of shock timing, laser-plasma interaction, hohlraum energetics and hydrodynamic experiments have been performed using the first 4 beams of the National Ignition Facility (NIF), in support of indirect drive Inertial Confinement Fusion (ICF) and High Energy Density Physics (HEDP). In parallel, a robust set of optical and X-ray spectrometers, interferometers, calorimeters and imagers has been activated. The experiments have been undertaken with laser powers and energies of up to 8 TW and 17 kJ in flattop and shaped 1-9 ns pulses focused with various beam smoothing options. The experiments have demonstrated excellent agreement between measured and predicted laser-target coupling in foils and hohlraums, even when extended to a longer pulse regime unattainable at previous laser facilities, validated the predicted effects of beam smoothing on intense laser beam propagation in long scale-length plasmas, and begun to test 3D codes by extending the study of laser-driven hydrodynamic jets to 3D geometries.

  8. Intercepting a moving target: On-line or model-based control?

    PubMed

    Zhao, Huaiyong; Warren, William H

    2017-05-01

    When walking to intercept a moving target, people take an interception path that appears to anticipate the target's trajectory. According to the constant bearing strategy, the observer holds the bearing direction of the target constant based on current visual information, consistent with on-line control. Alternatively, the interception path might be based on an internal model of the target's motion, known as model-based control. To investigate these two accounts, participants walked to intercept a moving target in a virtual environment. We degraded the target's visibility by blurring the target to varying degrees in the midst of a trial, in order to influence its perceived speed and position. Reduced levels of visibility progressively impaired interception accuracy and precision; total occlusion impaired performance most and yielded nonadaptive heading adjustments. Thus, performance strongly depended on current visual information and deteriorated qualitatively when it was withdrawn. The results imply that locomotor interception is normally guided by current information rather than an internal model of target motion, consistent with on-line control.

  9. Nonlinear sigma models with compact hyperbolic target spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.

    We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [1, 2]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. In conclusion, the diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.

  10. Nonlinear sigma models with compact hyperbolic target spaces

    DOE PAGES

    Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; ...

    2016-06-23

    We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [1, 2]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. In conclusion, the diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.

  11. Centrifuge impact cratering experiments: Scaling laws for non-porous targets

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.

    1987-01-01

    A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for nonporous targets. Scaling estimates for large scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence on impact velocity is recognized, even for constant impactor energy. Second, the present energy exponent for low porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.

  12. LH2 Target Design & Position Survey Techniques for the MUSE experiment for Precise Proton Radius Measurement

    NASA Astrophysics Data System (ADS)

    Le Pottier, Luc; Roy, Pryiashee; Lorenzon, Wolfgang; Raymond, Richard; Steinberg, Noah; Rossi de La Fuente, Erick; MUSE (MUon proton Scattering Experiment) Collaboration

    2017-09-01

    The proton radius puzzle, a currently unresolved problem that has intrigued the scientific community, concerns a 7σ discrepancy between the proton radii determined from muonic hydrogen spectroscopy and from electron scattering measurements. The MUon Scattering Experiment (MUSE) aims to resolve this puzzle by performing the first simultaneous elastic scattering measurements of both electrons and muons on the proton, which will allow the comparison of the radii from the two interactions with reduced systematic uncertainties. The data from this experiment are expected to provide the best test of lepton universality to date. The experiment will take place at the Paul Scherrer Institute in Switzerland in 2018. An essential component of the experiment is a liquid hydrogen (LH2) cryotarget system. Our group at the University of Michigan is responsible for the design, fabrication and installation of this system. Here we present our LH2 target cell design and fabrication techniques for successful operation at 20 K and 1 atm, and our computer vision-based target position survey system, which will determine the position of the target, installed inside a vacuum chamber, with 0.01 mm or better precision at the height of the liquid hydrogen target and along the beam direction during the experiment.

  13. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes the database developed as the comprehensive data storage for the fixed target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility will be briefly presented. The scheme of the unified database and its parameters will be described in detail. The use of the BM@N database, implemented on the PostgreSQL database management system (DBMS), allows users to access the current information of the experiment. The interfaces developed for access to the database will also be presented: one was implemented as a set of C++ classes to access the data without writing SQL statements; the other is a Web interface available on the Web page of the BM@N experiment.
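
    As a rough illustration of how such a PostgreSQL-backed experiment database might be queried programmatically (the paper's own interfaces are a set of C++ classes and a Web page), the sketch below uses Python's psycopg2 driver. The connection parameters, table, and column names are hypothetical.

      import psycopg2

      # Hypothetical connection parameters and table/column names, shown only to
      # illustrate retrieving run metadata from a PostgreSQL experiment database.
      conn = psycopg2.connect(host="localhost", dbname="bmn_db",
                              user="reader", password="secret")
      try:
          with conn.cursor() as cur:
              cur.execute(
                  "SELECT run_number, start_time, beam_energy "
                  "FROM run_info WHERE beam_energy > %s ORDER BY run_number",
                  (3.5,))
              for run_number, start_time, beam_energy in cur.fetchall():
                  print(run_number, start_time, beam_energy)
      finally:
          conn.close()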

  14. Laboratory experiments of crater formation on ice-rock mixture targets

    NASA Astrophysics Data System (ADS)

    Hiraoka, K.; Arakawa, M.; Yoshikawa, K.; Nakamura, A. M.

    Surfaces composed of ice-rock mixtures are common among planetary bodies in the outer solar system, such as the satellites of the giant planets and comet nuclei. In order to study the effect of the presence of volatiles on crater formation on these bodies, we performed impact experiments using a two-stage light-gas gun and a gas gun at Hokkaido University. The targets were ice-rock mixtures (diameter = 10 or 30 cm, height = 5 cm) with 0 wt.% to 50 wt.% rock. Projectiles were ice cylinders (diameter = 15 mm, height = 10 mm) or cone-shaped nylon projectiles, and the impact velocities were varied from about 300 m/s to 3500 m/s. We will show an anti-correlation between the crater volume and the rock content, and will make a comparison with previous works (Lange and Ahrens 1982; Koschny and Grun 2001). Ejecta sizes and velocities measured on high-speed video images will be presented and discussed in comparison with a spallation model (Melosh 1984).

  15. Numerical Modelling of the Deep Impact Mission Experiment

    NASA Technical Reports Server (NTRS)

    Wuennemann, K.; Collins, G. S.; Melosh, H. J.

    2005-01-01

    NASA's Deep Impact Mission (launched January 2005) will provide, for the first time ever, insights into the interior of a comet (Tempel 1) by shooting an approximately 370 kg projectile onto the surface of the comet's nucleus. Although it is usually assumed that comets consist of a very porous mixture of water ice and rock, little is known about the internal structure and in particular the constitutive material properties of a comet. It is therefore difficult to predict the dimensions of the excavated crater. Estimates of the crater size are based on laboratory experiments of impacts into various target compositions of different densities and porosities using appropriate scaling laws; they range from tens of meters up to 250 m in diameter [1]. The size of the crater depends mainly on the physical process(es) that govern formation: smaller sizes are expected if (1) strength, rather than gravity, limits crater growth; and, perhaps even more crucially, if (2) internal energy losses by pore-space collapse reduce the coupling efficiency (compaction craters). To investigate the effect of pore-space collapse and target strength, we conducted a suite of numerical experiments and implemented a novel approach for modeling porosity and the compaction of pores in hydrocode calculations.

  16. Computational Modeling of Ablation on an Irradiated Target

    NASA Astrophysics Data System (ADS)

    Mehmedagic, Igbal; Thangam, Siva

    2017-11-01

    Computational modeling of pulsed nanosecond laser interaction with an irradiated metallic target is presented. The model formulation involves ablation of the metallic target irradiated by a pulsed high-intensity laser at normal atmospheric conditions. Computational findings based on effective representation and prediction of the heat transfer, melting, and vaporization of the target material, as well as plume formation and expansion, are presented along with their relevance for the development of protective shields. In this context, the available results for a representative irradiation from a 1064 nm laser pulse are used to analyze various ablation mechanisms, variable thermo-physical and optical properties, plume expansion, and surface geometry. Funded in part by U. S. Army ARDEC, Picatinny Arsenal, NJ.

  17. Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models

    PubMed Central

    Chen, Yang; Shen, Kuang

    2017-01-01

    To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680
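
    A minimal sketch of the base layer of such an analysis, assuming a plain discrete-emission HMM rather than the paper's full Bayesian hierarchical model: the scaled forward algorithm computes the log-likelihood of one discretized fluorescence trace. The two-state parameters and the observation sequence are illustrative.

      import numpy as np

      def forward_loglik(obs, pi, A, B):
          """Log-likelihood of a discrete observation sequence under an HMM
          via the scaled forward algorithm.
          obs: symbol indices; pi: initial state probs;
          A: state transition matrix; B: emission matrix (states x symbols)."""
          alpha = pi * B[:, obs[0]]
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
              s = alpha.sum()
              loglik += np.log(s)
              alpha /= s
          return loglik

      # Illustrative two-state model (e.g., "low" and "high" fluorescence levels).
      pi = np.array([0.6, 0.4])
      A = np.array([[0.95, 0.05],
                    [0.10, 0.90]])
      B = np.array([[0.8, 0.2],     # emission probabilities for state 0
                    [0.3, 0.7]])    # emission probabilities for state 1
      obs = [0, 0, 1, 1, 1, 0, 1, 1]
      print(forward_loglik(obs, pi, A, B))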

  18. Flyer Target Acceleration and Energy Transfer at its Collision with Massive Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borodziuk, S.; Kasperczuk, A.; Pisarczyk, T.

    2006-01-15

    Numerical modelling was aimed at simulation of the successive events resulting from the interaction of a laser beam with single and double targets. It was performed by means of the 2D Lagrangian hydrodynamics code ATLANT-HE. This code is based on a one-fluid, two-temperature model of plasma with electron and ion heat conductivity. The code has an advanced treatment of laser light propagation and absorption. This numerical modelling corresponds to the experiment, which was carried out with the use of the PALS facility. Two types of planar solid targets, i.e. single massive Al slabs and double targets consisting of 6 {mu}m thick Al foil and Al slab, were applied. The targets were irradiated by iodine laser pulses of two wavelengths: 1.315 and 0.438 {mu}m. A pulse duration of 0.4 ns and a focal spot diameter of 250 {mu}m at a laser energy of 130 J were used. The numerical modelling allowed us to obtain a more detailed description of shock wave propagation and crater formation.

  19. First Experiments with the Polarized Internal Gas Target (PIT) at ANKE/COSY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engels, R.; Lorentz, B.; Prasuhn, D.

    2008-02-06

    For future few-nucleon interaction studies with polarized beams and targets at COSY-Juelich, a polarized internal storage-cell gas target was implemented at the magnet spectrometer ANKE in summer 2005. First commissioning of the polarized Atomic Beam Source (ABS) at ANKE was carried out and some improvements of the system have been done. Storage-cell tests to determine the COSY beam dimensions have been performed. Electron cooling combined with stacking and stochastic cooling have been studied. Experiments with N{sub 2} gas in the storage cell to simulate the background produced by beam interaction with the aluminum cell walls were performed to investigate the beam heating by the target gas. The analysis of the d-vector p-vector {yields}dp and d-vector p-vector{yields}(dp{sub sp}){pi}{sup 0} reactions showed that events from the extended target can be clearly identified in the ANKE detector system. The polarization of the atomic beam of the ABS, positioned close to the strong dipole magnet D2 of ANKE, was tuned with a Lamb-shift polarimeter (LSP) beneath the target chamber. With use of the known analyzing powers of the quasi-free np{yields}d{pi}{sup 0} reaction, the polarization in the storage cell was measured to be Q{sub y} = 0.79{+-}0.07 in the vertical stray field of the D2 magnet acting as a holding field. The achieved target thickness was 2x10{sup 13} atoms/cm{sup 2} for one hyperfine state populated in the ABS beam only. With a COSY beam intensity of 6x10{sup 9} stored polarized deuterons in the ring, the luminosity for double polarized experiments was 1x10{sup 29} cm{sup -2} s{sup -1}.

  20. First Experiments with the Polarized Internal Gas Target (PIT) at ANKE/COSY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engels, R.; Lorentz, B.; Prasuhn, D.

    2009-08-04

    For future few-nucleon interaction studies with polarized beams and targets at COSY-Juelich, a polarized internal storage-cell gas target was implemented at the magnet spectrometer ANKE. First commissioning of the polarized Atomic Beam Source (ABS) at ANKE was carried out and some improvements of the system have been done. Storage-cell tests to determine the COSY beam dimensions have been performed. Electron cooling combined with stacking and stochastic cooling have been studied. Experiments with N{sub 2} gas in the storage cell to simulate the background produced by beam interaction with the aluminum cell walls were performed to investigate the beam heating by the target gas. The analysis of the d-vectorp-vector->dp and d-vectorp-vector->(dp{sub sp})pi{sup 0} reactions showed that events from different positions of the extended target can be clearly identified in the ANKE detector system. The polarization of the atomic beam of the ABS, positioned close to the strong dipole magnet D2 of ANKE, was tuned with a Lamb-shift polarimeter (LSP) beneath the target chamber. With use of the known analyzing powers of the quasi-free np->dpi{sup 0} reaction, the polarization in the storage cell was measured to be Q{sub y} = 0.79+-0.07 in the vertical stray field of the D2 magnet acting as a holding field. The target thickness achieved was 2x10{sup 13} atoms/cm{sup 2} for one hyperfine state populated in the ABS beam only. With a COSY beam intensity of 6x10{sup 9} stored polarized deuterons in the ring, the luminosity for double polarized experiments was 1x10{sup 29} cm{sup -2} s{sup -1}.

  1. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  2. Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling

    PubMed Central

    Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren

    2014-01-01

    Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136

  3. The DESI Experiment Part I: Science, Targeting, and Survey Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; et al.

    DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. To trace the underlying dark matter distribution, spectroscopic targets will be selected in four classes from imaging data. We will measure luminous red galaxies up to $z=1.0$. To probe the Universe out to even higher redshift, DESI will target bright [O II] emission line galaxies up to $z=1.7$. Quasars will be targeted both as direct tracers of the underlying dark matter distribution and, at higher redshifts ($2.1 < z < 3.5$), for the Ly-$\alpha$ forest absorption features in their spectra, which will be used to trace the distribution of neutral hydrogen. When moonlight prevents efficient observations of the faint targets of the baseline survey, DESI will conduct a magnitude-limited Bright Galaxy Survey comprising approximately 10 million galaxies with a median $z \approx 0.2$. In total, more than 30 million galaxy and quasar redshifts will be obtained to measure the BAO feature and determine the matter power spectrum, including redshift space distortions.

  4. Limitations of contrast enhancement for infrared target identification

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2009-05-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content. Automatic contrast enhancement techniques do not always achieve this improvement; in some cases, the contrast can increase to the point of target saturation. This paper assesses the range-performance effects of contrast enhancement for target identification as a function of image saturation. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set imaged with an uncooled LWIR camera. The experiments compare the identification performance of observers viewing contrast-enhancement-processed images at various levels of saturation. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance from any improvement in target contrast, regardless of specific feature saturation or enhancement. The measured results follow the predicted performance based on the target task difficulty metric used in NVThermIP for the non-saturated cases. The saturated images reduce the information contained in the target and performance suffers. The model treats the contrast of the target as uniform over spatial frequency, so any enhancement is assumed to act uniformly across spatial frequencies. After saturation, however, the spatial cues that differentiate one tank from another are confined to a limited band of spatial frequencies. A frequency-dependent treatment of target contrast is needed to predict performance on over-processed images.
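
    The saturation effect described above can be illustrated with a simple percentile-based contrast stretch: mild stretching improves contrast, while aggressive clipping saturates pixels and destroys within-target detail. This is a generic sketch, not the enhancement algorithms used in the experiments; all values are illustrative.

      import numpy as np

      def contrast_stretch(image, low_pct, high_pct):
          """Percentile-based contrast stretch. Small low/high percentiles give a
          mild stretch; aggressive values clip (saturate) many pixels and can
          erase the within-target detail needed for identification."""
          lo, hi = np.percentile(image, [low_pct, high_pct])
          out = (image - lo) / (hi - lo)
          return np.clip(out, 0.0, 1.0)

      # Illustrative synthetic thermal image values.
      rng = np.random.default_rng(0)
      img = rng.normal(0.5, 0.05, size=(128, 128))
      img[40:60, 40:60] += 0.2                  # warm "target" region
      mild = contrast_stretch(img, 1, 99)
      harsh = contrast_stretch(img, 30, 70)     # saturates much of the target
      print((mild == 1.0).mean(), (harsh == 1.0).mean())   # fraction saturated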

  5. Mathematical modeling of vesicle drug delivery systems 2: targeted vesicle interactions with cells, tumors, and the body.

    PubMed

    Ying, Chong T; Wang, Juntian; Lamm, Robert J; Kamei, Daniel T

    2013-02-01

    Vesicles have been studied for several years for their ability to deliver drugs. Mathematical models have much potential for reducing the time and resources required to engineer optimal vesicles, and this review article summarizes these models that aid in understanding the ability of targeted vesicles to bind and internalize into cancer cells, diffuse into tumors, and distribute in the body. With regard to binding and internalization, radiolabeling and surface plasmon resonance experiments can be performed to determine optimal vesicle size and the number and type of ligands conjugated. Binding and internalization properties are also inputs into a mathematical model of vesicle diffusion into tumor spheroids, which highlights the importance of the vesicle diffusion coefficient and the binding affinity of the targeting ligand. Biodistribution of vesicles in the body, along with their half-life, can be predicted with compartmental models for pharmacokinetics that include the effect of targeting ligands, and these predictions can be used in conjunction with in vivo models to aid in the design of drug carriers. Mathematical models can prove to be very useful in drug carrier design, and our hope is that this review will encourage more investigators to combine modeling with quantitative experimentation in the field of vesicle-based drug delivery.
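
    As a minimal sketch of the compartmental pharmacokinetic modeling mentioned above, the code below integrates a generic two-compartment (plasma/tumor) model with assumed first-order rate constants; it is not the specific model reviewed in the article.

      import numpy as np
      from scipy.integrate import solve_ivp

      def two_compartment(t, y, k_el, k_pt, k_tp):
          """Plasma/tumor compartment model for circulating vesicles.
          y = [plasma amount, tumor amount]; k_el: plasma elimination,
          k_pt: plasma -> tumor uptake, k_tp: tumor -> plasma washout."""
          plasma, tumor = y
          dplasma = -(k_el + k_pt) * plasma + k_tp * tumor
          dtumor = k_pt * plasma - k_tp * tumor
          return [dplasma, dtumor]

      # Illustrative rate constants (1/h) and a unit intravenous dose.
      sol = solve_ivp(two_compartment, (0.0, 48.0), [1.0, 0.0],
                      args=(0.15, 0.05, 0.02), dense_output=True)
      t = np.linspace(0.0, 48.0, 7)
      print(sol.sol(t))   # plasma and tumor amounts over time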

  6. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a

  7. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values

    NASA Astrophysics Data System (ADS)

    Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun

    2017-03-01

    To further enhance small targets while simultaneously suppressing heavy clutter, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which provides a more accurate background estimation and eliminates almost all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only removes further undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments were conducted, demonstrating that the proposed model offers a significant improvement over nine competing methods in terms of both clutter-suppression performance and convergence rate.
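
    The core subproblem behind the partial sum minimization of singular values is a modified singular-value thresholding step in which the largest singular values are left untouched and only the tail is shrunk. The sketch below shows that step on a synthetic patch-image; the rank, threshold, and data are assumptions for illustration, not the paper's full inexact-ALM algorithm.

      import numpy as np

      def partial_singular_value_threshold(M, N, tau):
          """Proximal step for the partial sum of singular values: the largest N
          singular values are kept, the remaining ones are soft-thresholded by tau.
          This plays the role of the low-rank background update in an ALM loop."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          s_new = s.copy()
          s_new[N:] = np.maximum(s[N:] - tau, 0.0)
          return (U * s_new) @ Vt

      # Small synthetic patch-image: rank-2 background plus a sparse "target".
      rng = np.random.default_rng(0)
      background = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 60))
      image = background.copy()
      image[10, 20] += 8.0                      # bright small target
      low_rank = partial_singular_value_threshold(image, N=2, tau=1.0)
      residual = image - low_rank               # target should stand out here
      print(abs(residual[10, 20]), np.median(abs(residual)))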

  8. Human genetics as a model for target validation: finding new therapies for diabetes.

    PubMed

    Thomsen, Soren K; Gloyn, Anna L

    2017-06-01

    Type 2 diabetes is a global epidemic with major effects on healthcare expenditure and quality of life. Currently available treatments are inadequate for the prevention of comorbidities, yet progress towards new therapies remains slow. A major barrier is the insufficiency of traditional preclinical models for predicting drug efficacy and safety. Human genetics offers a complementary model to assess causal mechanisms for target validation. Genetic perturbations are 'experiments of nature' that provide a uniquely relevant window into the long-term effects of modulating specific targets. Here, we show that genetic discoveries over the past decades have accurately predicted (now known) therapeutic mechanisms for type 2 diabetes. These findings highlight the potential for use of human genetic variation for prospective target validation, and establish a framework for future applications. Studies into rare, monogenic forms of diabetes have also provided proof-of-principle for precision medicine, and the applicability of this paradigm to complex disease is discussed. Finally, we highlight some of the limitations that are relevant to the use of genome-wide association studies (GWAS) in the search for new therapies for diabetes. A key outstanding challenge is the translation of GWAS signals into disease biology and we outline possible solutions for tackling this experimental bottleneck.

  9. Model-independent comparison of annual modulation and total rate with direct detection experiments

    NASA Astrophysics Data System (ADS)

    Kahlhoefer, Felix; Reindl, Florian; Schäffner, Karoline; Schmidt-Hoberg, Kai; Wild, Sebastian

    2018-05-01

    The relative sensitivity of different direct detection experiments depends sensitively on the astrophysical distribution and particle physics nature of dark matter, prohibiting a model-independent comparison. The situation changes fundamentally if two experiments employ the same target material. We show that in this case one can compare measurements of an annual modulation and exclusion bounds on the total rate while making no assumptions on astrophysics and no (or only very general) assumptions on particle physics. In particular, we show that the dark matter interpretation of the DAMA/LIBRA signal can be conclusively tested with COSINUS, a future experiment employing the same target material. We find that if COSINUS excludes a dark matter scattering rate of about 0.01 kg⁻¹ day⁻¹ with an energy threshold of 1.8 keV and resolution of 0.2 keV, it will rule out all explanations of DAMA/LIBRA in terms of dark matter scattering off sodium and/or iodine.

  10. Large-signal model of the bilayer graphene field-effect transistor targeting radio-frequency applications: Theory versus experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasadas, Francisco, E-mail: Francisco.Pasadas@uab.cat; Jiménez, David

    2015-12-28

    Bilayer graphene is a promising material for radio-frequency transistors because its energy gap might result in better current saturation than in monolayer graphene. Because of the great deal of interest in this technology, especially for flexible radio-frequency applications, gaining control of it requires the formulation of appropriate models for the drain current, charge, and capacitance. In this work, we have developed such models for a dual-gated bilayer graphene field-effect transistor. A drift-diffusion mechanism for the carrier transport has been considered, coupled with an appropriate field-effect model taking into account the electronic properties of the bilayer graphene. Extrinsic resistances have been included, considering the formation of a Schottky barrier at the metal-bilayer graphene interface. The proposed model has been benchmarked against experimental prototype transistors, discussing the main figures of merit targeting radio-frequency applications.

  11. NLTE atomic kinetics modeling in ICF target simulations

    NASA Astrophysics Data System (ADS)

    Patel, Mehul V.; Mauche, Christopher W.; Scott, Howard A.; Jones, Ogden S.; Shields, Benjamin T.

    2017-10-01

    Radiation hydrodynamics (HYDRA) simulations using recently developed 1D spherical and 2D cylindrical hohlraum models have enabled a reassessment of the accuracy of energetics modeling across a range of NIF target configurations. Higher-resolution hohlraum calculations generally find that the X-ray drive discrepancies are greater than previously reported. We identify important physics sensitivities in the modeling of the NLTE wall plasma and highlight sensitivity variations between different hohlraum configurations (e.g. hohlraum gas fill). Additionally, 1D capsule only simulations show the importance of applying a similar level of rigor to NLTE capsule ablator modeling. Taken together, these results show how improved target performance predictions can be achieved by performing inline atomic kinetics using more complete models for the underlying atomic structure and transitions. Prepared by LLNL under Contract DE-AC52-07NA27344.

  12. Chemical modification of projectile residues and target material in a MEMIN cratering experiment

    NASA Astrophysics Data System (ADS)

    Ebert, Matthias; Hecht, Lutz; Deutsch, Alexander; Kenkmann, Thomas

    2013-01-01

    In the context of the MEMIN project, a hypervelocity cratering experiment has been performed using a sphere of the iron meteorite Campo del Cielo as the projectile, accelerated to 4.56 km s⁻¹, and a block of Seeberger sandstone as the target material. The ejecta, collected in a newly designed catcher, are represented by (1) weakly deformed, (2) highly deformed, and (3) highly shocked material. The latter shows shock-metamorphic features such as planar deformation features (PDF) in quartz, formation of diaplectic quartz glass, partial melting of the sandstone, and partially molten projectile, mixed mechanically and chemically with target melt. During mixing of projectile and target melts, the Fe of the projectile is preferentially partitioned into the target melt to a greater degree than Ni and Co, yielding a Fe/Ni ratio that is generally higher than the Fe/Ni of the projectile. This fractionation results from the differing siderophile properties, specifically from differences in the reactivity of Fe, Ni, and Co with oxygen during projectile-target interaction. Projectile matter was also detected in shocked quartz grains. The average Fe/Ni of quartz with PDF (about 20) and of silica glasses (about 24) contrasts with the average sandstone ratio (about 422), but resembles the Fe/Ni ratio of the projectile (about 14). We briefly discuss possible reasons for projectile melting and vaporization in the experiment, in which the calculated maximum shock pressure does not exceed 55 GPa.

  13. The Sender-Receiver Model and the Targeting Process.

    ERIC Educational Resources Information Center

    Larson, Mark A.

    The goal of this paper is to describe how one classroom teacher uses the Sender-Receiver Communications Model to illustrate for students in a lively and memorable way the process of "targeting your audience" with medium and message. Students are used as examples of Receivers, or target audience, illustrating the potential range of…

  14. Analytical model for release calculations in solid thin-foils ISOL targets

    NASA Astrophysics Data System (ADS)

    Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.

    2016-10-01

    A detailed analytical model has been developed to simulate isotope-release curves from thin-foil ISOL targets. It involves separate modeling of diffusion and effusion inside the target. The former has been modeled using both Fick's first and second laws. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with the calculated in-target production.
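
    A minimal numerical sketch of the release-curve construction described above: the diffusion release rate from a thin foil (series solution of Fick's second law) is convolved with a double-exponential effusion delay distribution. The diffusion coefficient, foil thickness, and effusion time constants are assumed, illustrative values.

      import numpy as np

      # Time grid (s) and assumed, illustrative parameters.
      dt = 0.01
      t = np.arange(0.0, 60.0, dt)
      D, d = 1e-9, 20e-6              # diffusion coefficient (m^2/s), foil thickness (m)
      tau1, tau2, a = 0.5, 5.0, 0.7   # double-exponential effusion time constants, weight

      def released_fraction(t, D, d, nmax=200):
          """Fraction released by diffusion from a foil (Fick's 2nd law series)."""
          f = np.ones_like(t)
          for n in range(1, nmax, 2):          # odd terms only
              f -= (8.0 / (n * np.pi) ** 2) * np.exp(-(n * np.pi / d) ** 2 * D * t)
          return f

      p_diff = np.gradient(released_fraction(t, D, d), dt)     # diffusion release rate
      p_eff = a / tau1 * np.exp(-t / tau1) + (1 - a) / tau2 * np.exp(-t / tau2)

      # Overall release rate is the convolution of the two delay distributions.
      p_total = np.convolve(p_diff, p_eff)[: t.size] * dt
      release_curve = np.cumsum(p_total) * dt
      print(release_curve[-1])    # should approach 1 at long times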

  15. Capture of shrinking targets with realistic shrink patterns.

    PubMed

    Hoffmann, Errol R; Chan, Alan H S; Dizmen, Coskun

    2013-01-01

    Previous research [Hoffmann, E. R. 2011. "Capture of Shrinking Targets." Ergonomics 54 (6): 519-530] reported experiments on the capture of shrinking targets where the target decreased in size at a uniform rate. This work extended that research to targets having a shrink-size versus time pattern like that of an aircraft receding from an observer. In Experiment 1, the time to capture the target was well correlated with Fitts' index of difficulty, measured at the time of capture of the target, a result that is in agreement with the 'balanced' model of Johnson and Hart [Johnson, W. W., and Hart, S. G. 1987. "Step Tracking Shrinking Targets." Proceedings of the human factors society 31st annual meeting, New York City, October 1987, 248-252]. Experiment 2 measured the probability of target capture for varying initial target sizes and target shrink time constants, defined as the time for the target to shrink to half its initial size. Data on the shrink time constant for a 50% probability of capture were related to initial target size, but this did not greatly affect target capture, as the rate of target shrinking decreased rapidly with time.
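
    For reference, the sketch below evaluates Fitts' index of difficulty at the instant of capture for a target whose width halves every fixed interval, using the classic log2(2A/W) formulation (the abstract does not state which variant was used). All numerical values are illustrative.

      import numpy as np

      def width_at(t, w0, t_half):
          """Target width at time t for a target that halves its size every
          t_half seconds (the 'shrink time constant')."""
          return w0 * 0.5 ** (t / t_half)

      def fitts_id(amplitude, width):
          """Classic Fitts index of difficulty, ID = log2(2A/W)."""
          return np.log2(2.0 * amplitude / width)

      # Illustrative values: 200 mm movement amplitude, 20 mm initial width,
      # width halving every 1.5 s, capture occurring at t = 1.0 s.
      A, w0, t_half, t_capture = 200.0, 20.0, 1.5, 1.0
      print(fitts_id(A, width_at(t_capture, w0, t_half)))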

  16. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
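
    A minimal sketch of the final classification step described above, assuming the Rician comparison reduces to scoring a measured amplitude sequence against each candidate's simulated RCS-based amplitudes with a Rician likelihood (scipy.stats.rice). The amplitude values and noise level are illustrative, not outputs of FISC, AREPS, or NEC2.

      import numpy as np
      from scipy.stats import rice

      def log_likelihood(measured_amp, predicted_amp, sigma):
          """Rician log-likelihood of measured echo amplitudes given the predicted
          (simulated) amplitudes for one candidate target class."""
          b = np.asarray(predicted_amp) / sigma        # noncentrality per sample
          return rice.logpdf(measured_amp, b, scale=sigma).sum()

      # Illustrative numbers: measured amplitudes along a track, and simulated
      # amplitude profiles for two hypothetical target classes.
      measured = np.array([1.2, 1.0, 1.4, 1.1, 0.9])
      candidates = {
          "class_A": np.array([1.1, 1.0, 1.3, 1.2, 1.0]),
          "class_B": np.array([0.5, 0.6, 0.5, 0.7, 0.6]),
      }
      sigma = 0.2
      scores = {name: log_likelihood(measured, pred, sigma)
                for name, pred in candidates.items()}
      print(max(scores, key=scores.get))   # most likely class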

  17. Two-Plasmon Decay Mitigation in Direct-Drive Inertial-Confinement-Fusion Experiments Using Multilayer Targets.

    PubMed

    Follett, R K; Delettrez, J A; Edgell, D H; Goncharov, V N; Henchen, R J; Katz, J; Michel, D T; Myatt, J F; Shaw, J; Solodov, A A; Stoeckl, C; Yaakobi, B; Froula, D H

    2016-04-15

    Multilayer direct-drive inertial-confinement-fusion targets are shown to significantly reduce two-plasmon decay (TPD) driven hot-electron production while maintaining high hydrodynamic efficiency. Implosion experiments on the OMEGA laser used targets with silicon layered between an inner beryllium and outer silicon-doped plastic ablator. A factor-of-5 reduction in hot-electron generation (>50  keV) was observed in the multilayer targets relative to pure CH targets. Three-dimensional simulations of the TPD-driven hot-electron production using a laser-plasma interaction code (lpse) that includes nonlinear and kinetic effects show good agreement with the measurements. The simulations suggest that the reduction in hot-electron production observed in the multilayer targets is primarily caused by increased electron-ion collisional damping.

  18. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for the NVESD target acquisition performance model suite.
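
    A minimal sketch of the 3-D noise estimation step described above, assuming the equivalent Gaussian blur is applied per frame and the residual is the difference from the compressed frame. The blur width and the synthetic frames are illustrative stand-ins for real video and the SSIM-derived blur.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def compression_noise(uncompressed, compressed, equiv_sigma):
          """Residual 'noise' frames: difference between the uncompressed video
          with the equivalent Gaussian blur applied and the compressed video.
          Arrays are (frames, rows, cols); equiv_sigma is in pixels."""
          blurred = np.stack([gaussian_filter(f, equiv_sigma) for f in uncompressed])
          return compressed - blurred

      # Illustrative synthetic data standing in for real video frames.
      rng = np.random.default_rng(0)
      clean = rng.uniform(0, 1, size=(8, 64, 64))
      compressed = gaussian_filter(clean, (0, 1.2, 1.2)) + 0.01 * rng.standard_normal(clean.shape)
      residual = compression_noise(clean, compressed, equiv_sigma=1.2)
      print(residual.std())   # spatio-temporal noise level fed to the model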

  19. Chemical projectile-target interaction during hypervelocity cratering experiments (MEMIN project).

    NASA Astrophysics Data System (ADS)

    Ebert, M.; Hecht, L.; Deutsch, A.; Kenkmann, T.

    2012-04-01

    The detection and identification of meteoritic components in impact-derived rocks are of great value for confirming an impact origin and reconstructing the type of extraterrestrial material that repeatedly struck the Earth during geologic evolution [1]. However, little is known about the processes that control the projectile distribution into the various impactites that originate during the cratering and excavation process, and about inter-element fractionation between siderophile elements during impact cratering. In the context of the MEMIN project, cratering experiments have been performed using spheres of Cr-V-Co-Mo-W-rich steel and of the iron meteorite Campo del Cielo (IAB) as projectiles accelerated to about 5 km/s, and blocks of Seeberger sandstone as targets. The experiments were carried out at the two-stage acceleration facilities of the Fraunhofer Ernst-Mach-Institute (Freiburg). Our results are based on geochemical analyses of highly shocked ejecta material. The ejecta show various shock features including multiple sets of planar deformation features (PDF) in quartz, diaplectic quartz, and partial melting of the sandstone. Melting is concentrated in the phyllosilicate-bearing sandstone matrix but involves quartz, too. Droplets of molten projectile have entered the low-viscosity sandstone melt but not the quartz glass. Silica-rich sandstone melts are enriched in the elements that are used to trace the projectile, such as Fe, Ni, Cr, Co, and V (but little or no W and Mo). Inter-element ratios of these "projectile" tracer elements within the contaminated sandstone melt may be strongly modified from the original ratios in the projectiles. This fractionation most likely results from variations in the lithophile or siderophile character and/or from differences in the reactivity of these tracer elements with oxygen [2] during the interaction of metal melt with silicate melt. The shocked quartz with PDF is also enriched in Fe and Ni (experiment with a meteorite iron projectile) and in Fe

  20. Optical model calculations of heavy-ion target fragmentation

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.

    1986-01-01

    The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.
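
    The hypergeometric charge dispersion mentioned above can be written down directly: if a given number of nucleons are abraded at random from a nucleus with Z protons and A nucleons, the number of removed protons follows a hypergeometric distribution. The sketch below assumes that simple sampling picture; the parameter values are illustrative only.

```python
from scipy.stats import hypergeom

def prefragment_charge_distribution(Z, A, n_abraded):
    """Hypergeometric charge dispersion: probability that z protons are among
    the n_abraded nucleons removed from a nucleus with Z protons and A nucleons."""
    dist = hypergeom(M=A, n=Z, N=n_abraded)
    z_min = max(0, n_abraded - (A - Z))
    z_max = min(Z, n_abraded)
    return {z: dist.pmf(z) for z in range(z_min, z_max + 1)}

# Example: 5 nucleons abraded from a copper target nucleus (Z=29, A=63)
probs = prefragment_charge_distribution(Z=29, A=63, n_abraded=5)
```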

  1. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...

  2. The Fixed Target Experiment for Studies of Baryonic Matter at the Nuclotron (BM@N)

    NASA Astrophysics Data System (ADS)

    Kapishin, M. N.

    2017-12-01

    BM@N (Baryonic Matter at Nuclotron) is the first experiment to be realized at the NICA-Nuclotron accelerator complex. The aim of the BM@N experiment is to study relativistic heavy ion beam interactions with fixed targets. The BM@N setup, results of Monte Carlo simulations, and the BM@N experimental program are presented.

  3. Impacts into quartz sand: Crater formation, shock metamorphism, and ejecta distribution in laboratory experiments and numerical models

    NASA Astrophysics Data System (ADS)

    Wünnemann, Kai; Zhu, Meng-Hua; Stöffler, Dieter

    2016-10-01

    We investigated the ejection mechanics by a complementary approach of cratering experiments, including the microscopic analysis of material sampled from these experiments, and 2-D numerical modeling of vertical impacts. The study is based on cratering experiments in quartz sand targets performed at the NASA Ames Vertical Gun Range. In these experiments, the preimpact location in the target and the final position of ejecta were determined by using color-coded sand and a catcher system for the ejecta. The results were compared with numerical simulations of the cratering and ejection process to validate the iSALE shock physics code. In turn, the models provide further details on the ejection velocities and angles. We quantify the general assumption that ejecta thickness decreases with distance according to a power law and that the relative proportion of shocked material in the ejecta increases with distance. We distinguish three types of shock metamorphic particles: (1) melt particles, (2) shock-lithified aggregates, and (3) shock-comminuted grains. The agreement between experiment and model was excellent, which provides confidence that the models can predict ejection angles, velocities, and the degree of shock loading of material expelled from a crater accurately if impact parameters such as impact velocity, impactor size, and gravity are varied beyond the experimental limitations. This study is relevant for a quantitative assessment of impact gardening on planetary surfaces and the evolution of regolith layers on atmosphereless bodies.
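
    The power-law thinning of the ejecta blanket quantified above can be fit with an ordinary log-log least-squares regression, as in the sketch below. The numbers are hypothetical catcher data used only to show the shape of the calculation.

```python
import numpy as np

def fit_ejecta_power_law(radial_distance, thickness):
    """Fit thickness ~ A * r**(-b) by linear least squares on log-log axes."""
    slope, intercept = np.polyfit(np.log(radial_distance), np.log(thickness), 1)
    return np.exp(intercept), -slope   # (A, b)

# Hypothetical catcher data: ejecta thinning with distance from the crater center
r = np.array([1.5, 2.0, 3.0, 4.5, 6.0])      # distance in crater radii
t = np.array([2.1, 1.0, 0.35, 0.12, 0.06])   # thickness, arbitrary units
A, b = fit_ejecta_power_law(r, t)
```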

  4. First Results of the Regional Earthquake Likelihood Models Experiment

    NASA Astrophysics Data System (ADS)

    Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.

    2010-08-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.

  5. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg, 2009; Morgon 2002, 2003, 2005; Gula 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between

  6. Penetration experiments in aluminum and Teflon targets of widely variable thickness

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Cintala, Mark J.; Bernhard, R. P.; See, T. H.

    1994-01-01

    The morphologies and detailed dimensions of hypervelocity craters and penetration holes on space-exposed surfaces faithfully reflect the initial impact conditions. However, current understanding of this postmortem evidence and its relation to such first-order parameters as impact velocity or projectile size and mass is incomplete. While considerable progress is being made in the numerical simulation of impact events, continued impact simulations in the laboratory are needed to obtain empirical constraints and insights. This contribution summarizes such experiments with Al and Teflon targets that were carried out in order to provide a better understanding of the craters and penetration holes reported from the Solar Maximum Mission (SMM) and the Long Duration Exposure Facility (LDEF) satellites. A 5-mm light-gas gun was used to fire spherical soda-lime glass projectiles from 50 to 3175 microns in diameter (D_P), at a nominal 6 km/s, into Al (1100 series; annealed) and Teflon (TFE) targets. Targets ranged in thickness (T) from infinite halfspace targets (T on the order of centimeters) to ultrathin foils (T on the order of microns), yielding up to 3 orders of magnitude variation in absolute and relative (D_P/T) target thickness. This experimental matrix simulates the wide range in D_P/T experienced by a space-exposed membrane of constant T that is being impacted by projectiles of widely varying sizes.

  7. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Itô stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of adjustment of a firm's target ratio, the volatility of the leverage ratio, and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
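
    A minimal way to explore dynamics of this kind is to integrate a single mean-reverting stochastic differential equation toward a prescribed target path with the Euler-Maruyama scheme, as sketched below. This is not the paper's coupled ensemble/Fokker-Planck formulation; the drift form, parameters, and target path are assumptions for illustration.

```python
import numpy as np

def simulate_leverage(l0, target_path, kappa, sigma, dt=1 / 252, seed=0):
    """Euler-Maruyama for dL_t = kappa*(theta_t - L_t)*dt + sigma*dW_t,
    a mean-reverting proxy for a leverage ratio pulled toward a target theta_t."""
    rng = np.random.default_rng(seed)
    L = np.empty(len(target_path))
    L[0] = l0
    for i in range(1, len(target_path)):
        drift = kappa * (target_path[i - 1] - L[i - 1])
        L[i] = L[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return L

# Hypothetical rising target ratio over five years of daily steps
theta = np.linspace(0.35, 0.50, 5 * 252)
path = simulate_leverage(l0=0.30, target_path=theta, kappa=2.0, sigma=0.05)
```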

  8. Two-plasmon decay mitigation in direct-drive inertial-confinement-fusion experiments using multilayer targets

    DOE PAGES

    Follett, R. K.; Delettrez, J. A.; Edgell, D. H.; ...

    2016-04-15

    Multilayer direct-drive inertial-confinement-fusion (ICF) targets are shown to significantly reduce two-plasmon-decay (TPD) driven hot-electron production while maintaining high hydrodynamic efficiency. Implosion experiments on the OMEGA Laser used targets with silicon layered between an inner beryllium and outer silicon-doped plastic ablator. A factor of five reduction in hot-electron generation (> 50 keV) was observed in the multilayer targets relative to pure CH targets. Three-dimensional simulations of the TPD driven hot-electron production using a laser-plasma interaction code (LPSE) that includes nonlinear and kinetic effects show excellent agreement with the measurements. As a result, the simulations suggest that the reduction in hot-electron production observed in the multilayer targets is primarily due to increased electron-ion collisional damping.

  9. Integrated modeling/analyses of thermal-shock effects in SNS targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taleyarkhan, R.P.; Haines, J.

    1996-06-01

    In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten, uranium, etc. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and their interaction with surrounding materials for liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for integrated assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise developed from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike previous separate-effects modeling approaches employed (for evaluating target behavior when subjected to thermal shocks), the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for materials of interest (viz., mercury, tungsten, and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure pulse generation due to transient heat deposition, the transport of this shock wave outwards, interaction with surrounding boundaries, feedback to mercury from structures, multi-dimensional reflection patterns, and stress-induced (possible) breakup or fracture.

  10. A computer program to determine the possible daily release window for sky target experiments

    NASA Technical Reports Server (NTRS)

    Michaud, N. H.

    1973-01-01

    A computer program is presented which is designed to determine the daily release window for sky target experiments. Factors considered in the program include: (1) target illumination by the sun at release time and during the tracking period; (2) look angle elevation above local horizon from each tracking station to the target; (3) solar depression angle from the local horizon of each tracking station during the experimental period after target release; (4) lunar depression angle from the local horizon of each tracking station during the experimental period after target release; and (5) total sky background brightness as seen from each tracking station while viewing the target. Program output is produced in both graphic and data form. Output data can be plotted for a single calendar month or year. The numerical values used to generate the plots are furnished to permit a more detailed review of the computed daily release windows.
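
    One of the listed criteria, the solar depression angle at a tracking station, can be checked with modern ephemeris tools. The sketch below uses astropy rather than anything from the original 1973 program; the station coordinates, epoch, and 12-degree threshold are illustrative assumptions.

```python
from astropy.coordinates import EarthLocation, AltAz, get_sun
from astropy.time import Time
import astropy.units as u

def solar_depression_ok(station_lat_deg, station_lon_deg, utc_iso, min_depression_deg=12.0):
    """True if the Sun is at least `min_depression_deg` below the local horizon,
    one of several release-window criteria listed in the program description."""
    site = EarthLocation(lat=station_lat_deg * u.deg, lon=station_lon_deg * u.deg)
    t = Time(utc_iso)
    sun_alt = get_sun(t).transform_to(AltAz(obstime=t, location=site)).alt
    return sun_alt < -min_depression_deg * u.deg

# Hypothetical tracking station and epoch
ok = solar_depression_ok(32.4, -106.3, "2024-03-01T02:30:00")
```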

  11. Using habitat suitability models to target invasive plant species surveys.

    PubMed

    Crall, Alycia W; Jarnevich, Catherine S; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m2 resolution for Wisconsin and 1-km2 resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling did detect more species than nontargeted sampling with less

  12. EMC3-EIRENE modelling of toroidally-localized divertor gas injection experiments on Alcator C-Mod

    DOE PAGES

    Lore, Jeremy D.; Reinke, M. L.; LaBombard, Brian; ...

    2014-09-30

    Experiments on Alcator C-Mod with toroidally and poloidally localized divertor nitrogen injection have been modeled using the three-dimensional edge transport code EMC3-EIRENE to elucidate the mechanisms driving measured toroidal asymmetries. In these experiments five toroidally distributed gas injectors in the private flux region were sequentially activated in separate discharges resulting in clear evidence of toroidal asymmetries in radiated power and nitrogen line emission as well as a ~50% toroidal modulation in electron pressure at the divertor target. The pressure modulation is qualitatively reproduced by the modelling, with the simulation yielding a toroidal asymmetry in the heat flow to the outer strike point. Finally, toroidal variation in impurity line emission is qualitatively matched in the scrape-off layer above the strike point; however, kinetic corrections and cross-field drifts are likely required to quantitatively reproduce impurity behavior in the private flux region and electron temperatures and densities directly in front of the target.

  13. Action-perception dissociation in response to target acceleration.

    PubMed

    Dubrowski, Adam; Carnahan, Heather

    2002-05-01

    The purpose of this study was to investigate whether information about the acceleration characteristics of a moving target can be used for both action and perception. Also of interest was whether prior movement experience altered perceptual judgements. Participants manually intercepted targets moving with various acceleration, velocity and movement time characteristics. They also made perceptual judgements about the acceleration characteristics of these targets either with or without prior manual interception experience. Results showed that while aiming kinematics were sensitive to the acceleration characteristics of the target, participants were only able to perceptually discriminate the velocity characteristics of target motion, even after performing interceptive actions to the same targets. These results are discussed in terms of a two channel (action-perception) model of visuomotor control.

  14. Target-mediated drug disposition model for drugs with two binding sites that bind to a target with one binding site.

    PubMed

    Gibiansky, Leonid; Gibiansky, Ekaterina

    2017-10-01

    The paper extended the TMDD model to drugs with two identical binding sites (2-1 TMDD). The quasi-steady-state (2-1 QSS), quasi-equilibrium (2-1 QE), irreversible binding (2-1 IB), and Michaelis-Menten (2-1 MM) approximations of the model were derived. Using simulations, the 2-1 QSS approximation was compared with the full 2-1 TMDD model. As expected and similarly to the standard TMDD for monoclonal antibodies (mAb), 2-1 QSS predictions were nearly identical to 2-1 TMDD predictions, except for times of fast changes following initiation of dosing, when equilibrium has not yet been reached. To illustrate properties of new equations and approximations, several variations of population PK data for mAbs with soluble (slow elimination of the complex) or membrane-bound (fast elimination of the complex) targets were simulated from a full 2-1 TMDD model and fitted to 2-1 TMDD models, to its approximations, and to the standard (1-1) QSS model. For a mAb with a soluble target, it was demonstrated that the 2-1 QSS model provided nearly identical description of the observed (simulated) free drug and total target concentrations, although there was some minor bias in predictions of unobserved free target concentrations. The standard QSS approximation also provided a good description of the observed data, but was not able to distinguish between free drug concentrations (with no target attached and both binding site free) and partially bound drug concentrations (with one of the binding sites occupied by the target). For a mAb with a membrane-bound target, the 2-1 MM approximation adequately described the data. The 2-1 QSS approximation converged 10 times faster than the full 2-1 TMDD, and its run time was comparable with the standard QSS model.
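
    For orientation, the standard one-binding-site TMDD system that the 2-1 model generalizes can be integrated numerically as below. The rate constants and dose are illustrative values, not estimates from the paper, and the 2-1 equations themselves (two binding sites on the drug) are not reproduced here.

```python
from scipy.integrate import solve_ivp

def tmdd_rhs(t, y, kel, kon, koff, kint, ksyn, kdeg):
    """Standard (1-1) TMDD: free drug C, free target R, drug-target complex RC."""
    C, R, RC = y
    dC = -kel * C - kon * C * R + koff * RC
    dR = ksyn - kdeg * R - kon * C * R + koff * RC
    dRC = kon * C * R - koff * RC - kint * RC
    return [dC, dR, dRC]

# Illustrative parameters (not fitted values from the paper)
params = dict(kel=0.1, kon=1.0, koff=0.01, kint=0.05, ksyn=0.11, kdeg=0.1)
y0 = [10.0, params["ksyn"] / params["kdeg"], 0.0]   # bolus dose, target at baseline
sol = solve_ivp(tmdd_rhs, (0, 200), y0, args=tuple(params.values()), rtol=1e-8)
```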

  15. A Balanced Portfolio Model For Improving Health: Concept And Vermont's Experience.

    PubMed

    Hester, James

    2018-04-01

    A successful strategy for improving population health requires acting in several sectors by implementing a portfolio of interventions. The mix of interventions should be both tailored to meet the community's needs and balanced in several dimensions, for example, time frame, level of risk, and target population. One obstacle is finding sustainable financing for both the interventions and the community infrastructure needed. This article first summarizes Vermont's experience as a laboratory for health reform. It then presents a conceptual model for a community-based population health strategy, using a balanced portfolio and diversified funding approaches. The article then reviews Vermont's population health initiative, including an example of a balanced portfolio and lessons learned from the state's experience.

  16. The polarized atomic-beam target for the EDDA experiment and the time-reversal invariance test at COSY

    NASA Astrophysics Data System (ADS)

    Eversheim, P. D.; Altmeier, M.; Felden, O.

    1997-02-01

    For the EDDA experiment, which was set up to measure the polarized proton-proton excitation function during the acceleration ramp of the cooler synchrotron COSY at Jülich, a polarized atomic-beam target was designed with regard to the restrictions imposed by the geometry of the EDDA detector. Later, when the time-reversal invariance experiment is to be performed, the EDDA detector will serve as an efficient internal polarimeter and the source has to deliver tensor-polarized deuterons. The modular design of this polarized atomic-beam target, which allows these conditions to be met, will be discussed in comparison to other existing polarized atomic-beam targets.

  17. A T0/Trigger detector for the External Target Experiment at CSR

    NASA Astrophysics Data System (ADS)

    Hu, D.; Shao, M.; Sun, Y.; Li, C.; Chen, H.; Tang, Z.; Zhang, Y.; Zhou, J.; Zeng, H.; Zhao, X.; You, W.; Song, G.; Deng, P.; Lu, J.; Zhao, L.

    2017-06-01

    A new T0/Trigger detector based on multi-gap resistive plate chamber (MRPC) technology has been constructed and tested for the external target experiment (ETE) at HIRFL-CSR. It measures the multiplicity and timing information of particles produced in heavy-ion collisions at the target region, providing the event collision time (T0) and collision centrality with high precision. Monte Carlo simulation shows that a time resolution of several tens of picoseconds can be achieved for central collisions. Experimental tests of this prototype detector have been performed at the CSR-ETE, and the preliminary results demonstrate the performance of the T0/Trigger detector.

  18. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    NASA Astrophysics Data System (ADS)

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-11-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to clinical practice, inaccurate targeting in rodents usually remains unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability but also systematic inaccuracy in targeting. Only about 30% of electrodes were within the targeted subnucleus, and non-specific adverse effects were also noted. Shifting from invasive, subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may enable more effective use of the experimental data by excluding off-target cases early in the study.

  19. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is an effective method for analyzing human motor functions. In a previous simulation study, we examined the possibility of improving hand movement on visual target tracking with additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated through experiments with four healthy adults. The proposed compensator clearly improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed from measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects are reflected in the structure of the compensator. The proposed method therefore has the potential to accommodate the individual characteristics of patients with various movement disorders caused by brain dysfunction.

  20. Improved protein model quality assessments by changing the target function.

    PubMed

    Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne

    2018-06-01

    Protein model quality assessment is an important part of protein structure prediction. We have for more than a decade developed a set of methods for this problem. We have used various types of description of the protein and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model. In all versions of ProQ the target function has been the S-score. However, other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as the S-score, are based on a rigid-body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based methods are easier to predict and that predictors trained on these measures provide some advantages when it comes to identifying the best model. One possible reason for this is that contact-based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score global model quality. To take advantage of both of these features we provide an updated version of ProQ3D that predicts local and global model quality estimates based on different quality estimates. © 2018 Wiley Periodicals, Inc.
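
    The superposition-based S-score referred to above is commonly written as S_i = 1/(1 + (d_i/d_0)^2), with d_i the distance between a model residue and its native position after superposition and d_0 a scaling constant (often 3 Å). The sketch below assumes that form; it is not code from ProQ3D.

```python
import numpy as np

def s_score(distances, d0=3.0):
    """Per-residue superposition-based quality, S_i = 1 / (1 + (d_i/d0)^2),
    where d_i is the residue's distance to the native structure after superposition."""
    d = np.asarray(distances, dtype=float)
    return 1.0 / (1.0 + (d / d0) ** 2)

# Residues far from their native positions score near 0, well-placed ones near 1
local_quality = s_score([0.5, 1.2, 3.0, 8.0])
global_quality = local_quality.mean()
```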

  1. Manufacturing of calcium, lithium and molybdenum targets for use in nuclear physics experiments

    NASA Astrophysics Data System (ADS)

    Kheswa, N. Y.; Papka, P.; Buthelezi, E. Z.; Lieder, R. M.; Neveling, R.; Newman, R. T.

    2010-02-01

    This paper describes methods used in the manufacturing of chemically reactive targets such as calcium (natCa), lithium-6 (6Li) and molybdenum-97 (97Mo) for nuclear physics experiments at the iThemba LABS cyclotron facility (Faure, South Africa). Due to the chemical properties of these materials, a suitable and controlled environment was established in order to minimize oxygen contamination of the targets. Calcium was prepared by means of vacuum evaporation while lithium was cold rolled to the desired thickness. In the case of molybdenum, the metallic powder was melted under vacuum using an e-gun followed by cold rolling of the metal bead to the desired thickness. In addition, the latest developments toward the establishment of a dedicated nuclear physics target laboratory are discussed.

  2. Geometric Model for Tracker-Target Look Angles and Line of Sight Distance

    DTIC Science & Technology

    2015-10-20

    412TW-PA-15239: Geometric Model for Tracker-Target Look Angles and Line of Sight Distance, Daniel T. Laird, Air Force Test Center, Edwards AFB, 15-23 October 2015 (record contains only Standard Form 298 documentation fields; no abstract is available).

  3. Physics perspectives with AFTER@LHC (A Fixed Target ExpeRiment at LHC)

    NASA Astrophysics Data System (ADS)

    Massacrier, L.; Anselmino, M.; Arnaldi, R.; Brodsky, S. J.; Chambert, V.; Da Silva, C.; Didelez, J. P.; Echevarria, M. G.; Ferreiro, E. G.; Fleuret, F.; Gao, Y.; Genolini, B.; Hadjidakis, C.; Hřivnáčová, I.; Kikola, D.; Klein, A.; Kurepin, A.; Kusina, A.; Lansberg, J. P.; Lorcé, C.; Lyonnet, F.; Martinez, G.; Nass, A.; Pisano, C.; Robbe, P.; Schienbein, I.; Schlegel, M.; Scomparin, E.; Seixas, J.; Shao, H. S.; Signori, A.; Steffens, E.; Szymanowski, L.; Topilskaya, N.; Trzeciak, B.; Uggerhøj, U. I.; Uras, A.; Ulrich, R.; Wagner, J.; Yamanaka, N.; Yang, Z.

    2018-02-01

    AFTER@LHC is an ambitious fixed-target project aiming to address open questions in the domain of proton and neutron spins, the Quark Gluon Plasma, and high-x physics, at the highest energy ever reached in the fixed-target mode. Indeed, thanks to the highly energetic 7 TeV proton and 2.76 A TeV lead LHC beams, center-of-mass energies as large as √s_NN = 115 GeV in pp/pA and √s_NN = 72 GeV in AA can be reached, corresponding to an uncharted energy domain between SPS and RHIC. We report two main ways of performing fixed-target collisions at the LHC, both allowing for the usage of one of the existing LHC experiments. In these proceedings, after discussing the projected luminosities considered for one year of data taking at the LHC, we present a selection of projections for light- and heavy-flavour production.

  4. Identification of ground targets from airborne platforms

    NASA Astrophysics Data System (ADS)

    Doe, Josh; Boettcher, Evelyn; Miller, Brian

    2009-05-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly EO/IR systems are being used on manned and un-manned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation division has been tasked to compare target identification performance between ground-to-ground and air-to-ground platforms for both IR and visible spectra for a set of wheeled utility vehicles. To measure performance, several forced choice experiments were designed and administered and the results analyzed. This paper describes these experiments and reports the results as well as the NVTherm model calibration factors derived for the infrared imagery.

  5. Using habitat suitability models to target invasive plant species surveys

    USGS Publications Warehouse

    Crall, Alycia W.; Jarnevich, Catherine S.; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m2 resolution for Wisconsin and 1-km2 resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling detected more species than nontargeted sampling with less sampling effort (χ²(2) = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be

  6. VALIDATION OF THE CORONAL THICK TARGET SOURCE MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.

    2016-01-10

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected such as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  7. The muon component in extensive air showers and new p+C data in fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meurer, C.; Bluemer, J.; Engel, R.

    2007-03-19

    One of the most promising approaches to determine the energy spectrum and composition of cosmic rays with energies above 10^15 eV is the measurement of the number of electrons and muons produced in extensive air showers (EAS). Therefore, simulations of air showers using electromagnetic and hadronic interaction models are necessary. These simulations show uncertainties which come mainly from the hadronic interaction models. One aim of this work is to specify the low-energy hadronic interactions which are important for muon production in EAS. Therefore we simulate extensive air showers with a modified version of the simulation package CORSIKA. In particular we investigate in detail the energy and phase space regions of secondary particle production which are most important for muon production. This phase space region is covered by fixed target experiments at CERN. In the second part of this work we present preliminary momentum spectra of secondary π+ and π− in p+C collisions at 12 GeV/c measured with the HARP spectrometer at the PS accelerator at CERN. In addition we use the new p+C NA49 data at 158 GeV/c to check the reliability of hadronic interaction models for muon production in EAS. Finally, possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.

  8. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

    Target feature extraction and identification are important and complicated steps in target interpretation; they directly affect the interpreter's psychosensorial response to the target infrared image and ultimately determine whether the target is detected. Using statistical decision theory and psychological principles, and drawing on four psychophysical experiments, an interpretation model of the infrared target is established. The model obtains the target detection probability by calculating the similarity of four features between the target region and the background region delineated on the infrared image. Verified against a large number of practical target interpretations, the model simulates the target interpretation and detection process effectively and yields objective interpretation results, providing technical support for target extraction, identification, and decision making.

  9. Polar-direct-drive experiments with contoured-shell targets on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, F. J.; Radha, P. B.; Bonino, M. J.

    Polar-driven direct-drive experiments recently performed on the OMEGA Laser System have demonstrated the efficacy of using a target with a contoured shell with varying thickness to improve the symmetry and fusion performance of the implosion. The polar-driven contoured-shell implosions have substantially reduced low mode perturbations compared to polar-driven spherical-shell implosions as diagnosed by x-ray radiographs up to shell stagnation. As a result, fusion yields were increased by more than a factor of ~2 without increasing the energy of the laser by the use of contoured shells.

  10. Polar-direct-drive experiments with contoured-shell targets on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, F. J.; Radha, P. B.; Bonino, M. J.

    Polar-driven direct-drive experiments recently performed on the OMEGA Laser System have demonstrated the efficacy of using a target with a contoured shell with varying thickness to improve the symmetry and fusion performance of the implosion. The polar-driven contoured-shell implosions have substantially reduced low mode perturbations compared to polar-driven spherical-shell implosions as diagnosed by x-ray radiographs up to shell stagnation. Fusion yields were increased by more than a factor of ∼2 without increasing the energy of the laser by the use of contoured shells.

  11. Polar-direct-drive experiments with contoured-shell targets on OMEGA

    DOE PAGES

    Marshall, F. J.; Radha, P. B.; Bonino, M. J.; ...

    2016-01-28

    Polar-driven direct-drive experiments recently performed on the OMEGA Laser System have demonstrated the efficacy of using a target with a contoured shell with varying thickness to improve the symmetry and fusion performance of the implosion. The polar-driven contoured-shell implosions have substantially reduced low mode perturbations compared to polar-driven spherical-shell implosions as diagnosed by x-ray radiographs up to shell stagnation. As a result, fusion yields were increased by more than a factor of ~2 without increasing the energy of the laser by the use of contoured shells.

  12. Modeling and validation of photometric characteristics of space targets oriented to space-based observation.

    PubMed

    Wang, Hongyuan; Zhang, Wei; Dong, Aotuo

    2012-11-10

    A modeling and validation method for the photometric characteristics of space targets was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry of the target was described by surface equations in its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which accounts for Gaussian surface statistics and microscale self-shadowing and is obtained by measurement and modeling in advance. The surfaces of the target contributing to the observation system were determined by coordinate transformation according to the relative positions of the space-based target, the background radiation sources, and the observation platform. A mathematical model of the photometric characteristics of the space target was then built by summing the reflection components of all contributing surfaces. Photometric characteristics of the space-based target were simulated from its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was performed on a scale model of the satellite. The calculated results agree well with the measured results, which indicates that the modeling method for the photometric characteristics of the space target is correct.
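
    The facet-summation step described above can be illustrated with a purely Lambertian stand-in for the measured BRDF: sum the contributions of all facets that are both sunlit and visible to the observer. The facet normals, areas, albedo, and geometry below are hypothetical inputs, not the paper's satellite model.

```python
import numpy as np

def facet_brightness(normals, areas, albedo, sun_dir, view_dir, distance):
    """Sum Lambertian reflection from all facets that both see the Sun and face
    the observer (a simplified stand-in for the measured BRDF model)."""
    cos_i = normals @ sun_dir          # illumination cosine per facet
    cos_r = normals @ view_dir         # viewing cosine per facet
    visible = (cos_i > 0) & (cos_r > 0)
    flux = albedo / np.pi * areas[visible] * cos_i[visible] * cos_r[visible]
    return flux.sum() / distance**2

# Hypothetical box-shaped target: six outward unit normals, 1 m^2 facets
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
b = facet_brightness(normals, np.ones(6), albedo=0.3,
                     sun_dir=np.array([0.0, 0.0, 1.0]),
                     view_dir=np.array([0.6, 0.0, 0.8]), distance=7.0e6)
```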

  13. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  14. Target & Propagation Models for the FINDER Radar

    NASA Technical Reports Server (NTRS)

    Cable, Vaughn; Lux, James; Haque, Salmon

    2013-01-01

    Finding persons still alive in piles of rubble following an earthquake, a severe storm, or other disaster is a difficult problem. JPL is currently developing a victim detection radar called FINDER (Finding Individuals for Disaster and Emergency Response). This paper is directed toward the development of the propagation and target models needed for simulation and testing of such a system. These models are both physical (real rubble piles) and numerical. Early results from the numerical modeling phase show spatial and temporal spreading characteristics when signals are passed through a randomly mixed rubble pile.

  15. Evolution of Gas Cell Targets for Magnetized Liner Inertial Fusion Experiments at the Sandia National Laboratories PECOS Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paguio, R. R.; Smith, G. E.; Taylor, J. L.

    Z-Beamlet (ZBL) experiments conducted at the PECOS test facility at Sandia National Laboratories (SNL) investigated the nonlinear processes in laser-plasma interaction (laser-plasma instabilities, LPI) that complicate the deposition of laser energy through enhanced absorption, backscatter, filamentation, and beam spray in large-scale laser-heated gas cell targets. These targets and experiments were designed to provide better insight into the physics of the laser preheat stage of the Magnetized Liner Inertial Fusion (MagLIF) scheme being tested on the SNL Z-machine. The experiments aim to understand the tradeoffs between laser spot size, laser pulse shape, laser entrance hole (LEH) window thickness, and fuel density for laser preheat. Gas cell target design evolution and fabrication adaptations to accommodate the evolving experiment and scientific requirements are also described in this paper.

  16. Evolution of Gas Cell Targets for Magnetized Liner Inertial Fusion Experiments at the Sandia National Laboratories PECOS Test Facility

    DOE PAGES

    Paguio, R. R.; Smith, G. E.; Taylor, J. L.; ...

    2017-12-04

    Z-Beamlet (ZBL) experiments conducted at the PECOS test facility at Sandia National Laboratories (SNL) investigated the nonlinear processes in laser-plasma interaction (laser-plasma instabilities, LPI) that complicate the deposition of laser energy through enhanced absorption, backscatter, filamentation, and beam spray in large-scale laser-heated gas cell targets. These targets and experiments were designed to provide better insight into the physics of the laser preheat stage of the Magnetized Liner Inertial Fusion (MagLIF) scheme being tested on the SNL Z-machine. The experiments aim to understand the tradeoffs between laser spot size, laser pulse shape, laser entrance hole (LEH) window thickness, and fuel density for laser preheat. Gas cell target design evolution and fabrication adaptations to accommodate the evolving experiment and scientific requirements are also described in this paper.

  17. Quantitative targeting maps based on experimental investigations for a branched tube model in magnetic drug targeting

    NASA Astrophysics Data System (ADS)

    Gitter, K.; Odenbach, S.

    2011-12-01

    Magnetic drug targeting (MDT), because of its high targeting efficiency, is a promising approach for tumour treatment. Unwanted side effects are considerably reduced, since the nanoparticles are concentrated within the target region due to the influence of a magnetic field. Nevertheless, understanding the transport phenomena of nanoparticles in an artery system is still challenging. This work presents experimental results for a branched tube model. Quantitative results describe, for example, the net amount of nanoparticles that are targeted towards the chosen region due to the influence of a magnetic field. As a result of the measurements, novel drug targeting maps, combining, e.g., the magnetic volume force, the position of the magnet, and the net amount of targeted nanoparticles, are presented. The targeting maps are valuable for evaluation and comparison of setups and are also helpful for the design and the optimisation of a magnet system with an appropriate strength and distribution of the field gradient. The maps indicate the danger of accretion within the tube and also show the promising result that, with magnetic drug targeting, up to 97% of the nanoparticles were successfully targeted.

  18. Testing Measurement Invariance in the Target Rotated Multigroup Exploratory Factor Model

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M.

    2009-01-01

    We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…

  19. Comparison of measured and modeled BRDF of natural targets

    NASA Astrophysics Data System (ADS)

    Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier

    1999-07-01

    The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating or simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has recently been developed by ONERA/DOTA. It was designed to allow both laboratory and outdoor measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain ranges from 0 to 60 degrees for the zenith angle of the source and the sensor, and from 0 to 180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial non-uniformity of the target irradiance has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. Some BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models among the most popular found in the literature have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. A comparative evaluation of the model performances is carried out against different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise, and interpolation.
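
    A parameter-fitting code of the kind mentioned above reduces, for a linear empirical model, to ordinary least squares. The sketch below fits a simple Walthall-type form (quadratic in view zenith angle with an azimuth term) and reports the RMSE criterion; this particular model is an assumption for illustration and is not necessarily one of the eight models tested.

```python
import numpy as np

def fit_walthall(theta_v, phi_rel, brdf):
    """Least-squares fit of r = a*theta_v**2 + b*theta_v*cos(phi_rel) + c,
    returning the coefficients and the RMSE used as a comparison criterion."""
    A = np.column_stack([theta_v**2, theta_v * np.cos(phi_rel), np.ones_like(theta_v)])
    coeffs, *_ = np.linalg.lstsq(A, brdf, rcond=None)
    rmse = np.sqrt(np.mean((A @ coeffs - brdf) ** 2))
    return coeffs, rmse

# Hypothetical measurement geometry (radians) and reflectance values
theta_v = np.radians([0, 15, 30, 45, 60, 30, 45])
phi = np.radians([0, 0, 0, 0, 0, 90, 180])
r = np.array([0.22, 0.23, 0.25, 0.28, 0.31, 0.24, 0.21])
(a, b, c), rmse = fit_walthall(theta_v, phi, r)
```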

  20. Application of a post-collisional-interaction distorted-wave model for (e, 2e) of some atomic targets and methane

    NASA Astrophysics Data System (ADS)

    Chinoune, M.; Houamer, S.; Dal Cappello, C.; Galstyan, A.

    2016-10-01

    Recently Isik et al (2016 J. Phys B: At. Mol. Opt. Phys. 49 065203) performed measurements of the triple differential cross sections (TDCSs) of methane by electron impact. Their data clearly show that post-collisional interaction (PCI) effects are present in the angular distributions of ejected electrons. A model describing the ejected electron by a distorted wave and including PCI is applied for the single ionization of atomic targets and for methane. Extensive comparisons between this model and other previous models are made with available experiments.

  1. Fluid mechanics aspects of magnetic drug targeting.

    PubMed

    Odenbach, Stefan

    2015-10-01

    Experiments and numerical simulations using a flow phantom for magnetic drug targeting have been undertaken. The flow phantom is a half y-branched tube configuration in which the main tube represents an artery from which a tumour-supplying artery, simulated by the side branch of the flow phantom, branches off. In the experiments, the amount of magnetic particles targeted towards the branch by a magnetic field applied via a permanent magnet is quantified by impedance measurement using sensor coils. Measuring the targeting efficiency, i.e. the relative amount of particles targeted to the side branch, for different field configurations, one obtains targeting maps which combine the targeting efficiency with the magnetic force densities at characteristic points in the flow phantom. It could be shown that targeting efficiency depends strongly on the magnetic field configuration. A corresponding numerical model has been set up, which allows the simulation of targeting efficiency for variable field configurations. With this simulation, good agreement of targeting efficiency with experimental data has been found. Thus, the basis has been laid for future calculations of optimal field configurations in clinical applications of magnetic drug targeting. Moreover, the numerical model allows the variation of additional parameters of the drug targeting process and thus an estimation of the influence of, e.g., the fluid properties on the targeting efficiency. Corresponding calculations have shown that the non-Newtonian behaviour of the fluid will significantly influence the targeting process, an aspect which has to be taken into account, especially recalling the fact that the viscosity of magnetic suspensions depends strongly on the magnetic field strength and the mechanical load.

  2. Data-driven risk models could help target pipeline safety inspections

    DOT National Transportation Integrated Search

    2008-07-01

    Federal safety agencies share a common problem: the need to target resources effectively to reduce risk. One way this targeting is commonly done is with a risk model that uses safety data along with expert judgment to identify and weight ris...

  3. Termites as targets and models for biotechnology.

    PubMed

    Scharf, Michael E

    2015-01-07

    Termites have many unique evolutionary adaptations associated with their eusocial lifestyles. Recent omics research has created a wealth of new information in numerous areas of termite biology (e.g., caste polyphenism, lignocellulose digestion, and microbial symbiosis) with wide-ranging applications in diverse biotechnological niches. Termite biotechnology falls into two categories: (a) termite-targeted biotechnology for pest management purposes, and (b) termite-modeled biotechnology for use in various industrial applications. The first category includes several candidate termiticidal modes of action such as RNA interference, digestive inhibition, pathogen enhancement, antimicrobials, endocrine disruption, and primer pheromone mimicry. In the second category, termite digestomes are deep resources for host and symbiont lignocellulases and other enzymes with applications in a variety of biomass, industrial, and processing applications. Moving forward, one of the most important approaches for accelerating advances in both termite-targeted and termite-modeled biotechnology will be to consider host and symbiont together as a single functional unit.

  4. Designing Multi-target Compound Libraries with Gaussian Process Models.

    PubMed

    Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert

    2016-05-01

    We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c, MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design.
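
    A minimal sketch of how such a prioritization step might look, assuming fingerprint-style descriptors and scikit-learn's Gaussian process regressor; the kernel choice, the variable names (train_fps, train_pic50, library_fps), and the ranking rule are assumptions of this sketch, not the authors' implementation.

    ```python
    # Illustrative sketch (not the authors' code): ranking virtual-library compounds
    # by predicted activity with a Gaussian process trained on descriptor vectors.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def rank_library(train_fps, train_pic50, library_fps, top_n=50):
        kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(train_fps, train_pic50)
        mean, std = gp.predict(library_fps, return_std=True)
        # Prioritize by predicted mean; the predictive std could instead feed an
        # acquisition rule inside an ant-colony style library optimizer.
        order = np.argsort(-mean)[:top_n]
        return order, mean[order], std[order]
    ```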

  5. Analysis of a Neutronic Experiment on a Simulated Mercury Spallation Neutron Target Assembly Bombarded by Giga-Electron-Volt Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi

    2005-05-15

    A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly, consisting of a mercury target, a light water moderator, and a lead reflector, was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured in the experiment. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A with associated cross-section data in JENDL and LA-150 were verified based on benchmark analysis of the experiment. As a result, all the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed within approximately ±40% with the experiments, although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data were adequate for neutronics designs of spallation neutron sources.

  6. Modeling a High Explosive Cylinder Experiment

    NASA Astrophysics Data System (ADS)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum-level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.
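
    For reference, equation-of-state calibrations of this kind commonly use the Jones-Wilkins-Lee (JWL) form for the detonation products; a minimal evaluation of that form is sketched below, with all parameter values left as inputs, since they are precisely what such cylinder experiments are used to calibrate. This is generic background, not the code or parameter set used in the work described.

    ```python
    # Hedged illustration: the JWL pressure-volume relation commonly used for
    # detonation products in cylinder-test calibrations.
    import math

    def jwl_pressure(v_rel, e_vol, A, B, R1, R2, omega):
        """JWL pressure (same units as A, B) at relative volume v_rel = V/V0 and
        internal energy per unit initial volume e_vol:
        p = A(1 - w/(R1 v))exp(-R1 v) + B(1 - w/(R2 v))exp(-R2 v) + w e / v
        """
        term1 = A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
        term2 = B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
        return term1 + term2 + omega * e_vol / v_rel
    ```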

  7. Targeting community-dwelling urinary incontinence sufferers: a multi-disciplinary community based model for conservative continence services.

    PubMed

    St John, Winsome; Wallis, Marianne; James, Heather; McKenzie, Shona; Guyatt, Sheridan

    2004-10-01

    This paper presents an argument that there is a need to provide services that target community-dwelling incontinence sufferers, and presents a demonstration case study of a multi-disciplinary, community-based conservative model of service delivery: The Waterworx Model. Rationale for approaches taken, implementation of the model, evaluation and lessons learned are discussed. In this paper community-dwelling sufferers of urinary incontinence are identified as an underserved group, and useful information is provided for those wishing to establish services for them. The Waterworx Model of continence service delivery incorporates three interrelated approaches. Firstly, client access is achieved by using community-based services via clinic and home visits, creating referral pathways and active promotion of services. Secondly, multi-disciplinary client care is provided by targeting a specific client group, multi-disciplinary assessment, promoting client self-management and developing client knowledge and health literacy. Finally, interdisciplinary collaboration and linkages is facilitated by developing multidisciplinary assessment tools, using interdisciplinary referrals, staff development, multi-disciplinary management and providing professional education. Implementation of the model achieved greater client access, improvement in urinary incontinence and client satisfaction. Our experiences suggest that those suffering urinary incontinence and living in the community are an underserved group and that continence services should be community focussed, multi-disciplinary, generalist in nature.

  8. NIF Target Designs and OMEGA Experiments for Shock-Ignition Inertial Confinement Fusion

    NASA Astrophysics Data System (ADS)

    Anderson, K. S.

    2012-10-01

    Shock ignition (SI) [R. Betti et al., Phys. Rev. Lett. 98, 155001 (2007)] is being pursued as a viable option to achieve ignition on the National Ignition Facility (NIF). Shock-ignition target designs require the addition of a high-intensity (~5 x 10^15 W/cm^2) laser spike at the end of a low-adiabat assembly pulse to launch a spherically convergent strong shock to ignite the imploding capsule. Achieving ignition with SI requires the laser spike to generate an ignitor shock with a launching pressure typically in excess of ~300 Mbar. At the high laser intensities required during the spike pulse, stimulated Raman (SRS) and Brillouin scattering (SBS) could reflect a significant fraction of the incident light. In addition, SRS and the two-plasmon-decay instability can accelerate hot electrons into the shell and preheat the fuel. Since the high-power spike occurs at the end of the pulse when the areal density of the shell is several tens of mg/cm^2, shock-ignition fuel layers are shielded against hot electrons with energies below 150 keV. This paper will present data for a set of OMEGA experiments that were designed to study laser-plasma interactions during the spike pulse. In addition, these experiments were used to demonstrate that high-pressure shocks can be produced in long-scale-length plasmas with SI-relevant intensities. Within the constraints imposed by the hydrodynamics of strong shock generation and the laser-plasma instabilities, target designs for SI experiments on the NIF will be presented. Two-dimensional radiation-hydrodynamic simulations of SI target designs for the NIF predict ignition in the polar-drive beam configuration at sub-MJ laser energies. Design robustness to various 1-D effects and 2-D nonuniformities has been characterized. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.

  9. Target detection using the background model from the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is a graph-theory-based algorithm that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background, in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups, or connected components, which can be representative of the spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD for target detection is explored in this paper. These connected components are characterized in three different approaches: the mean signature and endmembers of each component are calculated and used as background basis vectors in the Orthogonal Subspace Projection (OSP) and Adaptive Subspace Detector (ASD); likewise, the covariance matrix of the connected components is estimated and used in the Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE) detectors. The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind-test target detection project are shown.
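
    As an illustration of the covariance-based branch of this approach, the sketch below computes ACE scores using a background covariance estimated from spectra assigned to the TAD background components; the variable names and the use of a pseudo-inverse are assumptions of this sketch, not the paper's implementation.

    ```python
    # Sketch under assumptions: ACE score of a mean-removed pixel x for target t,
    #   ACE(x) = (t' C^-1 x)^2 / ((t' C^-1 t)(x' C^-1 x)),
    # with C estimated from the TAD background-component spectra.
    import numpy as np

    def ace_scores(pixels, target, bg_pixels):
        """pixels: (N, bands) image pixels; target: (bands,) target spectrum;
        bg_pixels: (M, bands) spectra belonging to the TAD background components."""
        mu = bg_pixels.mean(axis=0)
        C = np.cov(bg_pixels, rowvar=False)
        C_inv = np.linalg.pinv(C)                     # pseudo-inverse for stability
        X = pixels - mu
        t = target - mu
        tCt = t @ C_inv @ t
        tCx = X @ C_inv @ t                           # (N,)
        xCx = np.einsum('ij,jk,ik->i', X, C_inv, X)   # (N,)
        return (tCx ** 2) / (tCt * xCx)
    ```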

  10. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Recently, much work has been done on multiple-object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we both track multiple objects (faces of people) in a meeting scenario and use online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average of the standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
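
    A minimal sketch of one sequential importance resampling (SIR) step of the kind of particle filter used for such tracking, assuming an abstract appearance likelihood (e.g., similarity of a color histogram to the current reference model); the random-walk motion model and the parameter values are illustrative.

    ```python
    # Minimal SIR particle-filter step; `likelihood(frame, particle)` is left
    # abstract (e.g., a color-histogram similarity to the reference model).
    import numpy as np

    def particle_filter_step(particles, weights, frame, likelihood,
                             motion_std=5.0, rng=None):
        """particles: (N, dim) array of state hypotheses; weights: (N,) normalized."""
        rng = rng or np.random.default_rng()
        n = len(particles)
        # 1) Resample proportionally to the previous weights.
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]
        # 2) Propagate with a random-walk motion model.
        particles = particles + rng.normal(scale=motion_std, size=particles.shape)
        # 3) Re-weight with the appearance likelihood and normalize.
        weights = np.array([likelihood(frame, p) for p in particles], dtype=float) + 1e-12
        weights /= weights.sum()
        estimate = np.average(particles, axis=0, weights=weights)
        return particles, weights, estimate
    ```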

  11. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

    Object detection in high-resolution remote sensing images is a fundamental and challenging problem in remote sensing imagery analysis for civil and military applications, because complex neighboring environments can cause recognition algorithms to mistake irrelevant ground objects for target objects. The Deep Convolutional Neural Network (DCNN) is a hotspot in object detection for its powerful feature-extraction ability and has achieved state-of-the-art results in computer vision. The common object detection pipeline based on DCNNs consists of region proposal, CNN feature extraction, region classification, and post-processing. The YOLO model frames object detection as a regression problem, using a single CNN to predict bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO-based model is used for object detection in high-resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset gathered from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process and has good accuracy.
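
    To make the "detection as regression" idea concrete, the sketch below decodes a YOLOv1-style prediction tensor into boxes; the grid size, box count, threshold, and tensor layout are simplifying assumptions rather than the configuration used in the paper.

    ```python
    # Simplified, YOLOv1-style decoding of a single-image prediction tensor into
    # boxes -- a sketch of detection-as-regression, not the paper's code.
    import numpy as np

    def decode_yolo(pred, S=7, B=2, C=1, conf_thresh=0.3, img_size=1.0):
        """pred: array of shape (S, S, B*5 + C); each cell predicts B boxes
        (x, y, w, h, confidence) plus C shared class probabilities."""
        boxes = []
        for i in range(S):
            for j in range(S):
                cell = pred[i, j]
                class_probs = cell[B * 5:]
                for b in range(B):
                    x, y, w, h, conf = cell[b * 5: b * 5 + 5]
                    scores = conf * class_probs
                    cls = int(np.argmax(scores))
                    if scores[cls] < conf_thresh:
                        continue
                    # (x, y) are offsets within the cell; (w, h) are image-relative.
                    cx = (j + x) / S * img_size
                    cy = (i + y) / S * img_size
                    boxes.append((cx, cy, w * img_size, h * img_size,
                                  float(scores[cls]), cls))
        return boxes  # non-maximum suppression would follow in practice
    ```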

  12. Target signature modeling and bistatic scattering measurement studies

    NASA Technical Reports Server (NTRS)

    Burnside, W. D.; Lee, T. H.; Rojas, R.; Marhefka, R. J.; Bensman, D.

    1989-01-01

    Four areas of study are summarized: bistatic scattering measurement studies for a compact range; target signature modeling for test-and-evaluation, hardware-in-the-loop situations; an aircraft code modification study; and SATCOM antenna studies on aircraft.

  13. Target fragmentation in radiobiology

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.; Townsend, Lawrence W.

    1993-01-01

    Nuclear reactions in biological systems produce low-energy fragments of the target nuclei, seen locally as high linear energy transfer (LET) events. A nuclear-reaction formalism is used to evaluate the nuclear-induced fields within biosystems and their effects within several biological models. On the basis of direct ionization interactions alone, one anticipates high-energy protons to have a quality factor and relative biological effectiveness (RBE) of unity. Target fragmentation contributions raise the effective quality factor of 10 GeV protons to 3.3, in reasonable agreement with RBE values for induced micronuclei in bean sprouts. Application of the Katz model indicates that the relative increase in RBE with decreasing exposure observed in cell survival experiments with 160 MeV protons is related solely to target fragmentation events. Target fragment contributions to lens opacity give an RBE of 1.4 for 2 GeV protons, in agreement with the work of Lett and Cox. Predictions are made for the effective RBE for Harderian gland tumors induced by high-energy protons. An exposure model for lifetime cancer risk is derived from NCRP 98 risk tables, and protraction effects are examined for proton and helium ion exposures. The implications of dose-rate enhancement effects on space radiation protection are considered.

  14. Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation

    PubMed Central

    Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.

    2016-01-01

    Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, and thus paving the way for improved selective target localization using DBS. PMID:27445709

  15. Dynamically polarized target for the g2p and GEp experiments at Jefferson Lab

    DOE PAGES

    Pierce, J.; Maxwell, J.; Badman, T.; ...

    2013-12-16

    We describe a dynamically polarized target that has been utilized for two electron scattering experiments in Hall A at Jefferson Lab. The primary components of the target are a new, high-cooling-power 4He evaporation refrigerator and a re-purposed, superconducting split-coil magnet. It has been used to polarize protons in irradiated NH3 at a temperature of 1 K and at fields of 2.5 and 5.0 Tesla. The performance of the target material in the electron beam under these conditions will be discussed. Maximum polarizations of 28% and 95% were obtained at those fields, respectively. To satisfy the requirements of both experiments, the magnet had to be routinely rotated between angles of 0, 6, and 90 degrees with respect to the incident electron beam. This was accomplished using a new rotating vacuum seal which permits rotations to be performed in only a few minutes.

  16. Continuous movement decoding using a target-dependent model with EMG inputs.

    PubMed

    Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J

    2011-01-01

    Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle activity (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets; however, information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.
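
    A much-simplified sketch of the idea, assuming a single Kalman filter (rather than the mixture used in the paper) whose state stacks hand position, hand velocity, and target position, with an LDA classifier supplying the reach-onset transitions; all matrices, parameter values, and names here are illustrative placeholders.

    ```python
    # Hedged sketch: an LDA classifier flags movement-state transitions from EMG
    # features; each new reach rewrites the target block of a Kalman filter whose
    # state is [hand_pos(2), hand_vel(2), target_pos(2)]. Matrices are illustrative.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    dt = 0.05
    F = np.eye(6)
    F[0, 2] = F[1, 3] = dt            # position integrates velocity; target is static
    Q = np.diag([1e-4, 1e-4, 1e-2, 1e-2, 0.0, 0.0])
    H = np.zeros((2, 6)); H[0, 2] = H[1, 3] = 1.0   # EMG decoder observes hand velocity
    R = np.eye(2) * 0.05

    def kalman_step(x, P, z):
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        return x, P

    lda = LinearDiscriminantAnalysis()  # trained offline: lda.fit(emg_features, state_labels)
    # Illustrative use: if lda.predict(window)[0] signals reach onset,
    # set x[4:6] = gaze_target to re-target the filter for the new reach.
    ```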

  17. Parametric investigations of target normal sheath acceleration experiments

    NASA Astrophysics Data System (ADS)

    Zani, Alessandro; Sgattoni, Andrea; Passoni, Matteo

    2011-10-01

    One of the most important challenges in laser-driven ion acceleration research is to actively control important ion beam features. This is a particularly relevant topic in light of possible future technological applications. In the present work we use a theoretical model of target normal sheath acceleration to reproduce recent experimental parametric studies of the dependence of maximum ion energy on laser parameters. The key role played by pulse energy and intensity is highlighted. Finally, the effective dependence of maximum ion energy on intensity is evaluated using a combined theoretical approach, obtained by means of an analytical and a particle-in-cell numerical investigation.

  18. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; they must rapidly search for the object to be engaged against complex backgrounds, so fast detection of a given target is the foundation for further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target on the HOG feature pyramid using a sliding window. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++ code, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and similar hardware, the detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
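
    The sketch below illustrates only the root-filter portion of such a detector: HOG features of each sliding window are scored by a trained linear SVM (the deformable parts, the latent training, and the feature pyramid are omitted); the window size, stride, and threshold are placeholder values.

    ```python
    # Simplified root-filter scoring: linear SVM over HOG features of sliding windows.
    import numpy as np
    from skimage.feature import hog

    def detect(image, svm_w, svm_b, win=(96, 64), step=16, thresh=0.0):
        """image: 2-D grayscale array; svm_w/svm_b: weights of a trained linear SVM
        whose length matches the HOG feature vector of one window."""
        h, w = win
        detections = []
        for y in range(0, image.shape[0] - h + 1, step):
            for x in range(0, image.shape[1] - w + 1, step):
                feat = hog(image[y:y + h, x:x + w], orientations=9,
                           pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                score = float(np.dot(svm_w, feat) + svm_b)
                if score > thresh:
                    detections.append((x, y, w, h, score))
        return detections  # non-maximum suppression would follow
    ```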

  19. Generating target system specifications from a domain model using CLIPS

    NASA Technical Reports Server (NTRS)

    Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry

    1991-01-01

    The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).

  20. Dynamically polarized target for the g2p and GEp experiments at Jefferson Lab

    NASA Astrophysics Data System (ADS)

    Pierce, J.; Maxwell, J.; Keith, C.

    2014-01-01

    Recently, two experiments were concluded in Hall A at Jefferson Lab which utilized a newly assembled, solid, polarized hydrogen target. The primary components of the target are a new, high cooling power 4He evaporation refrigerator, and a re-purposed, superconducting split-coil magnet. It has been used to polarize protons in irradiated NH3 at a temperature of 1 K and at fields of 2.5 and 5.0 tesla. Maximum polarizations of 55% and 95% were obtained at those fields, respectively. To satisfy the requirements of both experiments, the magnet had to be routinely rotated between angles of 0°, 6°, and 90° with respect to the incident electron beam.

  1. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background model becomes difficult when the scene is a complex fluctuating sea surface. In this paper, the background stability and the separability between target and background are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods operating in the spatial domain.
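
    A compact sketch of the idea, assuming a whole-frame DCT and a running single-Gaussian model per coefficient; the learning rate, the threshold k, and reconstructing the target by zeroing background coefficients are assumptions of this sketch rather than the paper's exact procedure.

    ```python
    # Sketch under assumptions: per-coefficient single-Gaussian background model in
    # the DCT domain; coefficients far from their background mean are kept as
    # foreground, the rest are suppressed before the inverse transform.
    import numpy as np
    from scipy.fft import dctn, idctn

    def update_and_detect(frame, mean, var, alpha=0.02, k=3.0):
        """frame: 2-D float array; mean/var: per-coefficient background statistics."""
        coeffs = dctn(frame, norm='ortho')
        z = np.abs(coeffs - mean) / np.sqrt(var + 1e-9)
        foreground_mask = z > k          # coefficients unlikely under the background
        # Running update of the single-Gaussian background model.
        mean = (1 - alpha) * mean + alpha * coeffs
        var = (1 - alpha) * var + alpha * (coeffs - mean) ** 2
        target = idctn(np.where(foreground_mask, coeffs, 0.0), norm='ortho')
        return target, mean, var
    ```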

  2. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
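
    The sketch below illustrates variance-based targeting with a basic stochastic ensemble Kalman update standing in for the LETKF used in the paper; the observation interface and parameter names are placeholders.

    ```python
    # Illustrative sketch: pick targeted observation sites where the forecast
    # ensemble variance is largest, then assimilate them serially with a simple
    # stochastic ensemble Kalman update.
    import numpy as np

    def targeted_enkf_update(ensemble, obs_values, obs_error_var, n_targeted, rng=None):
        """ensemble: (n_members, n_state); obs_values: callable site -> observed value."""
        rng = rng or np.random.default_rng()
        ensemble = ensemble.astype(float).copy()
        variance = ensemble.var(axis=0)
        sites = np.argsort(-variance)[:n_targeted]          # largest-variance targeting
        for s in sites:
            Hx = ensemble[:, s]
            y_pert = obs_values(s) + rng.normal(0.0, np.sqrt(obs_error_var), size=len(Hx))
            P_HT = np.cov(ensemble, Hx, rowvar=False)[:-1, -1]   # state-obs cross-covariance
            K = P_HT / (Hx.var(ddof=1) + obs_error_var)          # Kalman gain (vector)
            ensemble += np.outer(y_pert - Hx, K)
        return ensemble, sites
    ```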

  3. Modeling health gains and cost savings for ten dietary salt reduction targets.

    PubMed

    Wilson, Nick; Nghiem, Nhung; Eyles, Helen; Mhurchu, Cliona Ni; Shields, Emma; Cobiac, Linda J; Cleghorn, Christine L; Blakely, Tony

    2016-04-26

    Dietary salt reduction is included in the top five priority actions for non-communicable disease control internationally. We therefore aimed to identify the health gain and cost impacts of achieving a national target for sodium reduction, along with component targets in different food groups. We used an established dietary sodium intervention model to study 10 interventions to achieve sodium reduction targets. The 2011 New Zealand (NZ) adult population (2.3 million aged 35+ years) was simulated over the remainder of their lifetime in a Markov model with a 3% discount rate. Achieving an overall 35% reduction in dietary salt intake via implementation of mandatory maximum levels of sodium in packaged foods, along with reduced sodium from fast foods/restaurant food and discretionary intake (the "full target"), was estimated to gain 235,000 QALYs over the lifetime of the cohort (95% uncertainty interval [UI]: 176,000 to 298,000). For the specific target components, the range was from 122,000 QALYs gained (for the packaged foods target) down to 6100 QALYs for the snack foods target (representing a 34-48% sodium reduction in such products). All ten target interventions studied were cost-saving, with the greatest costs saved for the mandatory "full target" at NZ$1260 million (US$820 million). There were relatively greater health gains per adult for men and for Māori (the indigenous population). This work provides modeling-level evidence that achieving dietary sodium reduction targets (including specific food category targets) could generate large health gains and cost savings for a national health sector. Demographic groups with the highest cardiovascular disease rates stand to gain most, assisting in reducing health inequalities between sex and ethnic groups.
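
    For orientation, the sketch below shows the elementary bookkeeping behind lifetime QALY totals with 3% annual discounting for a modelled cohort; the survival and utility inputs are placeholders and do not come from the study.

    ```python
    # Minimal illustration of discounted lifetime QALY accumulation for a cohort.
    def discounted_qalys(annual_utilities, annual_survival, discount_rate=0.03):
        """annual_utilities[t], annual_survival[t]: mean utility and probability of
        being alive in year t for the modelled cohort."""
        total = 0.0
        for t, (u, s) in enumerate(zip(annual_utilities, annual_survival)):
            total += u * s / (1.0 + discount_rate) ** t
        return total

    # e.g. discounted_qalys([0.85, 0.84, 0.83], [1.0, 0.99, 0.97])
    ```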

  4. Modeling and validation of spectral BRDF on material surface of space target

    NASA Astrophysics Data System (ADS)

    Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei

    2014-11-01

    The modeling and validation methods for the spectral BRDF of the material surfaces of space targets are presented. First, the microscopic characteristics of the space targets' material surfaces were analyzed, with a fiber-optic spectrometer used to measure the directional reflectivity of typical material surfaces. To determine whether the material surfaces of space targets are isotropic, atomic force microscopy was used to measure the surface structure and obtain a Gaussian distribution model of the microscopic surface element height. A spectral BRDF model was then constructed, based on the isotropic surface characteristics and the measured Gaussian micro-facet distribution; the model characterizes both smooth and rough surfaces well and describes the material surfaces of space targets appropriately. Finally, a spectral BRDF measurement platform was set up in the laboratory, containing a tungsten-halogen lamp illumination system, a fiber-optic spectrometer detection system, and a mechanical measurement system, with the entire experimental measurement controlled and the data collected automatically by computer. Yellow thermal control material and a solar cell were measured; the results show the relationship between the reflection angle and the BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm), and the difference between the theoretical model values and the measured data was evaluated by the relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.
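
    As a hedged illustration of the kind of model described, the sketch below evaluates a Gaussian (Beckmann-style) micro-facet BRDF plus a Lambertian term, together with the relative RMS error used to compare model and measurement; shadowing/masking and Fresnel factors are omitted, and the functional form is a generic stand-in rather than the authors' exact model.

    ```python
    # Generic Gaussian micro-facet BRDF sketch and the relative-RMS-error metric.
    import numpy as np

    def microfacet_brdf(theta_i, theta_r, theta_h, kd, ks, m):
        """theta_i/theta_r: incident/reflection zenith angles [rad]; theta_h: half-angle;
        kd, ks: diffuse/specular weights; m: RMS micro-facet slope (roughness)."""
        D = np.exp(-np.tan(theta_h) ** 2 / m ** 2) / (np.pi * m ** 2 * np.cos(theta_h) ** 4)
        return kd / np.pi + ks * D / (4.0 * np.cos(theta_i) * np.cos(theta_r))

    def relative_rms_error(model_vals, measured_vals):
        model_vals, measured_vals = np.asarray(model_vals), np.asarray(measured_vals)
        return np.sqrt(np.mean(((model_vals - measured_vals) / measured_vals) ** 2))
    ```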

  5. Precision Modeling Of Targets Using The VALUE Computer Program

    NASA Astrophysics Data System (ADS)

    Hoffman, George A.; Patton, Ronald; Akerman, Alexander

    1989-08-01

    The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include shadows cast onto the ground, shadows cast onto parts of the target, see-through transparencies (e.g., canopies), apparent images due to both atmospheric scattering and turbulence, and surfaces characterized by multiple bidirectional reflectance functions. VALUE provides not only realistic target modeling through its precise and comprehensive representation of all target attributes; it is additionally very user friendly. Specifically, runs are set up through screen-prompting menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.

  6. Polarimetric subspace target detector for SAR data based on the Huynen dihedral model

    NASA Astrophysics Data System (ADS)

    Larson, Victor J.; Novak, Leslie M.

    1995-06-01

    Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.

  7. Focused ultrasound facilitated thermo-chemotherapy for targeted retinoblastoma treatment: a modeling study.

    PubMed

    Wang, Shutao; Mahesh, Sankaranarayana P; Liu, Ji; Geist, Craig; Zderic, Vesna

    2012-07-01

    Retinoblastoma is the most common type of intraocular tumor in children. Currently, with early detection and improved systemic chemo-adjuvant therapies, the treatment paradigm has shifted from survival to globe salvation and vision preservation. The objective of our work has been to explore the possible application of focused ultrasound (FUS) for targeted drug delivery in posterior pole retinoblastoma. Specifically, theoretical models were implemented to evaluate the feasibility of using FUS to generate localized hyperthermia in retinal tumor areas, to potentially trigger the release of chemotherapeutic agents from heat-sensitive drug carriers. In-vitro experiments were conducted in tissue-mimicking phantoms with embedded excised rabbit eyes to validate the reliability of the modeling results. After confirming the reliability of our model, various FUS transducer parameters were investigated to induce maximal hyperthermia coverage in the tumor while sparing adjacent eye structures (e.g., the lens). The evaluated FUS parameters included operating frequency, total acoustic power, geometric dimensions, transducer f-number, standoff distance, as well as different pulsing scenarios. Our modeling results suggest that the most suitable ultrasound frequency for this type of treatment is in the range of 2-3.5 MHz, depending on the size of the retinoblastoma. An appropriate transducer f-number (close to 1) and standoff distance could be selected to minimize the risks of over-heating undesired regions. With a total acoustic power of 0.4 W, 56.3% of the tumor was heated to the hyperthermic temperature range (39-44 °C) while the temperature in the lens was maintained below 41 °C. In conclusion, FUS-induced hyperthermia for targeted drug delivery may be a viable option in the treatment of juxta-foveal or posterior pole retinoblastomas. Future in-vivo studies will allow us to determine the effectiveness and safety of the proposed approach.
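
    Thermal modeling of this kind typically rests on the Pennes bioheat equation with an absorbed-acoustic-power source term; the one-dimensional explicit finite-difference sketch below is generic background with placeholder tissue parameters, not the study's model or its values.

    ```python
    # Generic 1-D Pennes bioheat sketch with a focal acoustic heating term:
    #   rho*c*dT/dt = k*d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + Q_acoustic
    import numpy as np

    def pennes_1d(q_acoustic, nx=200, dx=1e-4, dt=0.01, t_end=30.0,
                  k=0.55, rho=1000.0, c=3800.0, w_b=0.005, rho_b=1050.0, c_b=3600.0,
                  T_a=37.0):
        """q_acoustic: array of length nx with absorbed acoustic power density [W/m^3]."""
        T = np.full(nx, T_a)
        alpha = k / (rho * c)
        assert alpha * dt / dx**2 < 0.5, "explicit scheme stability condition"
        for _ in range(int(t_end / dt)):
            lap = np.zeros_like(T)
            lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
            perfusion = w_b * rho_b * c_b * (T - T_a)
            T = T + dt * (k * lap - perfusion + q_acoustic) / (rho * c)
            T[0] = T[-1] = T_a                      # body-temperature boundaries
        return T
    ```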

  8. Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    PubMed Central

    Supèr, Hans; Romeo, August

    2012-01-01

    A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370

  9. Tumor spheroid model for the biologically targeted radiotherapy of neuroblastoma micrometastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, K.A.; Mairs, R.; Murray, T.

    Neuroblastoma is a pediatric malignancy with a poor prognosis at least partly attributable to an early pattern of dissemination. New approaches to treatment of micrometastases include targeted radiotherapy using radiolabeled antibodies or molecules which are taken up preferentially by tumor cells. Multicellular tumor spheroids (MTS) resemble micrometastases during the avascular phase of their development. A human neuroblastoma cell line (NBl-G) was grown as MTS and incubated briefly with a radiolabeled monoclonal antibody (131I-UJ13A) directed against neuroectodermal antigens. Spheroid response was evaluated in terms of regrowth delay or proportion sterilized. A dose-response relationship was demonstrated in terms of 131I activity or duration of incubation. Control experiments using unlabeled UJ13A, radiolabeled nonspecific antibody (T2.10), radiolabeled human serum albumin, and radiolabeled sodium iodide showed these to be relatively ineffective compared to 131I-UJ13A. The cell line NBl-G grown as MTS has also been found to preferentially accumulate the radiolabeled catecholamine precursor molecule m-(131I)iodobenzylguanidine compared to cell lines derived from other tumor types. NBl-G cells grown as MTS provide a promising laboratory model for targeted radiotherapy of neuroblastoma micrometastases using radiolabeled antibodies or m-iodobenzylguanidine.

  10. Small Molecule Sequential Dual-Targeting Theragnostic Strategy (SMSDTTS): from Preclinical Experiments towards Possible Clinical Anticancer Applications

    PubMed Central

    Li, Junjie; Oyen, Raymond; Verbruggen, Alfons; Ni, Yicheng

    2013-01-01

    Hitting the evasive tumor cells proves challenging in targeted cancer therapies. A general and unconventional anticancer approach namely small molecule sequential dual-targeting theragnostic strategy (SMSDTTS) has recently been introduced with the aims to target and debulk the tumor mass, wipe out the residual tumor cells, and meanwhile enable cancer detectability. This dual targeting approach works in two steps for systemic delivery of two naturally derived drugs. First, an anti-tubulin vascular disrupting agent, e.g., combretastatin A4 phosphate (CA4P), is injected to selectively cut off tumor blood supply and to cause massive necrosis, which nevertheless always leaves peripheral tumor residues. Secondly, a necrosis-avid radiopharmaceutical, namely 131I-hypericin (131I-Hyp), is administered the next day, which accumulates in intratumoral necrosis and irradiates the residual cancer cells with beta particles. Theoretically, this complementary targeted approach may biologically and radioactively ablate solid tumors and reduce the risk of local recurrence, remote metastases, and thus cancer mortality. Meanwhile, the emitted gamma rays facilitate radio-scintigraphy to detect tumors and follow up the therapy, hence a simultaneous theragnostic approach. SMSDTTS has now shown promise from multicenter animal experiments and may demonstrate unique anticancer efficacy in upcoming preliminary clinical trials. In this short review article, information about the two involved agents, the rationale of SMSDTTS, its preclinical antitumor efficacy, multifocal targetability, simultaneous theragnostic property, and toxicities of the dose regimens are summarized. Meanwhile, possible drawbacks, practical challenges and future improvement with SMSDTTS are discussed, which hopefully may help to push forward this strategy from preclinical experiments towards possible clinical applications. PMID:23412554

  12. Projection of heat waves over China for eight different global warming targets using 12 CMIP5 models

    NASA Astrophysics Data System (ADS)

    Guo, Xiaojun; Huang, Jianbin; Luo, Yong; Zhao, Zongci; Xu, Ying

    2017-05-01

    The simulation and projection of the characteristics of heat waves over China were investigated using 12 CMIP5 global climate models and the CN05.1 observational gridded dataset. Four heat wave indices (heat wave frequency, longest heat wave duration, heat wave days, and high temperature days) were adopted in the analysis. Evaluations of the 12 CMIP5 models and their ensemble indicated that the multi-model ensemble could capture the spatiotemporal characteristics of heat wave variation over China, and the inter-decadal variations of heat waves during 1961-2005 were well simulated by the multi-model ensemble. Based on the model projections, the features of heat waves over China for eight different global warming targets (1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0 °C) were explored. The results showed that the frequency and intensity of heat waves would increase more dramatically as the global mean temperature rise attained higher warming targets. Under the RCP8.5 scenario, the four China-averaged heat wave indices would increase from about 1.0 times/year, 2.5, 5.4, and 13.8 days/year for the 1.5 °C warming target to about 3.2 times/year, 14.0, 32.0, and 31.9 days/year for the 5.0 °C target. Regions that suffer severe heat waves in the base climate would experience heat waves with greater frequency and severity as the global temperature rises, and the areas in which a greater number of severe heat waves occur show considerable expansion. Moreover, the model uncertainties increase gradually over the projection period from 2006 to 2099.
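
    For concreteness, the sketch below computes the four heat-wave indices from a daily maximum temperature series using an assumed fixed threshold and a minimum duration of three days; the study's exact index definitions may differ.

    ```python
    # Illustrative heat-wave index calculation for one year of daily Tmax data.
    import numpy as np

    def heat_wave_indices(tmax, threshold=35.0, min_days=3):
        """tmax: 1-D array of daily maximum temperature [deg C] for one year."""
        hot = np.asarray(tmax) > threshold
        events, longest, hw_days = 0, 0, 0
        run = 0
        for flag in np.append(hot, False):          # sentinel closes a trailing run
            if flag:
                run += 1
            else:
                if run >= min_days:
                    events += 1
                    hw_days += run
                    longest = max(longest, run)
                run = 0
        return {"frequency": events, "longest_duration": longest,
                "heat_wave_days": hw_days, "high_temperature_days": int(hot.sum())}
    ```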

  13. Greater Experience of Negative Non-Target Emotions by Patients with Neurodegenerative Diseases Is Related to Lower Emotional Well-Being in Caregivers.

    PubMed

    Chen, Kuan-Hua; Wells, Jenna L; Otero, Marcela C; Lwi, Sandy J; Haase, Claudia M; Levenson, Robert W

    2017-01-01

    Behavioral symptoms in patients with neurodegenerative diseases can be particularly challenging for caregivers. Previously, we reported that patients with frontotemporal dementia (FTD) and Alzheimer's disease (AD) experienced emotions that were atypical or incongruent with a given situation (i.e., non-target emotions). We tested the hypothesis that greater experience of non-target emotions by patients is associated with lower caregiver emotional well-being. 178 patients with FTD, AD, or other neurodegenerative diseases and 35 healthy individuals watched 3 films designed to induce amusement, sadness, and disgust, and then reported their emotions during the films. Caregivers of the patients reported their own emotional well-being on the Medical Outcomes Study 36-item Short-Form Health Survey. In response to the amusement and sadness (but not disgust) films, greater experience of non-target emotions by patients was related to lower caregiver emotional well-being. These effects were specific to patients' experience of negative non-target emotions (i.e., not found for positive non-target emotions or for negative or positive target emotions). The findings reveal a previously unstudied patient behavior that is related to worse caregiver emotional well-being. Future research and clinical assessment may benefit from evaluating non-target emotions in patients.

  14. A computational framework for modeling targets as complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  15. The Brain-Targeted Teaching Model for 21st-Century Schools

    ERIC Educational Resources Information Center

    Hardiman, Mariale

    2012-01-01

    "The Brain-Targeted Teaching Model for 21st-Century Schools" serves as a bridge between research and practice by providing a cohesive, proven, and usable model of effective instruction. Compatible with other professional development programs, this model shows how to apply relevant research from educational and cognitive neuroscience to classroom…

  17. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter the Evaluation Unit) is used to characterize the performance of infrared imaging systems. The test items are SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector includes two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IRWindows 2001) then evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to their calibration specification, error correction factors are applied to calibrate the all-reflective collimator, the radiance of the infrared target projector is calibrated using an SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed. Following GJB2340-1995, General specification for military thermal imaging sets, the testing parameters of the infrared imaging system are measured, and the results are compared with those from the Optical Calibration Testing Laboratory. The goal is a realistic calibration of the performance of the Evaluation Unit.

  18. Modeling side-chains using molecular dynamics improve recognition of binding region in CAPRI targets.

    PubMed

    Camacho, Carlos J

    2005-08-01

    The CAPRI-II experiment added an extra level of complexity to the problem of predicting protein-protein interactions by including 5 targets for which participants had to build or complete the 3-dimensional (3D) structure of either the receptor or ligand based on the structure of a close homolog. In this article, we describe how modeling key side-chains using molecular dynamics (MD) in explicit solvent improved the recognition of the binding region of a free energy-based computational docking method. In particular, we show that MD is able to predict with relatively high accuracy the rotamer conformation of the anchor side-chains important for molecular recognition, as suggested by Rajamani et al. (Proc Natl Acad Sci USA 2004;101:11287-11292). As expected, the conformations are some of the most common rotamers for the given residue, while latch side-chains that undergo induced fit upon binding are forced into less common conformations. Using these models as starting conformations in conjunction with the rigid-body docking server ClusPro and the flexible docking algorithm SmoothDock, we produced valuable predictions for 6 of the 9 targets in CAPRI-II, missing only the 3 targets that underwent significant structural rearrangements upon binding. We also show that our free energy-based scoring function, consisting of the sum of van der Waals, Coulombic electrostatics with a distance-dependent dielectric, and desolvation free energy, successfully discriminates the nativelike conformation of our submitted predictions. The latter emphasizes the critical role that thermodynamics plays in our methodology and validates the generality of the algorithm to predict protein interactions.

  19. Polar versus Cartesian velocity models for maneuvering target tracking with IMM

    NASA Astrophysics Data System (ADS)

    Laneuville, Dann

    This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for the maneuvering target tracking problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments, and is augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous-time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that this new approach significantly improves the tracking performance in this case.
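
    Both model sets plug into the same IMM machinery; the sketch below shows the interaction (mixing) step that precedes the per-model Kalman or EKF prediction and update, with generic variable names.

    ```python
    # Standard IMM interaction (mixing) step; per-model predict/update steps omitted.
    import numpy as np

    def imm_mixing(mu, Pi, states, covs):
        """mu: (r,) model probabilities; Pi: (r, r) transition matrix, Pi[i, j] = P(j | i);
        states: list of r state vectors; covs: list of r covariance matrices."""
        r = len(mu)
        c = Pi.T @ mu                                  # predicted model probabilities
        mixed_states, mixed_covs = [], []
        for j in range(r):
            w = Pi[:, j] * mu / c[j]                   # mixing weights mu_{i|j}
            x0 = sum(w[i] * states[i] for i in range(r))
            P0 = sum(w[i] * (covs[i] + np.outer(states[i] - x0, states[i] - x0))
                     for i in range(r))
            mixed_states.append(x0)
            mixed_covs.append(P0)
        return mixed_states, mixed_covs, c
    ```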

  20. Understanding Laser-Imprint Effects on Plastic-Target Implosions on OMEGA with New Physics Models

    NASA Astrophysics Data System (ADS)

    Hu, S. X.; Michel, D. T.; Davis, A. K.; Betti, R.; Radha, P. B.; Campbell, E. M.; Froula, D. H.; Stoeckl, C.

    2016-10-01

    Using state-of-the-art physics models (nonlocal thermal transport, cross-beam energy transfer, and a first-principles equation of state) recently implemented in our two-dimensional hydrocode DRACO, we have performed a systematic study of laser-imprint effects on plastic-target implosions on OMEGA through both simulations and experiments. By varying the laser picket intensity, the imploding shells were set at different adiabats ranging from α = 2 to α = 6. As the shell adiabat α decreases, we observed that (1) the measured shell thickness at the time of hot-spot emission becomes larger than the uniform prediction; (2) the hot-spot core emits and neutron burn starts earlier than the corresponding 1-D prediction; and (3) the measured neutron yields are significantly reduced from their 1-D designs. Most of these experimental observations are well reproduced by our DRACO simulations with laser imprint. These studies clearly identify laser imprint as the major cause of target performance degradation in OMEGA implosions with α ≤ 3. Mitigating laser imprint must be an essential effort in improving low-α target performance in direct-drive inertial confinement fusion ignition attempts. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  1. A thin gold coated hydrogen heat pipe-cryogenic target for external experiments at COSY

    NASA Astrophysics Data System (ADS)

    Abdel-Bary, M.; Abdel-Samad, S.; Elawadi, G. A.; Kilian, K.; Ritman, J.

    2009-05-01

    A gravity-assisted gold-coated heat pipe (GCHP) with 5-mm diameter has been developed and tested to cool a liquid hydrogen target for external beam experiments at COSY. The need for a narrow target diameter led us to study the effects of reducing the heat pipe diameter from 7 mm to 5 mm, of coating the external surface of the heat pipe with a shiny gold layer (to decrease the radiative heat load), and of operating the heat pipe without the 20 layers of super-insulation (aluminized Mylar foil) normally wrapped around it, in order to keep the target diameter as small as possible. The developed gold-coated heat pipe was tested with 20 layers of super-insulation (WI) and without super-insulation (WOI), and the operating characteristics for both conditions were compared to show the advantages and disadvantages.

  2. Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm

    PubMed Central

    Sun, Baoliang; Jiang, Chunlan; Li, Ming

    2016-01-01

    An interacting multiple model algorithm for multi-node target tracking was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). The measurement error variance was adaptively adjusted during the multiple-model interacting output stage using the difference between the theoretical and estimated values of the measurement error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain the network-level target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs. PMID:27809271

  3. Model-based recognition of 3D articulated target using ladar range data.

    PubMed

    Lv, Dan; Sun, Jian-Feng; Li, Qi; Wang, Qi

    2015-06-10

    Ladar is suitable for 3D target recognition because ladar range images can provide rich 3D geometric surface information about targets. In this paper, we propose a part-based 3D model matching technique to recognize articulated ground military vehicles in ladar range images. The key to this approach is solving the decomposition and pose estimation of the articulated parts of targets. The articulated components were decomposed into isolated parts based on 3D geometric properties of the targets, such as surface point normals, data histogram distributions, and data distance relationships. The corresponding poses of these separate parts were estimated through the linear characteristics of barrels. Using these pose parameters, all parts of the target were roughly aligned to 3D point cloud models in a library, and fine matching was finally performed to accomplish 3D articulated target recognition. The recognition performance was evaluated with 1728 ladar range images of eight different articulated military vehicles with various part types and orientations. Experimental results demonstrated that the proposed approach achieved a high recognition rate.

  4. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  5. Artificially-induced organelles are optimal targets for optical trapping experiments in living cells

    PubMed Central

    López-Quesada, C.; Fontaine, A.-S.; Farré, A.; Joseph, M.; Selva, J.; Egea, G.; Ludevid, M. D.; Martín-Badosa, E.; Montes-Usategui, M.

    2014-01-01

    Optical trapping supplies information on the structural, kinetic or rheological properties of inner constituents of the cell. However, the application of significant forces to intracellular objects is notoriously difficult due to a combination of factors, such as the small difference between the refractive indices of the target structures and the cytoplasm. Here we discuss the possibility of artificially inducing the formation of spherical organelles in the endoplasmic reticulum, which would contain densely packed engineered proteins, to be used as optimized targets for optical trapping experiments. The high index of refraction and large size of our organelles provide a firm grip for optical trapping and thereby allow us to exert large forces easily within safe irradiation limits. This has clear advantages over alternative probes, such as subcellular organelles or internalized synthetic beads. PMID:25071944

  6. Target Control in Logical Models Using the Domain of Influence of Nodes.

    PubMed

    Yang, Gang; Gómez Tejeda Zañudo, Jorge; Albert, Réka

    2018-01-01

    Dynamical models of biomolecular networks are successfully used to understand the mechanisms underlying complex diseases and to design therapeutic strategies. Network control, and its special case of target control, is a promising avenue toward developing disease therapies. In target control it is assumed that a small subset of nodes is most relevant to the system's state, and the goal is to drive the target nodes into their desired states. An example of target control would be driving a cell to commit to apoptosis (programmed cell death). From the experimental perspective, gene knockout, pharmacological inhibition of proteins, and providing sustained external signals are among the practical intervention techniques. We identify methodologies to use the stabilizing effect of sustained interventions for target control in Boolean network models of biomolecular networks. Specifically, we define the domain of influence (DOI) of a node (in a certain state) to be the nodes (and their corresponding states) that will be ultimately stabilized by the sustained state of this node, regardless of the initial state of the system. We also define the related concept of the logical domain of influence (LDOI) of a node, and develop an algorithm for its identification using an auxiliary network that incorporates the regulatory logic. In this way, a solution to the target control problem is a set of nodes whose DOI can cover the desired target node states. We perform a greedy randomized adaptive search in node state space to find such solutions. We apply our strategy to in silico biological network models of real systems to demonstrate its effectiveness.
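
    To make the domain-of-influence idea concrete, the sketch below percolates sustained node states through Boolean update rules until no further node value is logically forced. It is a brute-force stand-in for the auxiliary-network (expanded-network) algorithm described above, written in Python; the rule encoding and the small example network are invented for illustration.

    ```python
    from itertools import product

    def logical_domain_of_influence(rules, fixed):
        """Percolation-style sketch of the LDOI of a sustained intervention.

        rules : dict node -> (tuple of input nodes, boolean function of those inputs)
        fixed : dict node -> state (0/1) held constant by the intervention

        A node state joins the LDOI when its update rule is already determined by the
        states fixed so far, regardless of the remaining free inputs. Enumerating the
        free inputs is exponential; the paper's auxiliary-network algorithm avoids this.
        """
        ldoi = dict(fixed)
        changed = True
        while changed:
            changed = False
            for node, (inputs, func) in rules.items():
                if node in ldoi:
                    continue
                free = [v for v in inputs if v not in ldoi]
                outcomes = set()
                for assignment in product([0, 1], repeat=len(free)):
                    env = dict(zip(free, assignment))
                    env.update(ldoi)
                    outcomes.add(func(*[env[v] for v in inputs]))
                if len(outcomes) == 1:        # value forced by the fixed states
                    ldoi[node] = outcomes.pop()
                    changed = True
        return ldoi

    # Example: A is sustained ON; B = A AND C; D = B OR A
    rules = {"B": (("A", "C"), lambda a, c: a and c),
             "D": (("B", "A"), lambda b, a: b or a)}
    print(logical_domain_of_influence(rules, {"A": 1}))   # D is forced ON; B is not
    ```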

  7. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  8. Infrared small target detection based on Danger Theory

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Yang, Xiao

    2009-11-01

    To solve the problem that traditional methods cannot detect small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model for detecting infrared small targets is presented in this paper. First, by analogy with immunology, definitions are given for terms such as danger signal, antigen, APC, and antibody. In addition, the matching rule between antigen and antibody is improved. Prior to training the detection model and detecting the targets, the IR images are processed with an adaptive smoothing filter to decrease the stochastic noise. Then, during training, the deletion, generation, crossover, and mutation rules are established after a large number of experiments in order to achieve rapid convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect targets whose local SNR is as low as 1.5.
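
    For reference, the local SNR criterion quoted above is commonly computed as the contrast between a candidate target window and its surrounding background, normalized by the background standard deviation. The NumPy sketch below is one such definition; the window sizes and the exact formula used by the authors are assumptions.

    ```python
    import numpy as np

    def local_snr(image, center, target_size=3, bg_size=9):
        """Local SNR of a candidate small target: (mean_target - mean_bg) / std_bg.

        image       : 2-D array; center : (row, col) of the candidate pixel
        target_size : odd width of the target window
        bg_size     : odd width of the surrounding background window
        Assumes both windows fit inside the image.
        """
        image = np.asarray(image, dtype=float)
        r, c = center
        t, b = target_size // 2, bg_size // 2
        target = image[r - t:r + t + 1, c - t:c + t + 1]
        bg = image[r - b:r + b + 1, c - b:c + b + 1].copy()
        bg[b - t:b + t + 1, b - t:b + t + 1] = np.nan    # exclude the target cells
        return (target.mean() - np.nanmean(bg)) / (np.nanstd(bg) + 1e-12)
    ```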

  9. Targeted Proteomics-Driven Computational Modeling of Macrophage S1P Chemosensing*

    PubMed Central

    Manes, Nathan P.; Angermann, Bastian R.; Koppenol-Raab, Marijke; An, Eunkyung; Sjoelund, Virginie H.; Sun, Jing; Ishii, Masaru; Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra

    2015-01-01

    Osteoclasts are monocyte-derived multinuclear cells that directly attach to and resorb bone. Sphingosine-1-phosphate (S1P) regulates bone resorption by functioning as both a chemoattractant and chemorepellent of osteoclast precursors through two G-protein coupled receptors that antagonize each other in an S1P-concentration-dependent manner. To quantitatively explore the behavior of this chemosensing pathway, we applied targeted proteomics, transcriptomics, and rule-based pathway modeling using the Simmune toolset. RAW264.7 cells (a mouse monocyte/macrophage cell line) were used as model osteoclast precursors, RNA-seq was used to identify expressed target proteins, and selected reaction monitoring (SRM) mass spectrometry using internal peptide standards was used to perform absolute abundance measurements of pathway proteins. The resulting transcript and protein abundance values were strongly correlated. Measured protein abundance values, used as simulation input parameters, led to in silico pathway behavior matching in vitro measurements. Moreover, once model parameters were established, even simulated responses toward stimuli that were not used for parameterization were consistent with experimental findings. These findings demonstrate the feasibility and value of combining targeted mass spectrometry with pathway modeling for advancing biological insight. PMID:26199343

  10. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple-model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
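
    Each mode-conditioned smoother referred to above can be realized with the standard Rauch-Tung-Striebel (RTS) backward recursion. The sketch below shows that recursion for a single linear-Gaussian mode; the IMM-specific backward mixing described in the paper is not reproduced here, and the time-invariant F and Q are simplifying assumptions.

    ```python
    import numpy as np

    def rts_smooth(x_filt, P_filt, F, Q):
        """RTS backward pass for one mode-conditioned Kalman filter.

        x_filt : (N, n) filtered states x_{k|k};  P_filt : (N, n, n) filtered covariances
        F, Q   : state-transition matrix and process-noise covariance (time invariant here)
        Returns smoothed states and covariances x_{k|N}, P_{k|N}.
        """
        N, n = x_filt.shape
        xs, Ps = x_filt.copy(), P_filt.copy()
        for k in range(N - 2, -1, -1):
            x_pred = F @ x_filt[k]                        # x_{k+1|k}
            P_pred = F @ P_filt[k] @ F.T + Q              # P_{k+1|k}
            C = P_filt[k] @ F.T @ np.linalg.inv(P_pred)   # smoother gain
            xs[k] = x_filt[k] + C @ (xs[k + 1] - x_pred)
            Ps[k] = P_filt[k] + C @ (Ps[k + 1] - P_pred) @ C.T
        return xs, Ps
    ```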

  11. Targeting Trypsin-Inflammation Axis for Pancreatitis Therapy in a Humanized Pancreatitis Model

    DTIC Science & Technology

    2016-10-01

    Award Number: W81XWH-15-1-0257. Title: Targeting Trypsin-Inflammation Axis for Pancreatitis Therapy in a Humanized Pancreatitis Model. The study design remains the same since it is covered under the institutional review; monthly video conferences were set up with the partnership PI.

  12. Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.

    PubMed

    Zhang, Wen; Chen, Yanlin; Li, Dingfang

    2017-11-25

    Interactions between drugs and target proteins provide important information for drug discovery. To date, experiments have identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature space by considering how to reconstruct data points from their neighbors. Then, we take the similarities as the manifold of drugs and assume the manifold is unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets by using the drug-drug linear neighborhood similarity and the known drug-target interactions. The experiments show that LPLNI can use only known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) can produce improved performance, better than other state-of-the-art methods. The known drug-target interactions are an important information source for computational predictions. The usefulness of the proposed method is demonstrated by cross validation and a case study.
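
    The two ingredients described above, linear neighborhood similarity and label propagation, can be sketched as follows in Python/NumPy. The reconstruction weights are obtained LLE-style from a regularized local Gram matrix, which is a simplification of the paper's formulation (for example, no non-negativity constraint is enforced), and the parameter values are arbitrary.

    ```python
    import numpy as np

    def neighborhood_weights(X, k=5, reg=1e-3):
        """Row-stochastic similarity W: each sample reconstructed from its k nearest neighbors."""
        n = X.shape[0]
        W = np.zeros((n, n))
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        for i in range(n):
            nbrs = np.argsort(d[i])[1:k + 1]          # skip the point itself
            Z = X[nbrs] - X[i]                        # centred neighbors
            G = Z @ Z.T + reg * np.eye(k)             # regularized local Gram matrix
            w = np.linalg.solve(G, np.ones(k))
            W[i, nbrs] = w / w.sum()                  # weights sum to one
        return W

    def label_propagation(W, Y0, alpha=0.8, iters=100):
        """Propagate known drug-target interaction labels Y0 over the similarity graph."""
        Y = Y0.astype(float).copy()
        for _ in range(iters):
            Y = alpha * W @ Y + (1 - alpha) * Y0
        return Y
    ```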

  13. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    PubMed

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of target shape and movement direction on pointing time using an eye-gaze input system, and to extend Fitts' model so that these factors are incorporated and the predictive power of the model is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: up, down, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that took the movement direction and the target shape into account. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time in a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
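
    As a minimal illustration of how such an extension can be fitted, the sketch below regresses pointing time on the Shannon index of difficulty plus a simple direction term. The particular extra regressors are assumptions for the example; the paper's extended model may parameterize shape and direction differently.

    ```python
    import numpy as np

    def fit_extended_fitts(D, W, T, direction):
        """Least-squares fit of a Fitts-type model with an illustrative direction term.

        T ≈ a + b * log2(D / W + 1) + c * sin(direction)
        D, W, T   : movement distance, target width, measured pointing time (arrays)
        direction : movement direction in radians
        Returns the coefficients [a, b, c].
        """
        ID = np.log2(np.asarray(D) / np.asarray(W) + 1.0)   # Shannon index of difficulty
        A = np.column_stack([np.ones_like(ID), ID, np.sin(direction)])
        coef, *_ = np.linalg.lstsq(A, np.asarray(T), rcond=None)
        return coef
    ```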

  14. The Unintended Consequences of Targeting: Young People's Lived Experiences of Social and Emotional Learning Interventions

    ERIC Educational Resources Information Center

    Evans, Rhiannon; Scourfield, Jonathan; Murphy, Simon

    2015-01-01

    In the past twenty years there has been a proliferation of targeted school-based social and emotional learning (SEL) interventions. However, the lived experience of young people's participation is often elided, while the potential for interventions to confer unintended and even adverse effects remains under-theorised and empirically…

  15. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provides a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.

  16. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  17. Kinetic analysis of the effects of target structure on siRNA efficiency

    NASA Astrophysics Data System (ADS)

    Chen, Jiawen; Zhang, Wenbing

    2012-12-01

    RNAi efficiency for target cleavage and protein expression is related to the target structure. Considering the RNA-induced silencing complex (RISC) as a multiple-turnover enzyme, we investigated the effect of target mRNA structure on siRNA efficiency with kinetic analysis. A 4-step model was used to study the target cleavage kinetic process: hybridization nucleation at an accessible target site, RISC-mRNA hybrid elongation along with melting of the mRNA target structure, target cleavage, and enzyme reactivation. In this model, terms accounting for target accessibility, stability, and the seed and nucleation-site effects are all included. The results are in good agreement with those of experiments, which have reported differing conclusions about the effects of target structure on siRNA efficiency. The analysis shows that siRNA efficiency is influenced by the combined factors of target accessibility, stability, and seed effects. To study off-target effects, a simple model of one siRNA binding to two mRNA targets was designed. Using this model, the possibility of diminishing off-target effects by adjusting the siRNA concentration was discussed.
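
    A minimal way to explore a 4-step multiple-turnover scheme like the one described above is to integrate mass-action ODEs for the intermediate complexes. The sketch below (SciPy) uses invented rate constants and species names and is only a structural illustration, not the authors' rate equations.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def risc_kinetics(t, y, k_nuc, k_elong, k_cleave, k_react):
        """Toy multiple-turnover scheme (illustrative only):

            R + M -> C1   (nucleation at an accessible site)
            C1    -> C2   (hybrid elongation / target structure melting)
            C2    -> Cp   (cleavage)
            Cp    -> R    (enzyme reactivation, releasing product P)
        """
        R, M, C1, C2, Cp, P = y
        v1 = k_nuc * R * M
        v2 = k_elong * C1
        v3 = k_cleave * C2
        v4 = k_react * Cp
        return [-v1 + v4, -v1, v1 - v2, v2 - v3, v3 - v4, v4]

    y0 = [0.01, 1.0, 0.0, 0.0, 0.0, 0.0]   # [RISC, mRNA, C1, C2, Cp, cleaved product]
    sol = solve_ivp(risc_kinetics, (0.0, 500.0), y0,
                    args=(1.0, 0.5, 0.2, 0.1), dense_output=True)
    ```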

  18. Implementing demand side targeting mechanisms for maternal and child health-experiences from national health insurance fund program in Rungwe District, Tanzania.

    PubMed

    Kuwawenaruwa, August; Mtei, Gemini; Baraka, Jitihada; Tani, Kassimu

    2016-08-02

    Low- and middle-income countries have adopted targeting mechanisms as a means of increasing program efficiency in reaching marginalized people in the community given the available resources. The design of targeting mechanisms has been changing over time, and it is important to understand implementers' experience with such mechanisms, since they affect equity in access to and use of maternal health care services. A case study approach was considered appropriate for exploring implementers' and decision-makers' experiences with the two targeting mechanisms. In-depth interviews were conducted to explore implementers' experience with the two targeting mechanisms: a total of 10 in-depth interviews (IDIs) and 4 group discussions (GDs) were held with implementers at the national, regional, district, and health facility levels. A thematic analysis approach was adopted during data analysis. The whole process of screening and identifying poor pregnant women resulted in delays in implementation of the intervention. Individual targeting was perceived to involve some form of stigmatization; hence beneficiaries did not like to be labelled as poor. Geographical targeting had a few drawbacks, as health care providers experienced an increase in workload while staffing remained the same, and the quality of information in the claim forms was poor. However, geographical targeting increased the number of women going to higher levels of care (district/regional referral hospitals) and increased facility revenue and insurance coverage. Interventions that use targeting mechanisms to reach poor people are useful in increasing access to and use of health care services for marginalized communities so long as they are well designed and beneficiaries, implementers, and decision makers are involved from the very beginning. Implementation of demand-side financing strategies using targeting mechanisms should go together with supply-side interventions in order to achieve project

  19. The TARGET project in Tuscany: the first disease management model of a regional project for the prevention of hip re-fractures in the elderly.

    PubMed

    Piscitelli, Prisco; Brandi, Maria Luisa; Nuti, Ranuccio; Rizzuti, Carla; Giorni, Loredano; Giovannini, Valtere; Metozzi, Alessia; Merlotti, Daniela

    2010-09-01

    The official inquiry on osteoporosis in Italy, promoted by the Italian Senate in 2002, concluded that proper preventive strategies should be adopted at the regional level in order to prevent osteoporotic fractures. Tuscany is the first Italian region to have promoted an official program (the TARGET project) aimed at reducing osteoporotic fractures by ensuring adequate treatment to all people aged ≥65 years who experience a hip fragility fracture. This paper provides information concerning the implementation of the TARGET project in Tuscany, assuming that it may represent a useful model for similar experiences to be promoted in other Italian regions and across Europe. We have examined the model proposed for the regional program, and we have particularly analyzed the in-hospital and post-hospitalization path of hip-fractured patients aged >65 years in Tuscany after the adoption of the TARGET project by the Tuscany healthcare system and during its ongoing start-up phase. Orthopaedic surgeons have been gradually involved in the project and are increasingly fulfilling all the clinical prescriptions and recommendations provided in the project protocol. Different forms of cooperation between orthopaedic surgeons and other clinical specialists have been adopted at each hospital for the treatment of hip-fractured elderly patients. GP involvement needs to be fostered both at the regional and local level. The effort of the Tuscany region to cope with hip fractures suffered by elderly people must be acknowledged as an interesting way of addressing this critical health problem. Specific preventive strategies modelled on the Tuscany TARGET project should be implemented in other Italian regions.

  20. 'Targeting' sedation: the lived experience of the intensive care nurse.

    PubMed

    Everingham, Kirsty; Fawcett, Tonks; Walsh, Tim

    2014-03-01

    To discuss the findings from a phenomenological study that provides insights into the intensive care nurses' 'world' following changes in the sedation management of patients in an intensive care unit. Intensive care sedation practices have undergone significant changes. Patients, where possible, are now managed on lighter levels of sedation, often achieved through the performance of sedation holds (SHs). The performance of SHs is normally carried out by the bedside nurse, but compliance is reported to be poor. There has been little exploration of the nurses' experiences of these changes and the implications of SHs and subsequent wakefulness on their delivery of care. Following ethical approval, 16 intensive care nurses, experienced and inexperienced, were recruited from within a general intensive care unit. A Heideggerian phenomenological approach was used. Data collection consisted of interviews guided by an aide-memoire, and a framework adapted from Van Manen informed the analysis. The findings reveal new insights into the world of the intensive care nurse in the light of the changes to sedation management. They demonstrate that there have been unforeseen outcomes from well-intentioned initiatives to improve the quality of patients' care. The changes introduced had implications for the nurses' care delivery. The main themes that emerged were 'working priorities' and 'unintended consequences', in turn revealing embedded tensions between evidence-based targets and holistic care. Intensive care nurses find that the current approach to the changes in sedation management can threaten their professional obligation and personal desire to provide holistic care. The 'targeted' approach by healthcare organisations is perceived to militate against the patient-centred care they want to deliver. Sedation management is complex and needs further consideration, particularly the potential constraints 'target-led' care places on nursing practice. © 2013 Blackwell Publishing Ltd.

  1. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is to make a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second-order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This therefore includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs best, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second-moment equations for a buoyant plume. Despite the tremendous amount of activity toward second-order closure modeling of turbulence, very little experimental information is available about the budgets of the second-moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, the pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  2. Evanescent acoustic waves: Production and scattering by resonant targets

    NASA Astrophysics Data System (ADS)

    Osterhoudt, Curtis F.

    Small targets with acoustic resonances which may be excited by incident acoustic plane waves are shown to possess high-Q modes ("organ-pipe" modes) which may be suitable for ocean-based calibration and ranging purposes. The modes are modeled using a double point-source model; this, along with acoustic reciprocity and inversion symmetry, is shown to adequately model the backscattering form functions of the modes at low frequencies. The backscattering form functions are extended to apply to any bistatic acoustic experiment using the targets when the target response is dominated by the modes in question. An interface between two fluids which each approximate an unbounded half-space has been produced in the laboratory. The fluids have different sound speeds. When sound is incident on this interface at beyond the critical angle from within the first fluid, the second fluid is made to evince a region dominated by evanescent acoustic energy. Such a system is shown to be a possible laboratory-based proxy for a flat sediment bottom in the ocean, or a sloped (unrippled) bottom in littoral environments. The evanescent sound field is characterized and shown to have complicated features despite the simplicity of its production. Notable among these features is the presence of dips in the soundfield amplitude, or "quasi-nulls". These are proposed to be extremely important when considering the return from ocean-based experiments. The soundfield features are also shown to be accurately predicted and characterized by wavenumber-integration software. The targets which exhibit organ-pipe modes in the free field are shown to also be excited by the evanescent waves, and may be used as soundfield probes when the target returns are well characterized. Alternatively, if the soundfield is well known, the target parameters may be extracted from backscattering or bistatic-scattering experiments in evanescent fields. It is shown that the spatial decay rate as measured by a probe directly in the evanescent
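
    The basic geometry described above can be summarized numerically: beyond the critical angle the transmitted vertical wavenumber becomes imaginary and the field decays exponentially with depth. The sketch below computes the critical angle and the 1/e decay depth for assumed fluid sound speeds; it ignores density contrast and finite-beam effects.

    ```python
    import numpy as np

    def evanescent_decay_length(f, c1, c2, theta_inc_deg):
        """Critical angle and 1/e decay depth of the evanescent field in fluid 2.

        f             : frequency in Hz
        c1, c2        : sound speeds in the incidence and transmission fluids (c2 > c1)
        theta_inc_deg : incidence angle from the interface normal, in degrees
        """
        theta_c = np.degrees(np.arcsin(c1 / c2))          # critical angle
        kx = 2 * np.pi * f / c1 * np.sin(np.radians(theta_inc_deg))
        k2 = 2 * np.pi * f / c2
        if kx <= k2:
            return theta_c, np.inf                        # transmitted wave still propagates
        kappa = np.sqrt(kx**2 - k2**2)                    # exponential decay constant
        return theta_c, 1.0 / kappa
    ```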

  3. Three dimensional modelling for the target asteroid of HAYABUSA

    NASA Astrophysics Data System (ADS)

    Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.

    The Hayabusa program is the first sample return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and a telescopic one. The telescope with a filter wheel was named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope; expected resolution: 1 m/pixel at 10 km distance; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels. Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling with about 1 m precision on the basis of a camera system that scans through rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a Laser Range Finder. We established a system of hierarchically recursive search with sub-pixel matching of Ground Control Points, which are picked up with the SUSAN operator. The matched dataset is restored under the restriction of epipolar geometry, and the obtained group of three-dimensional points is converted to a polygon model with Delaunay Triangulation. The current status of our development of the shape modeling is displayed.
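
    The final mesh-building step described above (matched ground control points to a polygon model) can be sketched with SciPy's Delaunay triangulation. Triangulating in a reference image plane and lifting the triangles to their 3-D positions is a common 2.5-D simplification assumed here; the actual pipeline may build the mesh differently.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def triangulate_surface(points_3d, image_uv):
        """Build a triangle model from matched ground-control points.

        points_3d : (N, 3) reconstructed 3-D point coordinates
        image_uv  : (N, 2) their coordinates in a reference image
        Returns an array of shape (n_triangles, 3, 3) with the 3-D vertices of each triangle.
        """
        tri = Delaunay(image_uv)            # triangle connectivity from the image plane
        return points_3d[tri.simplices]     # lift each triangle to its 3-D vertices
    ```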

  4. Implosion of multilayered cylindrical targets driven by intense heavy ion beams.

    PubMed

    Piriz, A R; Portugues, R F; Tahir, N A; Hoffmann, D H H

    2002-11-01

    An analytical model for the implosion of a multilayered cylindrical target driven by an intense heavy ion beam has been developed. The target is composed of a cylinder of frozen hydrogen or deuterium, which is enclosed in a thick shell of solid lead. This target has been designed for future high-energy-density matter experiments to be carried out at the Gesellschaft für Schwerionenforschung, Darmstadt. The model describes the implosion dynamics including the motion of the incident shock and the first reflected shock and allows for calculation of the physical conditions of the hydrogen at stagnation. The model predicts that the conditions of the compressed hydrogen are not sensitive to significant variations in target and beam parameters. These predictions are confirmed by one-dimensional numerical simulations and thus allow for a robust target design.

  5. HARP targets pion production cross section and yield measurements: Implications for MiniBooNE neutrino flux

    NASA Astrophysics Data System (ADS)

    Wickremasinghe, Don Athula Abeyarathna

    The prediction of the muon neutrino flux from a 71.0 cm long beryllium target for the MiniBooNE experiment is based on a measured pion production cross section which was taken from a short beryllium target (2.0 cm thick, 5% of a nuclear interaction length) in the Hadron Production (HARP) experiment at CERN. To verify the extrapolation to our longer target, HARP also measured the pion production from 20.0 cm and 40.0 cm beryllium targets. The measured production yields on targets of 50% and 100% nuclear interaction lengths, in the kinematic range of momentum from 0.75 GeV/c to 6.5 GeV/c and of angle from 30 mrad to 210 mrad, are presented along with an update of the short-target cross sections. The best-fit extended Sanford-Wang (SW) model parameterization for the updated short-beryllium-target positive pion production cross section is presented. Yield measurements for all three targets are also compared with the Monte Carlo predictions in the MiniBooNE experiment for different SW parameterizations. The comparison of muon neutrino flux predictions for the updated SW model is presented.

  6. Targeting the minimal supersymmetric standard model with the compact muon solenoid experiment

    NASA Astrophysics Data System (ADS)

    Bein, Samuel Louis

    An interpretation of CMS searches for evidence of supersymmetry in the context of the minimal supersymmetric Standard Model (MSSM) is given. It is found that supersymmetric particles with color charge are excluded in the mass range below about 400 GeV, but neutral and weakly-charged sparticles remain non-excluded in all mass ranges. Discussion of the non-excluded regions of the model parameter space is given, including details on the strengths and weaknesses of existing searches, and recommendations for future analysis strategies. Advancements in the modeling of events arising from quantum chromodynamics and electroweak boson production, which are major backgrounds in searches for new physics at the LHC, are also presented. These methods have been implemented as components of CMS searches for supersymmetry in proton-proton collisions resulting in purely hadronic events (i.e., events with no identified leptons) at a center of momentum energy of 13 TeV. These searches, interpreted in the context of simplified models, exclude supersymmetric gluons (gluinos) up to masses of 1400 to 1600 GeV, depending on the model considered, and exclude scalar top quarks with masses up to about 800 GeV, assuming a massless lightest supersymmetric particle. A search for non-excluded supersymmetry models is also presented, which uses multivariate discriminants to isolate potential signal candidate events. The search achieves sensitivity to new physics models in background-dominated kinematic regions not typically considered by analyses, and rules out supersymmetry models that survived 7 and 8 TeV searches performed by CMS.

  7. Drug synergy screen and network modeling in dedifferentiated liposarcoma identifies CDK4 and IGF1R as synergistic drug targets.

    PubMed

    Miller, Martin L; Molinelli, Evan J; Nair, Jayasree S; Sheikh, Tahir; Samy, Rita; Jing, Xiaohong; He, Qin; Korkut, Anil; Crago, Aimee M; Singer, Samuel; Schwartz, Gary K; Sander, Chris

    2013-09-24

    Dedifferentiated liposarcoma (DDLS) is a rare but aggressive cancer with high recurrence and low response rates to targeted therapies. Increasing treatment efficacy may require combinations of targeted agents that counteract the effects of multiple abnormalities. To identify a possible multicomponent therapy, we performed a combinatorial drug screen in a DDLS-derived cell line and identified cyclin-dependent kinase 4 (CDK4) and insulin-like growth factor 1 receptor (IGF1R) as synergistic drug targets. We measured the phosphorylation of multiple proteins and cell viability in response to systematic drug combinations and derived computational models of the signaling network. These models predict that the observed synergy in reducing cell viability with CDK4 and IGF1R inhibitors depends on the activity of the AKT pathway. Experiments confirmed that combined inhibition of CDK4 and IGF1R cooperatively suppresses the activation of proteins within the AKT pathway. Consistent with these findings, synergistic reductions in cell viability were also found when combining CDK4 inhibition with inhibition of either AKT or epidermal growth factor receptor (EGFR), another receptor similar to IGF1R that activates AKT. Thus, network models derived from context-specific proteomic measurements of systematically perturbed cancer cells may reveal cancer-specific signaling mechanisms and aid in the design of effective combination therapies.

  8. Hydrodynamic simulations of long-scale-length plasmas for two-plasmon-decay planar-target experiments on the NIF

    NASA Astrophysics Data System (ADS)

    Solodov, A. A.; Rosenberg, M. J.; Myatt, J. F.; Epstein, R.; Regan, S. P.; Seka, W.; Shaw, J.; Hohenberger, M.; Bates, J. W.; Moody, J. D.; Ralph, J. E.; Turnbull, D. P.; Barrios, M. A.

    2016-05-01

    The two-plasmon-decay (TPD) instability can be detrimental for direct-drive inertial confinement fusion because it generates high-energy electrons that can preheat the target, thereby reducing target performance. Hydrodynamic simulations to design a new experimental platform to investigate TPD and other laser-plasma instabilities relevant to direct-drive-ignition implosions at the National Ignition Facility are presented. The proposed experiments utilize planar plastic targets with an embedded Mo layer to characterize generation of hot electrons through Mo Kα fluorescence and hard x-ray emission. Different laser-irradiation geometries approximate conditions near both the equator and the pole of a polar-direct-drive implosion.

  9. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.

    PubMed

    Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, "kinematics + optics" hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
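
    The registration step mentioned above (quaternion plus iterative closest point) reduces, at each ICP iteration, to a best-fit rigid transform between matched point sets. The sketch below solves that sub-problem with the SVD-based Kabsch method, which yields the same optimum as the quaternion solution; the variable names are assumptions.

    ```python
    import numpy as np

    def rigid_align(P, Q):
        """Best-fit rotation R and translation t mapping point set P onto Q.

        P, Q : (N, 3) corresponding points (e.g. one ICP iteration's matched pairs).
        """
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t
    ```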

  10. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    PubMed Central

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  11. Nail-like targets for laser plasma interaction experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasley, J; Wei, M; Shipton, E

    2007-12-18

    The interaction of ultra-high power picosecond laser pulses with solid targets is of interest both for benchmarking the results of hybrid particle in cell (PIC) codes and also for applications to re-entrant cone guided fast ignition. We describe the construction of novel targets in which copper/titanium wires are formed into 'nail-like' objects by a process of melting and micromachining, so that energy can be reliably coupled to a 24 {micro}m diameter wire. An extreme-ultraviolet image of the interaction of the Titan laser with such a target is shown.

  12. Dissecting and Targeting Latent Metastasis

    DTIC Science & Technology

    2014-09-01

    metastasis of breast cancer (LMBC). These cells retain the potential to form overt metastasis for years. Targeting LMBC with new drugs offers an... cancer cell extravasation through the BBB in experimental models and predict brain metastasis in the clinic (16). Once inside the brain parenchyma... drugs that perturb gap junction activity (Chen et al., submitted for publication). In our experiments, both drugs as single agents were effective

  13. Pose measurement method and experiments for high-speed rolling targets in a wind tunnel.

    PubMed

    Jia, Zhenyuan; Ma, Xin; Liu, Wei; Lu, Wenbo; Li, Xiao; Chen, Ling; Wang, Zhengqu; Cui, Xiaochun

    2014-12-12

    High-precision wind tunnel simulation tests play an important role in aircraft design and manufacture. In this study, a high-speed pose vision measurement method is proposed for high-speed, rolling targets in a supersonic wind tunnel. To obtain images with a high signal-to-noise ratio and avoid impacts on the aerodynamic shape of the rolling targets, a high-speed image acquisition method based on ultrathin retro-reflective markers is presented. Since the markers are small and some of them may be lost when the target is rolling, a novel marker layout in which markers are distributed evenly on the surface is proposed, based on a spatial coding method, to achieve highly accurate pose information. Additionally, pose acquisition is carried out according to this marker layout after removing mismatched points by Case Deletion Diagnostics. Finally, experiments on measuring the pose parameters of high-speed targets in the laboratory and in a supersonic wind tunnel are conducted to verify the feasibility and effectiveness of the proposed method. Experimental results indicate that the position measurement precision is less than 0.16 mm, the pitch and yaw angle precision is less than 0.132°, and the roll angle precision is 0.712°.
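
    Once the coded markers are matched to their known positions on the model, the pose recovery described above is essentially a perspective-n-point problem. The sketch below uses OpenCV's solvePnP as one standard way to solve it; the camera calibration inputs and the wrapper function are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    import cv2

    def marker_pose(object_pts, image_pts, K, dist=None):
        """Recover the rolling model's pose from matched retro-reflective markers.

        object_pts : (N, 3) marker coordinates in the model frame (from the coded layout)
        image_pts  : (N, 2) detected marker centres in the high-speed image (N >= 4)
        K, dist    : camera intrinsic matrix and distortion coefficients from calibration
        Returns the rotation matrix and translation vector of the model in the camera frame.
        """
        dist = np.zeros(5) if dist is None else dist
        ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float32),
                                      image_pts.astype(np.float32), K, dist)
        R, _ = cv2.Rodrigues(rvec)     # axis-angle to rotation matrix
        return R, tvec
    ```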

  14. Pose Measurement Method and Experiments for High-Speed Rolling Targets in a Wind Tunnel

    PubMed Central

    Jia, Zhenyuan; Ma, Xin; Liu, Wei; Lu, Wenbo; Li, Xiao; Chen, Ling; Wang, Zhengqu; Cui, Xiaochun

    2014-01-01

    High-precision wind tunnel simulation tests play an important role in aircraft design and manufacture. In this study, a high-speed pose vision measurement method is proposed for high-speed, rolling targets in a supersonic wind tunnel. To obtain images with a high signal-to-noise ratio and avoid impacts on the aerodynamic shape of the rolling targets, a high-speed image acquisition method based on ultrathin retro-reflective markers is presented. Since the markers are small and some of them may be lost when the target is rolling, a novel marker layout in which markers are distributed evenly on the surface is proposed, based on a spatial coding method, to achieve highly accurate pose information. Additionally, pose acquisition is carried out according to this marker layout after removing mismatched points by Case Deletion Diagnostics. Finally, experiments on measuring the pose parameters of high-speed targets in the laboratory and in a supersonic wind tunnel are conducted to verify the feasibility and effectiveness of the proposed method. Experimental results indicate that the position measurement precision is less than 0.16 mm, the pitch and yaw angle precision is less than 0.132°, and the roll angle precision is 0.712°. PMID:25615732

  15. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    PubMed

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuing high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of the MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of the peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology-modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate the root mean square deviation (RMSD) and radius of gyration (Rg) of the MurI and MurG target proteins and their corresponding templates. For further model validation, the RMSD and Rg of selected targets/templates were examined to compare how closely their dynamic behavior agrees in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information for all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first account of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.
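
    The two trajectory metrics used above for model validation, RMSD and radius of gyration, are straightforward to compute once coordinates are extracted from the MD trajectory. The NumPy sketch below shows both; it assumes frames have already been superposed onto the reference structure, which production analyses would do explicitly.

    ```python
    import numpy as np

    def rmsd(coords, ref):
        """Root-mean-square deviation between one frame and a reference (pre-aligned)."""
        return np.sqrt(((coords - ref) ** 2).sum(axis=1).mean())

    def radius_of_gyration(coords, masses=None):
        """Mass-weighted radius of gyration of one frame (unit masses by default)."""
        m = np.ones(len(coords)) if masses is None else np.asarray(masses, float)
        com = (coords * m[:, None]).sum(axis=0) / m.sum()
        return np.sqrt((m * ((coords - com) ** 2).sum(axis=1)).sum() / m.sum())
    ```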

  16. Introducing a simple model system for binding studies of known and novel inhibitors of AMPK: a therapeutic target for prostate cancer.

    PubMed

    Kumar, Rakesh; Maurya, Ranjana; Saran, Shweta

    2018-02-23

    Prostate cancer (PC) is one of the leading cancers in men and a serious health issue worldwide. The lack of suitable biomarkers, inhibitors, and platforms for testing those inhibitors results in poor prognosis of PC. AMP-activated protein kinase (AMPK) is a highly conserved protein kinase found in eukaryotes that is involved in growth and development and also acts as a therapeutic target for PC. The aim of the present study is to identify novel potent inhibitors of AMPK and to propose a simple cellular model system for understanding its biology. Structural modelling and MD simulations were performed to construct and refine the 3D models of Dictyostelium and human AMPK. The binding mechanisms of different drug compounds were studied by performing molecular docking, molecular dynamics, and MM-PBSA calculations. Two novel drugs were identified having higher binding affinity than the known drugs, with hydrophobic forces playing a key role in the protein-ligand interactions. The study also explored the simple cellular model system for drug screening and for understanding the biology of a therapeutic target by performing in vitro experiments.

  17. Impact experiments onto heterogeneous targets and their interpretation in relation with formation of the asteroid families

    NASA Astrophysics Data System (ADS)

    Leliwa-Kopystynski, J.; Arakawa, M.

    2014-07-01

    Results of laboratory impact experiments, when extrapolated to the planetary scale of events, are aimed at a better understanding of the cratering and/or disruption of asteroids, satellites, and cometary nuclei. There is absolutely no reason to assume that these bodies are uniform rocky or icy monoliths. We therefore studied the response of heterogeneous targets to impacts. A series of impact experiments onto solid decimeter-sized cylinders made of porous gypsum mixed with approximately one-centimeter-sized pebbles has been performed. The mean density of the target material was 1867 kg m^{-3}, the mean mass ratio (pebbles / gypsum) = 0.856 / 0.144, and the mean volume ratio (pebbles / gypsum / pores) = 0.585 / 0.116 / 0.299. The target densities and their heterogeneous structures could be representative of those of the asteroids Ida, Eros, and many others, because asteroid sub-surface volumes could be composed of consolidated boulders formed by self-compaction and/or by impact compaction. Impact velocities in the experiments ranged from 2.0 km/s to 6.7 km/s (the collision velocity in the asteroid main belt is approximately 5 km/s). By weighing and counting the post-impact fragments, their distribution function was found. Let Q [J/kg] be the specific energy of impact per unit of target mass. Of particular interest is the value of the impact strength, that is, the specific energy of disruption Q^*, corresponding to the ratio (mass of the largest fragment) / (mass of the target) = m_l/M = 0.5, which is, by convention, the value separating cratering events from catastrophic disruption impacts. The mass or size distribution of the post-impact fragments is expressed by the power law N ∝ m^{-p} ∝ r^{-3p}, with p = p(Q/Q^*). A parameter that can be measured in the laboratory is the exponent p. For the case of a swarm of asteroids forming an asteroid family, the observationally estimated value is not the exponent p but rather the exponent q = 3p, since the sizes
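
    The exponent p of the fragment distribution quoted above can be estimated from the measured fragment masses by a cumulative log-log fit, as sketched below. This simple least-squares estimator is an illustration only; maximum-likelihood estimators are generally preferred for power-law tails.

    ```python
    import numpy as np

    def fragment_power_law_exponent(masses):
        """Estimate p in N(>m) ∝ m^(-p) from post-impact fragment masses.

        masses : 1-D array of fragment masses (all positive).
        Fits a straight line to log(cumulative count) versus log(mass).
        """
        m = np.sort(np.asarray(masses, dtype=float))
        N_cum = np.arange(len(m), 0, -1)                   # number of fragments with mass >= m
        slope, _ = np.polyfit(np.log(m), np.log(N_cum), 1)
        return -slope                                      # the exponent p
    ```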

  18. Identification of the feedforward component in manual control with predictable target signals.

    PubMed

    Drop, Frank M; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus M; Mulder, Max

    2013-12-01

    In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was performed in which the relative strengths of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior can be modeled best by the combined feedforward and feedback model.
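
    The ARX analysis mentioned above amounts to a linear least-squares regression of the operator's output on delayed output and input samples. The sketch below shows a minimal single-input version; the signal names, model orders, and the absence of a separate feedforward branch are simplifying assumptions.

    ```python
    import numpy as np

    def fit_arx(y, u, na=2, nb=2):
        """Least-squares fit of y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j] + e[k].

        y : measured operator output (e.g. control deflection)
        u : input signal (e.g. tracking error, or the target signal for a feedforward path)
        Returns the estimated a and b coefficient vectors.
        """
        y = np.asarray(y, float)
        u = np.asarray(u, float)
        n = max(na, nb)
        rows, targets = [], []
        for k in range(n, len(y)):
            rows.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
            targets.append(y[k])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta[:na], theta[na:]
    ```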

  19. Requirements and Capabilities for Fielding Cryogenic DT-Containing Fill-Tube Targets for Direct-Drive Experiments on OMEGA

    DOE PAGES

    Harding, D. R.; Ulreich, J.; Wittman, M. D.; ...

    2017-12-06

    Improving the performance of direct-drive cryogenic targets at the Omega Laser Facility requires the development of a new cryogenic system to (i) field nonpermeable targets with a fill tube, and (ii) provide a clean environment around the target. This capability is to demonstrate that imploding a scaled-down version of the direct-drive-ignition target for the National Ignition Facility (NIF) on the OMEGA laser will generate the hot-spot pressure that is needed for ignition; this will justify future cryogenic direct-drive experiments on the NIF. The paper describes the target, the cryogenic equipment that is being constructed to achieve this goal, and the proposed target delivery process. Thermal calculations, fill-tube-based target designs, and structural/vibrational analyses are provided to demonstrate the credibility of the design. This new design will include capabilities not available (or possible) with the existing OMEGA cryogenic system, with the emphasis being to preserve a pristinely clean environment around the target, and to provide upgraded diagnostics to characterize both the ice layer and the target's surface. The conceptual design is complete and testing of prototypes and subcomponents is underway. The rationale and capabilities of the new design are discussed.

  20. Requirements and Capabilities for Fielding Cryogenic DT-Containing Fill-Tube Targets for Direct-Drive Experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, D. R.; Ulreich, J.; Wittman, M. D.

    Improving the performance of direct-drive cryogenic targets at the Omega Laser Facility requires the development of a new cryogenic system to (i) field nonpermeable targets with a fill tube, and (ii) provide a clean environment around the target. This capability is to demonstrate that imploding a scaled-down version of the direct-drive-ignition target for the National Ignition Facility (NIF) on the OMEGA laser will generate the hot-spot pressure that is needed for ignition; this will justify future cryogenic direct-drive experiments on the NIF. The paper describes the target, the cryogenic equipment that is being constructed to achieve this goal, and the proposed target delivery process. Thermal calculations, fill-tube-based target designs, and structural/vibrational analyses are provided to demonstrate the credibility of the design. This new design will include capabilities not available (or possible) with the existing OMEGA cryogenic system, with the emphasis being to preserve a pristinely clean environment around the target, and to provide upgraded diagnostics to characterize both the ice layer and the target's surface. The conceptual design is complete and testing of prototypes and subcomponents is underway. The rationale and capabilities of the new design are discussed.

  1. Causal Modeling the Delayed-Choice Experiment

    NASA Astrophysics Data System (ADS)

    Chaves, Rafael; Lemos, Gabriela Barreto; Pienaar, Jacques

    2018-05-01

    Wave-particle duality has become one of the flagships of quantum mechanics. This counterintuitive concept is highlighted in a delayed-choice experiment, where the experimental setup that reveals either the particle or wave nature of a quantum system is decided after the system has entered the apparatus. Here we consider delayed-choice experiments from the perspective of device-independent causal models and show their equivalence to a prepare-and-measure scenario. Within this framework, we consider Wheeler's original proposal and its variant using a quantum control and show that a simple classical causal model is capable of reproducing the quantum mechanical predictions. Nonetheless, among other results, we show that, in a slight variant of Wheeler's gedanken experiment, a photon in an interferometer can indeed generate statistics incompatible with any nonretrocausal hidden variable model whose dimensionality is the same as that of the quantum system it is supposed to mimic. Our proposal tolerates arbitrary losses and inefficiencies, making it especially suited to loophole-free experimental implementations.

  2. An empirical model for transient crater growth in granular targets based on direct observations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Satoru; Barnouin-Jha, Olivier S.; Toriumi, Takashi; Sugita, Seiji; Matsui, Takafumi

    2009-09-01

    The present paper describes observations of crater growth up to the time of transient crater formation and presents a new empirical model for transient crater growth as a function of time. Polycarbonate projectiles were impacted vertically into soda-lime glass sphere targets using a single-stage light-gas gun. Using a new technique with a laser sheet illuminating the target [Barnouin-Jha, O.S., Yamamoto, S., Toriumi, T., Sugita, S., Matsui, T., 2007. Non-intrusive measurements of the crater growth. Icarus, 188, 506-521], we measured the temporal change in the diameter of the crater cavities (diameter growth). The rate of increase in diameter at early times follows a power-law relation, but the data at later times (before the end of transient crater formation) deviate from the power law. In addition, the power-law exponent at early times and the degree of deviation from a power law at later times depend on the target. In order to interpret these features, we proposed to modify Maxwell's Z-model under the assumption that the strength of the excavation flow field decreases exponentially with time. We also derived a diameter growth model of the form d(t) ∝ [1 − exp(−βt)]^γ, where d(t) is the apparent diameter of the crater cavity at time t after impact, and β and γ are constants. We demonstrated that this diameter growth model represents the experimental data well for various targets with different material properties, such as porosity or angle of repose. We also investigated the diameter growth for a dry sand target, which has been used to formulate previous scaling relations. The results showed that the dry sand target has a larger degree of deviation from a power law, indicating that the target material properties of the dry sand target have a significant effect on diameter growth, especially at later times. This may suggest that the previously reported scaling relations should be reexamined in order to account for the late-stage behavior with the
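    The saturating growth law quoted above can be fitted directly to diameter-versus-time data. Below is a minimal sketch assuming synthetic data and invented parameter values (they are not the values reported in the paper), showing how the form d(t) ∝ [1 − exp(−βt)]^γ might be fitted with an ordinary least-squares routine:

```python
# Hypothetical illustration: fitting the diameter-growth law d(t) = d_final*[1 - exp(-beta*t)]**gamma
# to synthetic crater-growth data. All numbers (d_final, beta, gamma, noise) are invented.
import numpy as np
from scipy.optimize import curve_fit

def diameter_growth(t, d_final, beta, gamma):
    """Apparent crater diameter at time t after impact."""
    return d_final * (1.0 - np.exp(-beta * t)) ** gamma

rng = np.random.default_rng(0)
t_obs = np.linspace(0.001, 0.05, 40)                       # seconds (placeholder)
d_true = diameter_growth(t_obs, d_final=0.12, beta=90.0, gamma=0.45)
d_obs = d_true * (1.0 + 0.02 * rng.standard_normal(t_obs.size))  # 2% synthetic noise

popt, pcov = curve_fit(diameter_growth, t_obs, d_obs, p0=[0.1, 50.0, 0.5])
print("fitted d_final, beta, gamma:", popt)
```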

  3. Polyurethane Foam Impact Experiments and Simulations

    NASA Astrophysics Data System (ADS)

    Kipp, M. E.; Chhabildas, L. C.; Reinhart, W. D.; Wong, M. K.

    1999-06-01

    Uniaxial strain impact experiments with a rigid polyurethane foam of nominal density 0.22 g/cc are reported. A 6 mm thick foam impactor is mounted on the face of a projectile and impacts a thin (1 mm) target plate of aluminum or copper, on which the rear free-surface velocity history is acquired with a VISAR. Impact velocities ranged from 300 to 1500 m/s. The velocity record monitors the initial shock from the foam transmitted through the target, followed by a reverberation within the target plate as the wave interacts with the compressed foam at the impact interface and the free recording surface. These one-dimensional uniaxial strain impact experiments were modeled using a traditional p-alpha porous material model for the distended polyurethane, which generally captured the motion imparted to the target by the foam. Some of the high-frequency aspects of the data, reflecting the heterogeneous nature of the foam, can be recovered with computations of fully 3-dimensional explicit representations of this porous material.
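    For orientation, the p-alpha approach mentioned above tracks the distension α = ρ_solid/ρ of the porous material as a function of pressure through a crush curve. The sketch below uses one commonly quoted quadratic crush-curve form with placeholder parameter values; it is not the calibration used for this particular foam:

```python
# Sketch of a p-alpha crush curve: alpha(p) = 1 + (alpha0 - 1)*((p_s - p)/(p_s - p_e))**n
# between the elastic limit p_e and the full-compaction pressure p_s. Parameter values
# below are placeholders, not the polyurethane calibration from the paper.
import numpy as np

def p_alpha(p, alpha0=4.5, p_e=1.0e6, p_s=2.0e8, n=2.0):
    """Distension alpha = rho_solid / rho as a function of pressure p (Pa)."""
    p = np.clip(np.asarray(p, dtype=float), p_e, p_s)   # alpha0 below p_e, fully solid above p_s
    return 1.0 + (alpha0 - 1.0) * ((p_s - p) / (p_s - p_e)) ** n

print(p_alpha([0.0, 1.0e7, 1.0e8, 3.0e8]))   # distension at a few pressures
```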

  4. HARP targets pion production cross section and yield measurements. Implications for MiniBooNE neutrino flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickremasinghe, Don Athula Abeyarathna

    2015-07-01

    The prediction of the muon neutrino flux from a 71.0 cm long beryllium target for the MiniBooNE experiment is based on a measured pion production cross section which was taken from a short beryllium target (2.0 cm thick - 5% nuclear interaction length) in the Hadron Production (HARP) experiment at CERN. To verify the extrapolation to our longer target, HARP also measured the pion production from 20.0 cm and 40.0 cm beryllium targets. The measured production yields, d²Nπ±(p, θ)/dp dΩ, on targets of 50% and 100% nuclear interaction length, in the kinematic range of momentum from 0.75 GeV/c to 6.5 GeV/c and angle from 30 mrad to 210 mrad, are presented along with an update of the short-target cross sections. The best-fit extended Sanford-Wang (SW) parameterization of the updated short-beryllium-target π+ production cross section is presented. Yield measurements for all three targets are also compared with the Monte Carlo predictions in the MiniBooNE experiment for different SW parameterizations. Comparisons of the ν_μ flux predictions for the updated SW model are presented.

  5. Impact of Targeted Ocean Observations for Improving Ocean Model Initialization for Coupled Hurricane Forecasting

    NASA Astrophysics Data System (ADS)

    Halliwell, G. R.; Srinivasan, A.; Kourafalou, V. H.; Yang, H.; Le Henaff, M.; Atlas, R. M.

    2012-12-01

    The accuracy of hurricane intensity forecasts produced by coupled forecast models is influenced by errors and biases in SST forecasts produced by the ocean model component and the resulting impact on the enthalpy flux from ocean to atmosphere that powers the storm. Errors and biases in fields used to initialize the ocean model seriously degrade SST forecast accuracy. One strategy for improving ocean model initialization is to design a targeted observing program using airplanes and in-situ devices such as floats and drifters so that assimilation of the additional data substantially reduces errors in the ocean analysis system that provides the initial fields. Given the complexity and expense of obtaining these additional observations, observing system design methods such as OSSEs are attractive for designing efficient observing strategies. A new fraternal-twin ocean OSSE system based on the HYbrid Coordinate Ocean Model (HYCOM) is used to assess the impact of targeted ocean profiles observed by hurricane research aircraft, and also by in-situ float and drifter deployments, on reducing errors in initial ocean fields. A 0.04-degree HYCOM simulation of the Gulf of Mexico is evaluated as the nature run by determining that important ocean circulation features such as the Loop Current and synoptic cyclones and anticyclones are realistically simulated. The data-assimilation system is run on a 0.08-degree HYCOM mesh with a substantially different model configuration than the nature run, and it uses a new Ensemble Kalman Filter (EnKF) algorithm optimized for the ocean model's hybrid vertical coordinates. The OSSE system is evaluated and calibrated by first running Observing System Experiments (OSEs) to evaluate existing observing systems, specifically quantifying the impact of assimilating more than one satellite altimeter, and also the impact of assimilating targeted ocean profiles taken by the NOAA WP-3D hurricane research aircraft in the Gulf of Mexico during the Deepwater
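    To make the assimilation step concrete, the sketch below shows a generic stochastic ensemble Kalman filter analysis update on a toy ensemble. It is meant only to illustrate the kind of update such a system performs; the observation operator, ensemble size, and error levels are invented, and this is not the HYCOM-specific implementation referred to above:

```python
# Generic stochastic EnKF analysis step on a synthetic ensemble (illustrative only).
import numpy as np

def enkf_analysis(X, y, H, r, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; r: observation error std."""
    n_obs, n_ens = y.size, X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)                 # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)              # observed-space anomalies
    P_yy = (HA @ HA.T) / (n_ens - 1) + (r ** 2) * np.eye(n_obs)
    P_xy = (A @ HA.T) / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    Y = y[:, None] + r * rng.standard_normal((n_obs, n_ens))  # perturbed observations
    return X + K @ (Y - HX)

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 20))     # toy 10-variable, 20-member forecast ensemble
H = np.eye(3, 10)                     # observe the first three state variables
y = np.array([1.0, -0.5, 0.2])
Xa = enkf_analysis(X, y, H, r=0.1, rng=rng)
print(Xa.mean(axis=1)[:3])            # analysis mean pulled toward the observations
```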

  6. Modeling human target acquisition in ground-to-air weapon systems

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.

    1982-01-01

    The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. The relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.

  7. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
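    The greedy construction of an input sequence can be illustrated with a toy stand-in: pick, one stimulus at a time, the candidate whose simulated responses best separate a set of representative, data-consistent parameter sets. The model, candidate stimuli, and separation score below are all invented for illustration and do not reproduce the paper's sparse-grid or scenario-tree machinery:

```python
# Toy greedy stimulus selection in the spirit of MBDOE (illustrative stand-in only).
import numpy as np

def simulate(theta, u):
    """Toy single-output model: response of parameter set theta = (k, tau) to input u."""
    k, tau = theta
    return k * u / (tau + u)

def separation(inputs, thetas):
    """Average pairwise disagreement of predicted responses over the chosen inputs."""
    preds = np.array([[simulate(th, u) for u in inputs] for th in thetas])
    return np.mean([np.linalg.norm(a - b)
                    for i, a in enumerate(preds) for b in preds[i + 1:]])

thetas = [(1.0, 0.5), (1.2, 0.4), (0.8, 0.9)]   # representative data-consistent parameter sets
candidates = [0.1, 0.5, 1.0, 2.0, 5.0]          # allowed stimulus magnitudes (invented)
chosen = []
for _ in range(3):                               # greedily design a 3-stimulus experiment
    best = max(candidates, key=lambda u: separation(chosen + [u], thetas))
    chosen.append(best)
    candidates.remove(best)
print("chosen stimuli:", chosen)
```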

  8. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  9. High affinity ligands from in vitro selection: Complex targets

    PubMed Central

    Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry

    1998-01-01

    Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188

  10. Optical simulation of flying targets using physically based renderer

    NASA Astrophysics Data System (ADS)

    Cheng, Ye; Zheng, Quan; Peng, Junkai; Lv, Pin; Zheng, Changwen

    2018-02-01

    The simulation of aerial flying targets is widely needed in many fields. This paper proposes a physically based method for the optical simulation of flying targets. In the first step, three-dimensional target models are built and the motion speed and direction are defined. Next, the material appearance of the target is simulated. Then the illumination conditions are defined. After all definitions are given, all settings are encoded in a description file. Finally, simulated results are generated by Monte Carlo ray tracing in a physically based renderer. Experiments show that this method is able to simulate materials, lighting and motion blur for flying targets, and it can generate convincing and high-quality simulation results.

  11. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    ERIC Educational Resources Information Center

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…

  12. Identification of control targets in Boolean molecular network models via computational algebra.

    PubMed

    Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard

    2016-09-23

    Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data are available online and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks.
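    The algebraic idea can be shown on a toy network: each Boolean rule is rewritten as a polynomial over GF(2) (x AND y -> xy, x OR y -> x + y + xy, NOT x -> 1 + x), and a candidate intervention (pinning one node to a constant) is kept if it makes the desired state a fixed point. The 3-node rules below are invented for illustration and are not the p53-mdm2 or T-LGL models; the paper's actual method solves the resulting polynomial system with computational algebra rather than by enumeration:

```python
# Toy GF(2) encoding of a 3-node Boolean network and a brute-force check of pinning controls.
OR  = lambda x, y: (x + y + x * y) % 2     # polynomial form of x OR y over GF(2)
AND = lambda x, y: (x * y) % 2             # x AND y
NOT = lambda x: (1 + x) % 2                # NOT x

def update(state, pin=None):
    """One synchronous step; pin = (node index, constant value) models the intervention."""
    s = list(state)
    if pin is not None:
        s[pin[0]] = pin[1]
    x1, x2, x3 = s
    nxt = [OR(x1, x2), AND(x1, NOT(x3)), x2]   # invented update rules
    if pin is not None:
        nxt[pin[0]] = pin[1]
    return tuple(nxt)

desired = (1, 0, 0)                            # desired steady state
candidates = [(i, v) for i in range(3) for v in (0, 1) if desired[i] == v]
good = [c for c in candidates if update(desired, pin=c) == desired]
print("single-node pinnings stabilising the desired state:", good)   # [(1, 0)]
```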

  13. Generation of calibrated tungsten target x-ray spectra: modified TBC model.

    PubMed

    Costa, Paulo R; Nersissian, Denise Y; Salvador, Fernanda C; Rio, Patrícia B; Caldas, Linda V E

    2007-01-01

    In spite of the recent advances in the experimental detection of x-ray spectra, theoretical or semi-empirical approaches for determining realistic x-ray spectra in the range of diagnostic energies are important tools for planning experiments, estimating radiation doses in patients, and formulating radiation shielding models. The TBC model is one of the most useful approaches since it allows for straightforward computer implementation, and it is able to accurately reproduce the spectra generated by tungsten target x-ray tubes. However, as originally presented, the TBC model fails in situations where the determination of x-ray spectra produced by an arbitrary waveform or the calculation of realistic values of air kerma for a specific x-ray system is desired. In the present work, the authors revisited the assumptions used in the original TBC paper. They proposed a complementary formulation for taking into account the waveform and the representation of the calculated spectra in a dosimetric quantity. The performance of the proposed model was evaluated by comparing values of air kerma and first and second half-value layers from calculated and measured spectra obtained using different voltages and filtrations. For the output, the difference between experimental and calculated data was better than 5.2%. First and second half-value layers presented differences of 23.8% and 25.5% in the worst case. The performance of the model in accurately calculating these data was better for lower voltage values. Comparisons were also performed with spectral data measured using a CZT detector. Another test was performed by evaluating the model for a waveform distinct from a constant potential. In all cases the model results can be considered a good representation of the measured data. The results from the modifications to the TBC model introduced in the present work reinforce the value of the TBC model for application of quantitative evaluations in radiation
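    Extracting first and second half-value layers from a computed spectrum is itself a small numerical exercise: attenuate the kerma-weighted spectrum through increasing filter thicknesses and find the thicknesses at which the air kerma is halved and quartered. The sketch below uses made-up spectrum, attenuation, and energy-absorption values purely to show the bookkeeping; it does not use tabulated physics data:

```python
# Sketch of HVL extraction from a (placeholder) polychromatic spectrum.
import numpy as np
from scipy.optimize import brentq

E        = np.array([30., 40., 50., 60., 70., 80.])          # keV bins (placeholder)
phi      = np.array([0.5, 1.0, 1.4, 1.2, 0.8, 0.3])          # relative fluence (placeholder)
mu_al    = np.array([3.0, 1.5, 0.9, 0.7, 0.55, 0.45])        # 1/cm, Al (placeholder)
muen_air = np.array([0.15, 0.07, 0.04, 0.03, 0.027, 0.024])  # cm^2/g, air (placeholder)

def air_kerma(t_cm):
    """Relative air kerma after t_cm of aluminium filtration."""
    return np.sum(phi * E * muen_air * np.exp(-mu_al * t_cm))

k0 = air_kerma(0.0)
hvl1 = brentq(lambda t: air_kerma(t) - 0.5 * k0, 0.0, 20.0)          # first HVL
hvl2 = brentq(lambda t: air_kerma(t) - 0.25 * k0, 0.0, 40.0) - hvl1  # second HVL
print(f"HVL1 = {hvl1:.2f} cm Al, HVL2 = {hvl2:.2f} cm Al")
```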

  14. Asteroid collisions: Target size effects and resultant velocity distributions

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.

    1993-01-01

    To study the dynamic fragmentation of rock to simulate asteroid collisions, we use a 2-D, continuum damage numerical hydrocode which models two-body impacts. This hydrocode monitors stress wave propagation and interaction within the target body, and includes a physical model for the formation and growth of cracks in rock. With this algorithm we have successfully reproduced fragment size distributions and mean ejecta speeds from laboratory impact experiments using basalt, and weak and strong mortar as target materials. Using the hydrocode, we have determined that the energy needed to fracture a body has a much stronger dependence on target size than predicted from most scaling theories. In addition, velocity distributions obtained indicate that mean ejecta speeds resulting from large-body collisions do not exceed escape velocities.

  15. Attentional Control via Parallel Target-Templates in Dual-Target Search

    PubMed Central

    Barrett, Doug J. K.; Zobay, Oliver

    2014-01-01

    Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

  16. Tropical forest response to elevated CO2: Model-experiment integration at the AmazonFACE site.

    NASA Astrophysics Data System (ADS)

    Frankenberg, C.; Berry, J. A.; Guanter, L.; Joiner, J.

    2014-12-01

    The terrestrial biosphere's response to current and future elevated atmospheric carbon dioxide (eCO2) is a large source of uncertainty in future projections of the C cycle, climate and ecosystem functioning. In particular, the sensitivity of tropical rainforest ecosystems to eCO2 is largely unknown even though the importance of tropical forests for biodiversity, carbon storage and regional and global climate feedbacks is unambiguously recognized. The AmazonFACE (Free-Air Carbon Enrichment) project will be the first ecosystem-scale eCO2 experiment undertaken in the tropics, as well as the first to be undertaken in a mature forest. AmazonFACE provides the opportunity to integrate ecosystem modeling with experimental observations right from the beginning of the experiment, harboring a two-way exchange, i.e. models provide hypotheses to be tested, and observations deliver the crucial data to test and improve ecosystem models. We present a preliminary exploration of observed and expected process responses to eCO2 at the AmazonFACE site from the dynamic global vegetation model LPJ-GUESS, highlighting opportunities and pitfalls for model integration of tropical FACE experiments. The preliminary analysis provides baseline hypotheses, which are to be further developed with a follow-up multiple-model intercomparison. The analysis builds on the recently undertaken FACE-MDS (Model-Data Synthesis) project, which was applied to two temperate FACE experiments and goes beyond the traditional focus on comparing modeled end-target output. The approach has proven successful in identifying well (and less well) represented processes in models, which are separated into six clusters here: (1) Carbon fluxes, (2) Carbon pools, (3) Energy balance, (4) Hydrology, (5) Nutrient cycling, and (6) Population dynamics. Simulation performance of observed conditions at the AmazonFACE site (e.g., from the Manaus K34 eddy flux tower) will highlight process-based model deficiencies, and aid the separation

  17. Tropical forest response to elevated CO2: Model-experiment integration at the AmazonFACE site.

    NASA Astrophysics Data System (ADS)

    Fleischer, K.

    2015-12-01

    The terrestrial biosphere's response to current and future elevated atmospheric carbon dioxide (eCO2) is a large source of uncertainty in future projections of the C cycle, climate and ecosystem functioning. In particular, the sensitivity of tropical rainforest ecosystems to eCO2 is largely unknown even though the importance of tropical forests for biodiversity, carbon storage and regional and global climate feedbacks is unambiguously recognized. The AmazonFACE (Free-Air Carbon Enrichment) project will be the first ecosystem-scale eCO2 experiment undertaken in the tropics, as well as the first to be undertaken in a mature forest. AmazonFACE provides the opportunity to integrate ecosystem modeling with experimental observations right from the beginning of the experiment, harboring a two-way exchange, i.e. models provide hypotheses to be tested, and observations deliver the crucial data to test and improve ecosystem models. We present a preliminary exploration of observed and expected process responses to eCO2 at the AmazonFACE site from the dynamic global vegetation model LPJ-GUESS, highlighting opportunities and pitfalls for model integration of tropical FACE experiments. The preliminary analysis provides baseline hypotheses, which are to be further developed with a follow-up multiple-model intercomparison. The analysis builds on the recently undertaken FACE-MDS (Model-Data Synthesis) project, which was applied to two temperate FACE experiments and goes beyond the traditional focus on comparing modeled end-target output. The approach has proven successful in identifying well (and less well) represented processes in models, which are separated into six clusters here: (1) Carbon fluxes, (2) Carbon pools, (3) Energy balance, (4) Hydrology, (5) Nutrient cycling, and (6) Population dynamics. Simulation performance of observed conditions at the AmazonFACE site (e.g., from the Manaus K34 eddy flux tower) will highlight process-based model deficiencies, and aid the separation

  18. Polarized Solid State Target

    NASA Astrophysics Data System (ADS)

    Dutz, Hartmut; Goertz, Stefan; Meyer, Werner

    2017-01-01

    The polarized solid state target is an indispensable experimental tool to study single and double polarization observables at low-intensity particle beams such as tagged photons. It was one of the major components of the Crystal-Barrel experiment at ELSA. Besides the operation of the 'CB frozen spin target' within the experimental program of the Crystal-Barrel collaboration, both collaborating groups of the D1 project, the polarized target group of the Ruhr Universität Bochum and the Bonn polarized target group, have made significant developments in the field of polarized targets within the CRC16. The Bonn polarized target group has focused its work on the development of technically challenging polarized solid target systems towards the so-called '4π continuous mode polarized target', to operate them in combination with 4π particle-detection systems. In parallel, the Bochum group has developed various highly polarized deuterated target materials and high-precision NMR systems, which are meanwhile also used for polarization experiments at CERN, JLab and MAMI.

  19. Drug Synergy Screen and Network Modeling in Dedifferentiated Liposarcoma Identifies CDK4 and IGF1R as Synergistic Drug Targets

    PubMed Central

    Miller, Martin L.; Molinelli, Evan J.; Nair, Jayasree S.; Sheikh, Tahir; Samy, Rita; Jing, Xiaohong; He, Qin; Korkut, Anil; Crago, Aimee M.; Singer, Samuel; Schwartz, Gary K.; Sander, Chris

    2014-01-01

    Dedifferentiated liposarcoma (DDLS) is a rare but aggressive cancer with high recurrence and low response rates to targeted therapies. Increasing treatment efficacy may require combinations of targeted agents that counteract the effects of multiple abnormalities. To identify a possible multicomponent therapy, we performed a combinatorial drug screen in a DDLS-derived cell line and identified cyclin-dependent kinase 4 (CDK4) and insulin-like growth factor 1 receptor (IGF1R) as synergistic drug targets. We measured the phosphorylation of multiple proteins and cell viability in response to systematic drug combinations and derived computational models of the signaling network. These models predict that the observed synergy in reducing cell viability with CDK4 and IGF1R inhibitors depends on the activity of the AKT pathway. Experiments confirmed that combined inhibition of CDK4 and IGF1R cooperatively suppresses the activation of proteins within the AKT pathway. Consistent with these findings, synergistic reductions in cell viability were also found when combining CDK4 inhibition with inhibition of either AKT or epidermal growth factor receptor (EGFR), another receptor similar to IGF1R that activates AKT. Thus, network models derived from context-specific proteomic measurements of systematically perturbed cancer cells may reveal cancer-specific signaling mechanisms and aid in the design of effective combination therapies. PMID:24065146

  20. Supervised target detection in hyperspectral images using one-class Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah

    2016-05-01

    A novel hyperspectral target detection technique based on the Fukunaga-Koontz transform (FKT) is presented. FKT offers significant properties for feature selection and ordering. However, it can only be used to solve multi-pattern classification problems. Target detection may be considered as a two-class classification problem, i.e., target versus background clutter. Nevertheless, background clutter typically contains different types of materials. For this reason, target detection techniques differ from classification methods in the way they model the clutter. To avoid modeling the background clutter, we have improved the one-class FKT (OC-FKT) for target detection. The statistical properties of the target training samples are used to define a tunnel-like boundary of the target class. Non-target samples are then created synthetically so as to lie outside of this boundary. Thus, a limited number of target samples is sufficient for training the FKT. The hyperspectral image experiments confirm that the proposed OC-FKT technique provides an effective means for target detection.
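    For orientation, the sketch below shows the standard two-class FKT construction: whiten the sum of the class correlation matrices, then eigendecompose one whitened class matrix so that the shared eigenvalues rank directions by which class dominates them. It is a generic illustration on random stand-in data, not the one-class variant or tunnel boundary developed in the paper:

```python
# Generic two-class Fukunaga-Koontz transform on synthetic "hyperspectral" samples.
import numpy as np

rng = np.random.default_rng(0)
target     = rng.standard_normal((200, 10)) @ np.diag(np.linspace(2.0, 0.5, 10))
background = rng.standard_normal((500, 10))

R1 = target.T @ target / len(target)            # class correlation matrices
R2 = background.T @ background / len(background)

w, V = np.linalg.eigh(R1 + R2)
P = V @ np.diag(w ** -0.5) @ V.T                # whitening transform for R1 + R2

lam, U = np.linalg.eigh(P @ R1 @ P.T)           # shared FKT eigenbasis
order = np.argsort(lam)[::-1]
print("eigenvalues (near 1 -> target-dominant, near 0 -> background-dominant):")
print(np.round(lam[order], 3))
```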

  1. Universal Versus Targeted Screening for Lynch Syndrome: Comparing Ascertainment and Costs Based on Clinical Experience.

    PubMed

    Erten, Mujde Z; Fernandez, Luca P; Ng, Hank K; McKinnon, Wendy C; Heald, Brandie; Koliba, Christopher J; Greenblatt, Marc S

    2016-10-01

    Strategies to screen colorectal cancers (CRCs) for Lynch syndrome are evolving rapidly; the optimal strategy remains uncertain. We compared targeted versus universal screening of CRCs for Lynch syndrome. In 2010-2011, we employed targeted screening (age < 60 and/or Bethesda criteria). From 2012 to 2014, we screened all CRCs. Immunohistochemistry for the four mismatch repair proteins was done in all cases, followed by other diagnostic studies as indicated. We modeled the diagnostic costs of detecting Lynch syndrome and estimated the 5-year costs of preventing CRC by colonoscopy screening, using a system dynamics model. Using targeted screening, 51/175 (29 %) cancers fit the criteria and were tested by immunohistochemistry; 15/51 (29 %, or 8.6 % of all CRCs) showed suspicious loss of ≥1 mismatch repair protein. Germline mismatch repair gene mutations were found in 4/4 cases sequenced (11 suspected cases did not have germline testing). Using universal screening, 17/292 (5.8 %) screened cancers had abnormal immunohistochemistry suspicious for Lynch syndrome. Germline mismatch repair mutations were found in only 3/10 cases sequenced (7 suspected cases did not have germline testing). The mean cost to identify Lynch syndrome probands was ~$23,333/case for targeted screening and ~$175,916/case for universal screening at our institution. Estimated costs to identify and screen probands and relatives were: targeted, $9798/case and universal, $38,452/case. In real-world Lynch syndrome management, incomplete clinical follow-up was the major barrier to genetic testing. Targeted screening costs 2- to 7.5-fold less than universal screening and rarely misses Lynch syndrome cases. Future changes in testing costs will likely change the optimal algorithm.

  2. Designing Experiments to Discriminate Families of Logic Models.

    PubMed

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.

  3. Nonthermal ablation of deep brain targets: A simulation study on a large animal model

    PubMed Central

    Top, Can Barış; White, P. Jason; McDannold, Nathan J.

    2016-01-01

    Purpose: Thermal ablation with transcranial MRI-guided focused ultrasound (FUS) is currently limited to central brain targets because of heating and other beam effects caused by the presence of the skull. Recently, it was shown that it is possible to ablate tissues without depositing thermal energy by driving intravenously administered microbubbles to inertial cavitation using low-duty-cycle burst sonications. A recent study demonstrated that this ablation method could ablate tissue volumes near the skull base in nonhuman primates without thermally damaging the nearby bone. However, blood–brain barrier disruption was observed in the prefocal region, and in some cases, this region contained small areas of tissue damage. The objective of this study was to analyze the experimental model with simulations and to interpret the cause of these effects. Methods: The authors simulated prior experiments where nonthermal ablation was performed in the brain in anesthetized rhesus macaques using a 220 kHz clinical prototype transcranial MRI-guided FUS system. Low-duty-cycle sonications were applied at deep brain targets with the ultrasound contrast agent Definity. For simulations, a 3D pseudospectral finite difference time domain tool was used. The effects of shear mode conversion, focal steering, skull aberrations, nonlinear propagation, and the presence of the skull base on the pressure field were investigated using acoustic and elastic wave propagation models. Results: The simulation results were in agreement with the experimental findings in the prefocal region. In the postfocal region, however, side lobes were predicted by the simulations, but no effects were evident in the experiments. The main beam was not affected by the different simulated scenarios except for a shift of about 1 mm in peak position due to skull aberrations. However, the authors observed differences in the volume, amplitude, and distribution of the side lobes. In the experiments, a single element passive

  4. Hand-eye calibration using a target registration error model.

    PubMed

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  5. Hand–eye calibration using a target registration error model

    PubMed Central

    Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M.

    2017-01-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera. PMID:29184657

  6. Neuroinflammatory targets and treatments for epilepsy validated in experimental models.

    PubMed

    Aronica, Eleonora; Bauer, Sebastian; Bozzi, Yuri; Caleo, Matteo; Dingledine, Raymond; Gorter, Jan A; Henshall, David C; Kaufer, Daniela; Koh, Sookyong; Löscher, Wolfgang; Louboutin, Jean-Pierre; Mishto, Michele; Norwood, Braxton A; Palma, Eleonora; Poulter, Michael O; Terrone, Gaetano; Vezzani, Annamaria; Kaminski, Rafal M

    2017-07-01

    A large body of evidence that has accumulated over the past decade strongly supports the role of inflammation in the pathophysiology of human epilepsy. Specific inflammatory molecules and pathways have been identified that influence various pathologic outcomes in different experimental models of epilepsy. Most importantly, the same inflammatory pathways have also been found in surgically resected brain tissue from patients with treatment-resistant epilepsy. New antiseizure therapies may be derived from these novel potential targets. An essential and crucial question is whether targeting these molecules and pathways may result in anti-ictogenesis, antiepileptogenesis, and/or disease-modification effects. Therefore, preclinical testing in models mimicking relevant aspects of epileptogenesis is needed to guide integrated experimental and clinical trial designs. We discuss the most recent preclinical proof-of-concept studies validating a number of therapeutic approaches against inflammatory mechanisms in animal models that could represent novel avenues for drug development in epilepsy. Finally, we suggest future directions to accelerate preclinical to clinical translation of these recent discoveries. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  7. Models for discovery of targeted therapy in genetic epileptic encephalopathies.

    PubMed

    Maljevic, Snezana; Reid, Christopher A; Petrou, Steven

    2017-10-01

    Epileptic encephalopathies are severe disorders emerging in the first days to years of life that commonly include refractory seizures, various types of movement disorders, and different levels of developmental delay. In recent years, many de novo occurring variants have been identified in individuals with these devastating disorders. To unravel disease mechanisms, the functional impact of detected variants associated with epileptic encephalopathies is investigated in a range of cellular and animal models. This review addresses efforts to advance and use such models to identify specific molecular and cellular targets for the development of novel therapies. We focus on ion channels as the best-studied group of epilepsy genes. Given the clinical and genetic heterogeneity of epileptic encephalopathy disorders, experimental models that can reflect this complexity are critical for the development of disease mechanisms-based targeted therapy. The convergence of technological advances in gene sequencing, stem cell biology, genome editing, and high throughput functional screening together with massive unmet clinical needs provides unprecedented opportunities and imperatives for precision medicine in epileptic encephalopathies. © 2017 International Society for Neurochemistry.

  8. Hitting the TARGET? A Case Study of the Experiences of Teachers in Steel Mill Learning Centers.

    ERIC Educational Resources Information Center

    Rose, Amy D.; Jeris, Laurel; Smith, Robert

    Part of a larger study on the experience of teaching in the steel mill learning environment was an inquiry focused on professional development. Teachers and coordinators were all members of the Teachers Action Research Group for Educational Technology (TARGET), a group of adult educators interested in improving learning and teaching in career…

  9. Fractal active contour model for segmenting the boundary of man-made target in nature scenes

    NASA Astrophysics Data System (ADS)

    Li, Min; Tang, Yandong; Wang, Lidi; Shi, Zelin

    2006-02-01

    In this paper, a novel geometric active contour model based on the fractal dimension feature to extract the boundary of man-made targets in natural scenes is presented. In order to suppress the natural clutter, an adaptive weighting function is defined using the fractal dimension feature. Then the weighting function is introduced into the geodesic active contour model to detect the boundary of the man-made target. A curve driven by the proposed model can evolve gradually from its initial position to the boundary of the man-made target without being disturbed by natural clutter, even if the initial curve is far away from the true boundary. Experimental results validate the effectiveness and feasibility of the model.
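    The fractal dimension feature itself has to be estimated from the image; one common estimator is box counting, sketched below on a synthetic binary image. This is offered only as a plausible way such a feature could be computed, since the abstract does not specify the estimator used in the paper:

```python
# Box-counting estimate of fractal dimension for a 2-D binary image (illustrative estimator).
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """img: 2-D boolean array (True on the structure of interest)."""
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # slope of log N(s) versus log(1/s) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(round(box_counting_dimension(img), 2))
```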

  10. Targeting smokers with empathy appeal antismoking public service announcements: a field experiment.

    PubMed

    Shen, Lijiang

    2015-01-01

    A field experiment study (N = 189) was conducted to investigate the effectiveness of empathy appeal antismoking messages and their potential advantage over fear appeal messages. Data from 12 antismoking public service announcements showed that (a) smokers resist antismoking messages and (b) overall, empathy appeal messages were as effective as fear appeal messages. There was also evidence for moderators. First, empathy messages were more effective for women than for men. Second, fear appeal messages were more effective than empathy messages for occasional smokers. Third, empathy messages were more effective than fear appeal messages for regular smokers. Implications for audience segmentation and message targeting in public health antismoking efforts are discussed.

  11. Investigating the effect of tumor vascularization on magnetic targeting in vivo using retrospective design of experiment.

    PubMed

    Mei, Kuo-Ching; Bai, Jie; Lorrio, Silvia; Wang, Julie Tzu-Wen; Al-Jamal, Khuloud T

    2016-11-01

    Nanocarriers take advantage of the enhanced permeability and retention (EPR) effect to accumulate passively in solid tumors. Magnetic targeting has been shown to further enhance tumor accumulation in response to a magnetic field gradient. It is widely known that passive accumulation of nanocarriers varies greatly between tumor tissues with different degrees of vascularization, and it is hypothesized that magnetic targeting is likely to be influenced by such factors. In this work, magnetic targeting is assessed in a range of subcutaneously implanted murine tumors, namely, colon (CT26), breast (4T1), lung (Lewis lung carcinoma) cancer and melanoma (B16F10). Passively and magnetically driven tumor accumulation of the radiolabeled polymeric magnetic nanocapsules is assessed with gamma counting. The influence of tumor vasculature, namely, the tumor microvessel density, permeability and diameter, on passive and magnetic tumor targeting is assessed with the aid of a retrospective design of experiment (DoE) approach. It is clear that the three tumor vascular parameters contribute greatly to both passive and magnetically targeted tumor accumulation but play different roles when nanocarriers are targeted to the tumor with different strategies. It is concluded that tumor permeability is a rate-limiting factor in both targeting modes, while diameter and microvessel density influence passive and magnetic tumor targeting, respectively. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  12. Production of 9Be targets for nuclear physics experiments

    NASA Astrophysics Data System (ADS)

    Marín-Lámbarri, D. J.; Kheswa, N. Y.

    2018-05-01

    Self-supporting beryllium (9Be) targets were produced by a mechanical rolling method in which a double-pack technique was implemented. The targets were used for the investigation of the low-lying excitation-energy region in 9B through the 9Be(3He,t)9B reaction at the K600 spectrometer at the iThemba LABS facility. Beryllium is a semi-metal, which makes it hard to deform by rolling or to vacuum-evaporate as a self-supporting target. Therefore, heat treatment was needed to avoid brittleness and breakage of the material during the rolling process. A description is given of how the beryllium targets were manufactured.

  13. Internal model of gravity for hand interception: parametric adaptation to zero-gravity visual targets on Earth.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-08-01

    Internal model is a neural mechanism that mimics the dynamics of an object for sensory motor or cognitive functions. Recent research focuses on the issue of whether multiple internal models are learned and switched to cope with a variety of conditions, or single general models are adapted by tuning the parameters. Here we addressed this issue by investigating how the manual interception of a moving target changes with changes in the visual environment. In our paradigm, a virtual target moves vertically downward on a screen with different laws of motion. Subjects are asked to punch a hidden ball that arrives in synchrony with the visual target. By using several different protocols, we systematically found that subjects do not develop a new internal model appropriate for constant-speed targets, but they use the default gravity model and reduce the central processing time. The results imply that adaptation to zero-gravity targets involves a compression of temporal processing through the cortical and subcortical regions interconnected with the vestibular cortex, which has previously been shown to be the site of storage of the internal model of gravity.
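    A quick back-of-the-envelope calculation shows why the distinction matters: for the same drop distance, a gravity-accelerated target arrives much earlier than a constant-speed one, so timing driven by a default gravity model would be systematically early for zero-gravity displays. The numbers below are purely illustrative, not those used in the experiments:

```python
# Arrival-time comparison for a target dropping a distance d, either under gravity
# (d = v0*t + 0.5*g*t**2) or at the constant speed v0 shown by a zero-gravity display.
import math

d, v0, g = 1.0, 0.7, 9.81     # drop distance (m), initial speed (m/s), gravity (m/s^2)

# accelerated target: positive root of 0.5*g*t**2 + v0*t - d = 0
t_grav = (-v0 + math.sqrt(v0**2 + 2 * g * d)) / g
# constant-speed target covering the same distance at v0
t_const = d / v0

print(f"arrival with gravity: {t_grav*1000:.0f} ms, at constant speed: {t_const*1000:.0f} ms")
```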

  14. Tensionless Strings and Supersymmetric Sigma Models: Aspects of the Target Space Geometry

    NASA Astrophysics Data System (ADS)

    Bredthauer, Andreas

    2007-01-01

    In this thesis, two aspects of string theory are discussed, tensionless strings and supersymmetric sigma models. The equivalent to a massless particle in string theory is a tensionless string. Even almost 30 years after it was first mentioned, it is still quite poorly understood. We discuss how tensionless strings give rise to exact solutions to supergravity and solve closed tensionless string theory in the ten dimensional maximally supersymmetric plane wave background, a contraction of AdS(5)xS(5) where tensionless strings are of great interest due to their proposed relation to higher spin gauge theory via the AdS/CFT correspondence. For a sigma model, the amount of supersymmetry on its worldsheet restricts the geometry of the target space. For N=(2,2) supersymmetry, for example, the target space has to be bi-hermitian. Recently, with generalized complex geometry, a new mathematical framework was developed that is especially suited to discuss the target space geometry of sigma models in a Hamiltonian formulation. Bi-hermitian geometry is so-called generalized Kaehler geometry but the relation is involved. We discuss various amounts of supersymmetry in phase space and show that this relation can be established by considering the equivalence between the Hamilton and Lagrange formulation of the sigma model. In the study of generalized supersymmetric sigma models, we find objects that favor a geometrical interpretation beyond generalized complex geometry.

  15. DISSECTING OCD CIRCUITS: FROM ANIMAL MODELS TO TARGETED TREATMENTS.

    PubMed

    Ahmari, Susanne E; Dougherty, Darin D

    2015-08-01

    Obsessive-compulsive disorder (OCD) is a chronic, severe mental illness with up to 2-3% prevalence worldwide. In fact, OCD has been classified as one of the world's 10 leading causes of illness-related disability according to the World Health Organization, largely because of the chronic nature of disabling symptoms [1]. Despite the severity and high prevalence of this chronic and disabling disorder, there is still relatively limited understanding of its pathophysiology. However, this is now rapidly changing due to the development of powerful technologies that can be used to dissect the neural circuits underlying pathologic behaviors. In this article, we describe recent technical advances that have allowed neuroscientists to start identifying the circuits underlying complex repetitive behaviors using animal model systems. In addition, we review current surgical and stimulation-based treatments for OCD that target circuit dysfunction. Finally, we discuss how findings from animal models may be applied in the clinical arena to help inform and refine targeted brain stimulation-based treatment approaches. © 2015 Wiley Periodicals, Inc.

  16. [Mathematical modeling: an essential tool for the study of therapeutic targeting in solid tumors].

    PubMed

    Saidak, Zuzana; Giacobbi, Anne-Sophie; Morisse, Mony Chenda; Mammeri, Youcef; Galmiche, Antoine

    2017-12-01

    Recent progress in biology has made the study of the medical treatment of cancer more effective, but it has also revealed the large complexity of carcinogenesis and cell signaling. For many types of cancer, several therapeutic targets are known and in some cases drugs against these targets exist. Unfortunately, the target proteins often work in networks, resulting in functional adaptation and the development of resilience/resistance to medical treatment. The use of mathematical modeling makes it possible to carry out system-level analyses for improved study of therapeutic targeting in solid tumours. We present the main types of mathematical models used in cancer research and we provide examples illustrating the relevance of these approaches in molecular oncobiology. © 2017 médecine/sciences – Inserm.

  17. Novel Modeling of Combinatorial miRNA Targeting Identifies SNP with Potential Role in Bone Density

    PubMed Central

    Coronnello, Claudia; Hartmaier, Ryan; Arora, Arshi; Huleihel, Luai; Pandit, Kusum V.; Bais, Abha S.; Butterworth, Michael; Kaminski, Naftali; Stormo, Gary D.; Oesterreich, Steffi; Benos, Panayiotis V.

    2012-01-01

    MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies showed that existing algorithms suffer from a high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy, and we show that it improves the performance of two popular single-miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models were considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients, ComiR is expected to become an essential tool for mi
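    To make the thermodynamic idea concrete, one can imagine a Fermi-Dirac-style occupancy in which a site's occupancy rises sigmoidally with binding strength and is shifted by miRNA abundance. The functional form and parameter values below are illustrative assumptions for this sketch only, not the calibrated model described in the paper:

```python
# Illustrative Fermi-Dirac-style occupancy of a miRNA binding site (assumed form, toy numbers).
import numpy as np

def occupancy(delta_g, mirna_expr, mu0=-12.0, kT=0.6):
    """delta_g: site binding energy (kcal/mol, more negative = stronger binding);
    mirna_expr: relative miRNA abundance (> 0)."""
    mu = mu0 + kT * np.log(mirna_expr)            # abundance shifts the effective potential
    return 1.0 / (1.0 + np.exp((delta_g - mu) / kT))

for dg in (-16.0, -12.0, -8.0):
    low = float(occupancy(dg, mirna_expr=1.0))    # occupancy at baseline expression
    high = float(occupancy(dg, mirna_expr=10.0))  # occupancy at 10x expression
    print(f"delta_g = {dg:5.1f}: occupancy {low:.3f} -> {high:.3f}")
```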

  18. MRPrimerW: a tool for rapid design of valid high-quality primers for multiple target qPCR experiments

    PubMed Central

    Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo

    2016-01-01

    Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests and lacking support for primer ranking, TaqMan probes, and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes, and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. PMID:27154272

  19. Polarized targets in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cates, G.D. Jr.

    1994-12-01

    Various approaches are discussed for producing polarized nuclear targets for high energy physics experiments. As a unifying theme, examples are drawn from experiments to measure spin dependent structure functions of nucleons in deep inelastic scattering. This single physics goal has, over roughly two decades, been a driving force in advances in target technology. Actual or planned approaches have included solid targets polarized by dynamic nuclear polarization (DNP), several types of internal targets for use in storage rings, and gaseous ³He targets polarized by spin-exchange optical pumping. This last approach is the type of target adopted for SLAC E-142, an experiment to measure the spin structure function of the neutron, and is described in detail.

  20. Microwave scattering models and basic experiments

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.

    1989-01-01

    Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.

  1. Tsunami-induced morphological change of a coastal lake: comparing hydraulic experiment with numerical modeling

    NASA Astrophysics Data System (ADS)

    Sugawara, D.; Imai, K.; Mitobe, Y.; Takahashi, T.

    2016-12-01

    Coastal lakes are one of the promising environments for identifying deposits of past tsunamis, and such deposits have been an important key to understanding the recurrence of tsunami events. In contrast to tsunami deposits on coastal plains, however, the relationship between deposit geometry and tsunami hydrodynamic character in coastal lakes has been poorly understood. Flume experiments and numerical modeling are important means of clarifying this relationship. In this study, data from a series of flume experiments were compared with simulations by an existing tsunami sediment transport model to examine the applicability of the numerical model for tsunami-induced morphological change in a coastal lake. A coastal lake with a non-erodible beach ridge was modeled as the target geomorphology. The ridge separates the lake from the offshore part of the flume, and the lake bottom was filled with sand. The tsunami bore was generated by a dam-break flow, which is capable of generating a maximum near-bed flow speed of 2.5 m/s. Test runs with varying magnitude of the bore demonstrated that the duration of tsunami overflow controls the scouring depth of the lake bottom behind the ridge. The maximum scouring depth reached up to 7 cm, and sand deposition occurred mainly in the seaward half of the lake. A conventional depth-averaged tsunami hydrodynamic model coupled with the sediment transport model was used to compare the simulation and experimental results. In the simulation, the scouring depth behind the ridge reached up to 6 cm. In addition, the width of the scouring was consistent between the simulation and experiment. However, sand deposition occurred mainly in a zone much farther from the ridge, showing a considerable deviation from the experimental results. This may be associated with the lack of model capability to resolve some important physics, such as vortex generation behind the ridge and shoreward migration of the hydraulic jump. In this presentation, the results from the flume experiment and

  2. Fixed target experiments at the Fermilab Tevatron

    DOE PAGES

    Gutierrez, Gaston; Reyes, Marco A.

    2014-11-10

    This paper presents a review of the study of Exclusive Central Production at a Center of Mass energy of √s = 40 GeV at the Fermilab Fixed Target program. In all reactions reviewed in this paper, protons with an energy of 800 GeV were extracted from the Tevatron accelerator at Fermilab and directed to a Liquid Hydrogen target. The states reviewed include π⁺π⁻, K⁰ₛK⁰ₛ, K⁰ₛK±π∓, φφ and D*±. Partial Wave Analysis results will be presented on the light states, but only the cross-section will be reviewed in the diffractive production of D*±.

  3. Experiments with radioactive target samples at FRANZ

    NASA Astrophysics Data System (ADS)

    Sonnabend, K.; Altstadt, S.; Beinrucker, C.; Berger, M.; Endres, A.; Fiebiger, S.; Gerbig, J.; Glorius, J.; Göbel, K.; Heftrich, T.; Hinrichs, O.; Koloczek, A.; Lazarus, A.; Lederer, C.; Lier, A.; Mei, B.; Meusel, O.; Mevius, E.; Ostermöller, J.; Plag, R.; Pohl, M.; Reifarth, R.; Schmidt, S.; Slavkovská, Z.; Thomas, B.; Thomas, T.; Weigand, M.; Wolf, C.

    2016-01-01

    The FRANZ facility is currently under construction at Goethe Universität Frankfurt a.M., Germany. It is designed to produce the world's highest neutron intensities in the astrophysically relevant energy range between 10 keV and 1 MeV and consists of a high-intensity proton linac providing energies close to the threshold of the ⁷Li(p,n) reaction at Ep = 1880 keV. The high intensities of both the proton and the neutron beam allow the investigation of reactions of unstable target isotopes since the needed amount of target material is significantly reduced. We will present two exemplary reactions relevant for the s process and the nucleosynthesis of p nuclei, respectively.

  4. Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.

    PubMed

    Cao, Xiang; Zhu, Daqi; Yang, Simon X

    2016-11-01

    Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicle (multi-AUV) exploration. This paper focuses on an effective strategy for multi-AUV target search in 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information about the environment from sonar data to build a grid map of the underwater environment. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search paths to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static target search, dynamic target search, and cases in which one or several AUVs break down in 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multiple AUVs to accomplish the search for multiple targets with higher efficiency and adaptability than other algorithms.
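
    As an illustration of the neural activity landscape summarized above, the following minimal 2-D sketch (the paper works in 3-D) uses a shunting neurodynamics grid in which the target injects positive input, obstacles inject negative input, and only positive activity spreads laterally; a steepest-ascent rule then picks the next cell. The grid, the 2-D reduction, and all parameter values are assumptions for illustration, not the authors' implementation.

      import numpy as np

      # Minimal 2-D shunting neurodynamics sketch: the target attracts globally
      # because only excitatory activity propagates, while obstacles repel locally.
      A, B, D = 10.0, 1.0, 1.0      # assumed passive decay and activity bounds
      E = 100.0                     # assumed magnitude of target/obstacle input
      dt, steps = 0.02, 300

      target, obstacles = (15, 15), [(8, 8), (8, 9), (9, 8)]
      x = np.zeros((20, 20))

      for _ in range(steps):
          xp = np.clip(x, 0, None)                      # [x]^+ : only excitation spreads
          lateral = np.zeros_like(x)
          lateral[1:, :] += xp[:-1, :]; lateral[:-1, :] += xp[1:, :]
          lateral[:, 1:] += xp[:, :-1]; lateral[:, :-1] += xp[:, 1:]
          I = np.zeros_like(x)
          I[target] = E
          for ob in obstacles:
              I[ob] = -E
          Ip, Im = np.clip(I, 0, None), np.clip(-I, 0, None)
          dx = -A * x + (B - x) * (Ip + lateral) - (D + x) * Im
          x += dt * dx

      def next_cell(pos, activity):
          """Steepest gradient ascent: move to the neighbouring cell with the highest activity."""
          i, j = pos
          nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)
                  and 0 <= i + di < activity.shape[0] and 0 <= j + dj < activity.shape[1]]
          return max(nbrs, key=lambda p: activity[p])

      print(next_cell((2, 2), x))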

  5. A mathematical analysis of rebound in a target-mediated drug disposition model: II. With feedback.

    PubMed

    Aston, Philip J; Derks, Gianne; Agoram, Balaji M; van der Graaf, Piet H

    2017-07-01

    We consider the possibility of free receptor (antigen/cytokine) levels rebounding to higher than the baseline level after the application of an antibody drug, using a target-mediated drug disposition model. It is assumed that the receptor synthesis rate experiences homeostatic feedback from the receptor levels. It is shown that, for a very fast feedback response, the occurrence of rebound is determined by the ratio of the elimination rates, in much the same way as without feedback. However, for a slow feedback response, there will always be rebound. This result is illustrated with an example involving the drug efalizumab for patients with psoriasis. It is shown that slow feedback can be a plausible explanation for the observed rebound in this example.
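
    A minimal sketch of a target-mediated drug disposition system with a slow homeostatic feedback compartment acting on receptor synthesis is given below; the equations follow the generic TMDD structure, and all rate constants are illustrative assumptions rather than the parameterization analyzed in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # L = free drug, R = free receptor, P = drug-receptor complex,
      # F = feedback moderator that raises synthesis when R falls below baseline.
      kel, kon, koff, kint = 0.1, 1.0, 0.01, 0.05   # assumed elimination/binding rates
      ksyn, kdeg = 1.0, 0.1                          # assumed synthesis/degradation rates
      R0 = ksyn / kdeg                               # receptor baseline
      tau = 50.0                                     # slow feedback time constant (assumed)

      def rhs(t, y):
          L, R, P, F = y
          dL = -kel * L - kon * L * R + koff * P
          dR = ksyn * F - kdeg * R - kon * L * R + koff * P
          dP = kon * L * R - (koff + kint) * P
          dF = (R0 / max(R, 1e-9) - F) / tau         # slow homeostatic feedback
          return [dL, dR, dP, dF]

      sol = solve_ivp(rhs, (0.0, 400.0), [100.0, R0, 0.0, 1.0], method="LSODA", max_step=0.5)
      rebound = sol.y[1].max() > 1.01 * R0           # does free receptor overshoot its baseline?
      print("rebound above baseline:", rebound)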

  6. Prevention of hepatocellular carcinoma: potential targets, experimental models, and clinical challenges

    PubMed Central

    Hoshida, Yujin; Fuchs, Bryan C.; Tanabe, Kenneth K.

    2013-01-01

    Chronic fibrotic liver diseases such as viral hepatitis eventually progress to liver cirrhosis, which leads to the development of hepatocellular carcinoma (HCC). Given the limited therapeutic efficacy in advanced HCC, prevention of HCC development could be an effective strategy for improving patient prognosis; however, no established therapy yet meets this goal. Studies have elucidated a wide variety of molecular mechanisms and signaling pathways involved in HCC development. Genetically engineered or chemically treated experimental models of cirrhosis and HCC have been developed and have shown their potential value in investigating molecular therapeutic targets and diagnostic biomarkers for HCC prevention. In this review, we provide an overview of potential prevention targets and currently available experimental models, and we discuss strategies to translate the findings into clinical practice. PMID:22873223

  7. Effects of target heating on experiments using Kα and Kβ diagnostics.

    PubMed

    Palmeri, P; Boutoux, G; Batani, D; Quinet, P

    2015-09-01

    We describe the impact of target heating and ionization on the Kα and Kβ emission induced by the propagation of hot electrons generated by laser-matter interaction. We consider copper as a test case and, starting from basic principles, we calculate the changes in emission wavelength, ionization cross section, and fluorescence yield as Cu is progressively ionized. We finally consider the more realistic case in which the hot electrons have a distribution of energies with average energies of 50 and 500 keV (representative of "shock ignition" and "fast ignition" experiments, respectively) and in which the ions are distributed according to ionization equilibrium. In addition, by comparing our theoretical calculations with existing data, we demonstrate that this study offers a generic theoretical background for temperature diagnostics in laser-plasma interactions.

  8. X-ray spectroscopic diagnostics and modeling of polar-drive implosion experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hakel, P.; Kyrala, G. A.; Bradley, P. A.; Krasheninnikova, N. S.; Murphy, T. J.; Schmitt, M. J.; Tregillis, I. L.; Kanzleieter, R. J.; Batha, S. H.; Fontes, C. J.; Sherrill, M. E.; Kilcrease, D. P.; Regan, S. P.

    2014-06-01

    A series of experiments featuring laser-imploded plastic-shell targets filled with hydrogen or deuterium were performed on the National Ignition Facility. The shells (some deuterated) were doped in selected locations with Cu, Ga, and Ge, whose spectroscopic signals (indicative of local plasma conditions) were collected with a time-integrated, 1-D imaging, spectrally resolved, and absolute-intensity calibrated instrument. The experimental spectra compare well with radiation hydrodynamics simulations post-processed with a non-local thermal equilibrium atomic kinetics and spectroscopic-quality radiation-transport model. The obtained degree of agreement between the modeling and experimental data supports the application of spectroscopic techniques for the determination of plasma conditions, which can ultimately lead to the validation of theoretical models for thermonuclear burn in the presence of mix. Furthermore, the use of a lower-Z dopant element (e.g., Fe) is suggested for future experiments, since the ˜2 keV electron temperatures reached in mixed regions are not high enough to drive sufficient H-like Ge and Cu line emissions needed for spectroscopic plasma diagnostics.

  9. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  10. Modeling of detachment experiments at DIII-D

    DOE PAGES

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...

    2014-11-26

    Edge fluid-plasma/kinetic-neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  11. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    NASA Astrophysics Data System (ADS)

    Jun, Hu; Huijun, Sun; Juan, Wei; Xiaodan, Chen; Lei, You; Musong, Gu

    2014-10-01

    A novel three-dimensional cellular automata evacuation model is proposed that accounts for stairs, paired effects, and varying velocities in pedestrian evacuation. In the model, the probability of a pedestrian moving to a target position at the next time step is defined on the basis of a distance profit and a repulsive-force profit, and the evacuation strategy is elaborated in detail by analyzing the varying velocities and repulsion phenomena in the moving process. Finally, experiments with the simulation platform were conducted to study the relationships among evacuation time, average velocity, and pedestrian velocity. The results showed that when the proportion of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; conversely, when the proportion of paired pedestrians was higher, a conflict-avoidance strategy was more effective, and priority should be given to dispersed evacuation.

  12. Evolution of egg target size: an analysis of selection on correlated characters.

    PubMed

    Podolsky, R D

    2001-12-01

    In broadcast-spawning marine organisms, chronic sperm limitation should select for traits that improve chances of sperm-egg contact. One mechanism may involve increasing the size of the physical or chemical target for sperm. However, models of fertilization kinetics predict that increasing egg size can reduce net zygote production due to an associated decline in fecundity. An alternate method for increasing physical target size is through addition of energetically inexpensive external structures, such as the jelly coats typical of eggs in species from several phyla. In selection experiments on eggs of the echinoid Dendraster excentricus, in which sperm was used as the agent of selection, eggs with larger overall targets were favored in fertilization. Actual shifts in target size following selection matched quantitative predictions of a model that assumed fertilization was proportional to target size. Jelly volume and ovum volume, two characters that contribute to target size, were correlated both within and among females. A cross-sectional analysis of selection partitioned the independent effects of these characters on fertilization success and showed that they experience similar direct selection pressures. Coupled with data on relative organic costs of the two materials, these results suggest that, under conditions where fertilization is limited by egg target size, selection should favor investment in low-cost accessory structures and may have a relatively weak effect on the evolution of ovum size.
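
    The quantitative prediction referred to above can be illustrated with a small numeric check: if fertilization probability is proportional to target size, the mean target size among fertilized eggs is the size-biased mean E[s²]/E[s]. The size distribution below is hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical pre-selection distribution of egg target sizes (ovum + jelly coat).
      sizes = rng.normal(loc=100.0, scale=15.0, size=200_000)
      sizes = sizes[sizes > 0]

      # Predicted mean after selection, if fertilization probability is proportional to size.
      predicted_shifted_mean = np.mean(sizes**2) / np.mean(sizes)

      # Monte Carlo "sperm-limited" selection under the same assumption.
      p = sizes / sizes.max()
      fertilized = sizes[rng.random(sizes.size) < p]

      print(predicted_shifted_mean, fertilized.mean())   # the two agree closely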

  13. Dosimetric model for intraperitoneal targeted liposomal radioimmunotherapy of ovarian cancer micrometastases

    NASA Astrophysics Data System (ADS)

    Syme, A. M.; McQuarrie, S. A.; Middleton, J. W.; Fallone, B. G.

    2003-05-01

    A simple model has been developed to investigate the dosimetry of micrometastases in the peritoneal cavity during intraperitoneal targeted liposomal radioimmunotherapy. The model is applied to free-floating tumours with radii between 0.005 cm and 0.1 cm. Tumour dose is assumed to come from two sources: free liposomes in solution in the peritoneal cavity and liposomes bound to the surface of the micrometastases. It is assumed that liposomes do not penetrate beyond the surface of the tumours and that the total amount of surface antigen does not change over the course of treatment. Integrated tumour doses are expressed as a function of biological parameters that describe the rates at which liposomes bind to and unbind from the tumour surface, the rate at which liposomes escape from the peritoneal cavity and the tumour surface antigen density. Integrated doses are translated into time-dependent tumour control probabilities (TCPs). The results of the work are illustrated in the context of a therapy in which liposomes labelled with Re-188 are targeted at ovarian cancer cells that express the surface antigen CA-125. The time required to produce a TCP of 95% is used to investigate the importance of the various parameters. The relative contributions of surface-bound radioactivity and unbound radioactivity are used to assess the conditions required for a targeted approach to provide an improvement over a non-targeted approach during intraperitoneal radiation therapy. Using Re-188 as the radionuclide, the model suggests that, for microscopic tumours, the relative importance of the surface-bound radioactivity increases with tumour size. This is evidenced by the requirement for larger antigen densities on smaller tumours to effect an improvement in the time required to produce a TCP of 95%. This is because for the smallest tumours considered, the unbound radioactivity is often capable of exerting a tumouricidal effect before the targeting agent has time to accumulate

  14. Modeling the target acquisition performance of active imaging systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.

    2007-04-01

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.

  15. Modeling the target acquisition performance of active imaging systems.

    PubMed

    Espinola, Richard L; Jacobs, Eddie L; Halford, Carl E; Vollmerhausen, Richard; Tofsted, David H

    2007-04-02

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
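
    A back-of-envelope check of the quoted throughput, assuming roughly 200,000 Kepler target stars and perfect parallel scaling (both assumptions, not stated in the record):

      # Back-of-envelope check of the quoted FLTI throughput. The Kepler target-star
      # count (~200,000) and perfect parallel scaling are assumptions for illustration.
      injections_per_core_hour = 16
      injections_per_star = 2000
      fraction_of_stars = 0.16
      total_kepler_targets = 200_000          # assumed
      wall_clock_hours = 200

      stars = fraction_of_stars * total_kepler_targets
      core_hours = stars * injections_per_star / injections_per_core_hour
      cores_needed = core_hours / wall_clock_hours
      print(f"{stars:.0f} stars, {core_hours:.2e} core-hours, ~{cores_needed:.0f} cores")
      # -> ~32,000 stars, ~4e6 core-hours, ~20,000 cores for a 200-hour wall clock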

  17. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    PubMed

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
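
    For orientation, an analytical depth-dose of this general kind can be sketched from a power-law range-energy fit (the Bragg-Kleeman rule R = αE^p). The sketch below is not the TPS described in the paper: it ignores straggling and nuclear losses, and the values of α and p are approximate constants for protons in water that would normally be fitted to experimental range data.

      import numpy as np

      alpha, p = 2.2e-3, 1.77            # R in cm when E is in MeV (approximate, assumed)

      def csda_range(E_MeV):
          """Bragg-Kleeman range-energy relation R = alpha * E**p."""
          return alpha * E_MeV**p

      def stopping_power(z_cm, E_MeV):
          """-dE/dz along depth z for a monoenergetic beam (no straggling, no nuclear losses)."""
          R = csda_range(E_MeV)
          z = np.asarray(z_cm, dtype=float)
          dose = np.zeros_like(z)
          inside = z < R
          dose[inside] = (R - z[inside])**(1.0 / p - 1.0) / (p * alpha**(1.0 / p))
          return dose

      z = np.linspace(0.0, 0.7, 200)
      d = stopping_power(z, 24.0)        # 24 MeV beam, as in the abstract's example
      print(f"range ~ {csda_range(24.0):.2f} cm, depth of max dose ~ {z[np.argmax(d)]:.2f} cm")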

  18. Young Children's Prosocial and Aggressive Behaviors and Their Experiences of Being Targeted for Similar Behaviors by Peers

    ERIC Educational Resources Information Center

    Persson, Gun E. B.

    2005-01-01

    Children's target experiences (as recipients of prosocial peer acts and victims of peer aggression) were investigated for their concurrent and longitudinal associations with prosocial and aggressive behavior. Forty-four children (initially 22-40 months) were observed in naturalistic interactions with peers during a two-month period for each of…

  19. Interaction of the high energy deuterons with the graphite target in the plasma focus devices based on Lee model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akel, M., E-mail: pscientific2@aec.org.sy; Alsheikh Salo, S.; Ismael, Sh.

    2014-07-15

    Numerical experiments are systematically carried out using the Lee model code, extended to compute ion beams, on various plasma focus devices operated with deuterium gas. The deuteron beam properties are studied for low- and high-energy plasma focus devices. The energy spectral distribution of deuterons ejected from the pinch plasma is calculated, and the number of ions with energies around 1 MeV is then determined. The deuteron-graphite target interaction is studied for different conditions. The yield of the ¹²C(d,n)¹³N reaction and the induced radioactivity in the graphite solid target are investigated for single-shot and multi-shot plasma focus devices. Our results present optimized high-energy repetitive plasma focus devices as an alternative to accelerators for the production of ¹³N short-lived radioisotopes. However, technical challenges await solutions on two fronts: (a) operation of plasma focus machines at high repetition rates for a sufficient period of time and (b) design of durable targets that can withstand the thermal load.

  20. Shooting Star Experiment

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Shooting Star Experiment (SSE) is designed to develop and demonstrate the technology required to focus the sun's energy and use that energy for inexpensive space propulsion research. Pictured is an engineering model (Pathfinder III) of the Shooting Star Experiment (SSE). This model was used to test and characterize the motion and deformation of the structure caused by thermal effects. In this photograph, alignment targets are being placed on the engineering model so that a theodolite (alignment telescope) could be used to accurately measure the deformation and deflections of the engineering model under extreme conditions, such as the cold of deep space, the heat of the sun, and vacuum. This thermal vacuum test was performed at the X-Ray Calibration Facility because of the size of the test article and the capabilities of the facility to simulate in-orbit conditions.

  1. Modeling the Nab Experiment Electronics in SPICE

    NASA Astrophysics Data System (ADS)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron decay coefficients a (the electron-neutrino correlation) and b (the Fierz interference term) to precisely test the Standard Model and to probe for physics beyond the Standard Model. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions to each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solution, and empirically determined data.

  2. The Target Model of Strategic Interaction of Kazan Federal University and the Region in the Field of Education

    ERIC Educational Resources Information Center

    Gabdulchakov, Valerian F.

    2016-01-01

    The subject of the study in this article is the conceptual basis for constructing a target model of interaction between a university and its region; hence the topic of the article, "The target model of strategic interaction between the University and the region in the field of education." The objective was to design a target model of this…

  3. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, thereby improving the detection efficiency for ship targets in remote sensing images.

  4. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model-Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. The main assumptions causing differences among models were identified and evaluated, and the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  5. Evaluation of modeled cloud chemistry mechanism against laboratory irradiation experiments: The HxOy/iron/carboxylic acid chemical system

    NASA Astrophysics Data System (ADS)

    Long, Yoann; Charbouillot, Tiffany; Brigante, Marcello; Mailhot, Gilles; Delort, Anne-Marie; Chaumerliac, Nadine; Deguillaume, Laurent

    2013-10-01

    Currently, cloud chemistry models include more detailed and explicit multiphase mechanisms based on laboratory experiments that determine values such as kinetic constants, stability constants of complexes and hydration constants. However, these models are still subject to many uncertainties related to the aqueous chemical mechanism they use. In particular, the role of oxidants such as iron and hydrogen peroxide in the oxidative capacity of the cloud aqueous phase has typically not been validated against laboratory experimental data. To fill this gap, we adapted the M2C2 model (Model of Multiphase Cloud Chemistry) to simulate irradiation experiments on synthetic aqueous solutions under controlled conditions (e.g., pH, temperature, light intensity) and for actual cloud water samples. Various chemical compounds that purportedly contribute to the oxidative budget in cloud water (i.e., iron and oxidants such as hydrogen peroxide, H2O2) were considered. Organic compounds (oxalic, formic and acetic acids) were taken into account as target species because they have the potential to form iron complexes and are good indicators of the oxidative capacity of the cloud aqueous phase via their oxidation in this medium. The range of concentrations for all of the chemical compounds evaluated was representative of in situ measurements. Numerical outputs were compared with experimental data consisting of the time evolution of the concentrations of the target species. The chemical mechanism in the model describing the “oxidative engine” of the HxOy/iron (HxOy = H2O2, HO2•/O2•− and HO•) chemical system was consistent with laboratory measurements. Thus, the degradation of the carboxylic acids evaluated was closely reproduced by the model. However, photolysis of the Fe(C2O4)+ complex needs to be considered in cloud chemistry models for polluted conditions (i.e., acidic pH) to correctly reproduce oxalic acid degradation. We also show that iron and formic acid lead to

  6. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and we propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  7. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    PubMed

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy present in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Second, redundancy is quantified by the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model will be accessible free of charge at our web server.
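
    A hedged sketch of the two-stage AVC-style feature selection (ANOVA F-value rough selection followed by a Pearson-correlation redundancy filter) feeding an SVM is shown below; the data, thresholds and kernel settings are placeholders rather than the published configuration.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Placeholder data standing in for conotoxin feature vectors.
      X, y = make_classification(n_samples=300, n_features=120, n_informative=20,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)

      # Stage 1 (rough selection): drop features whose ANOVA F value is small.
      F, _ = f_classif(X, y)
      keep = F > np.percentile(F, 50)                  # assumed threshold
      X1 = X[:, keep]

      # Stage 2 (refinement): drop features highly correlated with an already kept one.
      corr = np.abs(np.corrcoef(X1, rowvar=False))
      selected = []
      for j in range(X1.shape[1]):
          if all(corr[j, k] < 0.9 for k in selected):  # assumed redundancy threshold
              selected.append(j)
      X2 = X1[:, selected]

      # Final classifier: SVM on the reduced feature set.
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      print(cross_val_score(model, X2, y, cv=5).mean())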

  8. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model

    PubMed Central

    Xiaolei, Wang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy present in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Second, redundancy is quantified by the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model will be accessible free of charge at our web server. PMID:28497044

  9. The variable target model: a paradigm shift in the incremental haemodialysis prescription.

    PubMed

    Casino, Francesco Gaetano; Basile, Carlo

    2017-01-01

    The recent interest in incremental haemodialysis (HD) is hindered by the current prescription based on a fixed target model (FTM) for the total (dialytic + renal) equivalent continuous clearance (ECC). The latter is expressed either as standard Kt/V (stdKt/V), i.e. the pre-dialysis averaged concentration of urea-based ECC, or EKRc, i.e. the time averaged concentration-based ECC, corrected for volume (V) = 40 L. Accordingly, there are two different targets: stdKt/V = 2.3 volumes per week (v/wk) and EKRc = 13 mL/min/40 L. However, fixing the total ECC necessarily implies perfect equivalence of its components: the residual renal urea clearance (Kru) and the dialysis clearance (Kd). This assumption is wrong because Kru has much greater clinical weight than Kd. Here we propose that the ECC target varies as an inverse function of Kru, from a maximum value in anuria to a minimum value at Kru levels not yet requiring dialysis. The aim of the present study was to compare the current FTM with the proposed variable target model (VTM). The double pool urea kinetic model was used to model dialysis sessions for 360 virtual patients and establish equations predicting the ECC as a function of Kd, Kru and the number of sessions per week. An end-dialysis urea distribution V of 35 L (corresponding to a body surface area of 1.73 m²) was used, so that the current EKRc target of 13 mL/min/40 L could be recalculated at an EKRc35 value of 12 mL/min/35 L, equal to 12 mL/min/1.73 m². The latter also coincides with the maximum value of the EKRc35 variable target in anuria. The minimum target value of EKRc35 was assumed to coincide with Kru corrected for V = 35 L (i.e. Krc35 = 6 mL/min/1.73 m²). The corresponding target for stdKt/V was assumed to vary from 2.3 v/wk at Krc35 = 0 to 1.7 v/wk at Krc35 = 6 mL/min/1.73 m². On this basis, the variable target values can be obtained from the following linear equations: target EKRc35 = 12 - Krc35; target stdKt/V = 2.3 - 0.1 × Krc35. Two
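
    The two linear target equations quoted above can be wrapped in a small helper; clamping Krc35 to the 0-6 mL/min/1.73 m² range discussed in the abstract is an added assumption.

      def variable_targets(krc35):
          """Variable ECC targets from the abstract's linear equations.

          krc35: residual renal urea clearance corrected to V = 35 L
                 (mL/min/1.73 m^2), expected between 0 (anuria) and 6.
          Returns (target EKRc35 in mL/min/1.73 m^2, target stdKt/V in volumes/week).
          """
          krc35 = max(0.0, min(krc35, 6.0))   # clamp to the range discussed (assumed)
          ekrc35 = 12.0 - krc35
          stdktv = 2.3 - 0.1 * krc35
          return ekrc35, stdktv

      # Example: a patient with Krc35 = 3 mL/min/1.73 m^2
      print(variable_targets(3.0))   # -> (9.0, 2.0)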

  10. Variation in psychosocial influences according to the dimensions and content of children's unusual experiences: potential routes for the development of targeted interventions.

    PubMed

    Ruffell, Tamatha; Azis, Matilda; Hassanali, Nedah; Ames, Catherine; Browning, Sophie; Bracegirdle, Karen; Corrigall, Richard; Laurens, Kristin R; Hirsch, Colette; Kuipers, Elizabeth; Maddox, Lucy; Jolley, Suzanne

    2016-03-01

    The psychosocial processes implicated in the development and maintenance of psychosis differ according to both the dimensional attributes (conviction, frequency, associated distress, adverse life impact) and the content or type (e.g. grandiosity, hallucinations, paranoia) of the psychotic symptoms experienced. This has informed the development of 'targeted' cognitive behavioural therapy for psychosis (CBTp): interventions focusing on specific psychological processes in the context of particular symptom presentations. In adults, larger effect sizes for change in primary outcomes are typically reported in trials of targeted interventions, compared to those for trials of generic CBTp approaches with multiple therapeutic foci. We set out to test the theoretical basis for developing targeted CBTp interventions for young people with distressing psychotic-like, or unusual, experiences (UEs). We investigated variations in the psychosocial processes previously associated with self-reported UE severity (reasoning, negative life events, emotional problems) according to UE dimensional attributes and content/type (using an established five-factor model) in a clinically referred sample of 72 young people aged 8-14 years. Regression analyses revealed associations of conviction and grandiosity with reasoning; of frequency, and hallucinations and paranoia, with negative life events; and of distress/adverse life impact, and paranoia and hallucinations, with emotional problems. We conclude that psychological targets for intervention differ according to particular characteristics of childhood UEs in much the same way as for psychotic symptoms in adults. The development of targeted interventions is therefore indicated, and tailoring therapy according to presentation should further improve clinical outcomes for these young people.

  11. MRPrimerW: a tool for rapid design of valid high-quality primers for multiple target qPCR experiments.

    PubMed

    Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo

    2016-07-08

    Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com.

  12. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background: Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, the results of the various methods tend to be inconsistent and usually lead to many false positives. To address these issues, we developed an integrated model for identifying plant miRNA-target interactions. Results: Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed previously existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The proposed model, constructed on Arabidopsis thaliana, was applied to Oryza sativa and Vitis vinifera to demonstrate that it is effective for other plant species. Conclusions: The integrated model of online predictors and a local PCA-SVM classifier yielded credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153

  13. Soft computing model for optimized siRNA design by identifying off target possibilities using artificial neural network model.

    PubMed

    Murali, Reena; John, Philips George; Peter S, David

    2015-05-15

    The ability of small interfering RNA (siRNA) to perform posttranscriptional gene regulation by knocking down targeted genes is an important research topic in functional genomics, biomedical research and cancer therapeutics. Many tools have been developed to design exogenous siRNAs with high experimental inhibition. Even though a considerable amount of work has been done on designing exogenous siRNAs, the design of effective siRNA sequences is still challenging because the target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. In some cases, siRNAs may tolerate mismatches with the target mRNA, but knockdown of genes other than the intended target could have serious consequences. Hence, to design siRNAs, two important concepts must be considered: the ability to knock down target genes and the off-target possibility on any nontarget genes. So before doing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficacy against a particular target. Only a few methods have been developed that consider both the efficacy and the off-target possibility of an siRNA against a gene. In this paper we present a new neural network model with whole stacking energy (ΔG) that enables identification of the efficacy and off-target effect of siRNAs against target genes. The tool lists all siRNAs against a particular target with their inhibition efficacy and number of matches or sequence similarity with other genes in the database. We achieved an excellent performance of Pearson correlation coefficient (R = 0.74) and area under the curve (AUC = 0.906) when the threshold of whole stacking energy is ≥ -34.6 kcal/mol. To the best of the authors' knowledge, this is one of the best scores when considering the combined efficacy and off-target possibility of siRNAs for silencing a gene. The proposed model

  14. Modeling spallation reactions in tungsten and uranium targets with the Geant4 toolkit

    NASA Astrophysics Data System (ADS)

    Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter

    2012-02-01

    We study primary and secondary reactions induced by 600 MeV proton beams in monolithic cylindrical targets made of natural tungsten and uranium by using Monte Carlo simulations with the Geant4 toolkit [1-3]. The Bertini intranuclear cascade model, the Binary cascade model and the IntraNuclear Cascade Liège (INCL) with the ABLA model [4] were used as calculational options to describe nuclear reactions. Fission cross sections, neutron multiplicity and mass distributions of fragments for ²³⁸U fission induced by 25.6 and 62.9 MeV protons are calculated and compared to recent experimental data [5]. Time distributions of neutron leakage from the targets and heat depositions are calculated. This project is supported by Siemens Corporate Technology.

  15. Ground target geolocation based on digital elevation model for airborne wide-area reconnaissance system

    NASA Astrophysics Data System (ADS)

    Qiao, Chuan; Ding, Yalin; Xu, Yongsen; Xiu, Jihong

    2018-01-01

    To obtain the geographical position of a ground target accurately, a geolocation algorithm based on a digital elevation model (DEM) is developed for an airborne wide-area reconnaissance system. Using the platform position and attitude information measured by the airborne position and orientation system and the gimbal angle information from the encoder, the line-of-sight pointing vector in the Earth-centered Earth-fixed coordinate frame is obtained by homogeneous coordinate transformation. The target longitude and latitude can then be solved with the elliptical Earth model and the global DEM. The influences of systematic error and measurement error on the geolocation accuracy are analyzed by the Monte Carlo method. The simulation results show that this algorithm clearly improves the geolocation accuracy of ground targets in rough terrain. The geolocation accuracy of a moving ground target can be further improved by moving average filtering (MAF). The validity of the geolocation algorithm is verified by a flight test in which the aircraft flies at a geodetic height of 15,000 m and the outer gimbal angle is <47°. The geolocation root mean square error of the target trajectory is <45 m, and <7 m after MAF.
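
    A minimal sketch of the line-of-sight/terrain intersection underlying such an algorithm is given below; the DEM lookup is a hypothetical stand-in, the fixed-step march with a single bisection refinement is a simplification, and none of this is the authors' implementation.

      import numpy as np

      A, F = 6378137.0, 1.0 / 298.257223563          # WGS-84 ellipsoid
      E2 = F * (2.0 - F)

      def ecef_to_geodetic(x, y, z):
          """Iterative ECEF -> (lat, lon, height) conversion on the WGS-84 ellipsoid."""
          lon = np.arctan2(y, x)
          p = np.hypot(x, y)
          lat = np.arctan2(z, p * (1.0 - E2))
          for _ in range(5):                          # converges quickly
              n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
              h = p / np.cos(lat) - n
              lat = np.arctan2(z, p * (1.0 - E2 * n / (n + h)))
          return np.degrees(lat), np.degrees(lon), h

      def dem_height(lat, lon):
          """Hypothetical DEM lookup: flat terrain at 500 m (a real system samples a gridded DEM)."""
          return 500.0

      def geolocate(platform_ecef, los_unit_ecef, step=10.0, max_range=50e3):
          """March along the line of sight until it drops below the DEM surface."""
          prev = platform_ecef
          for r in np.arange(step, max_range, step):
              pt = platform_ecef + r * los_unit_ecef
              lat, lon, h = ecef_to_geodetic(*pt)
              if h <= dem_height(lat, lon):           # first point under the terrain
                  mid = 0.5 * (prev + pt)             # one bisection step refines the hit
                  return ecef_to_geodetic(*mid)
              prev = pt
          return None

      platform = np.array([A + 15000.0, 0.0, 0.0])    # 15 km above the equator (toy example)
      los = np.array([-0.999, 0.03, 0.01]); los /= np.linalg.norm(los)
      print(geolocate(platform, los))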

  16. Experimental model of transthoracic, vascular-targeted, photodynamically induced myocardial infarction.

    PubMed

    Chrastina, Adrian; Pokreisz, Peter; Schnitzer, Jan E

    2014-01-15

    We describe a novel model of myocardial infarction (MI) in rats induced by percutaneous transthoracic low-energy laser-targeted photodynamic irradiation. The procedure does not require thoracotomy and represents a minimally invasive alternative to existing surgical models. Target cardiac area to be photodynamically irradiated was triangulated from the thoracic X-ray scans. The acute phase of MI was histopathologically characterized by the presence of extensive vascular occlusion, hemorrhage, loss of transversal striations, neutrophilic infiltration, and necrotic changes of cardiomyocytes. Consequently, damaged myocardium was replaced with fibrovascular and granulation tissue. The fibrotic scar in the infarcted area was detected by computer tomography imaging. Cardiac troponin I (cTnI), a specific marker of myocardial injury, was significantly elevated at 6 h (41 ± 6 ng/ml, n = 4, P < 0.05 vs. baseline) and returned to baseline after 72 h. Triphenyltetrazolium chloride staining revealed transmural anterolateral infarcts targeting 25 ± 3% of the left ventricle at day 1 with a decrease to 20 ± 3% at day 40 (n = 6 for each group, P < 0.01 vs. day 1). Electrocardiography (ECG) showed significant ST-segment elevation in the acute phase with subsequent development of a pathological Q wave and premature ventricular contractions in the chronic phase of MI. Vectorcardiogram analysis of spatiotemporal electrical signal transduction revealed changes in inscription direction, QRS loop morphology, and redistribution in quadrant areas. The photodynamically induced MI in n = 51 rats was associated with 12% total mortality. Histological findings, ECG abnormalities, and elevated cTnI levels confirmed the photosensitizer-dependent induction of MI after laser irradiation. This novel rodent model of MI might provide a platform to evaluate new diagnostic or therapeutic interventions.

  17. Cratering and penetration experiments in teflon targets at velocities from 1 to 7 km/s

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Cintala, Mark; Bernhard, Ronald P.; Cardenas, Frank; Davidson, William; Haynes, Gerald; See, Thomas H.; Winkler, Jerry; Knight, Jeffrey

    1994-01-01

    Approximately 20 sq m of protective thermal blankets, largely composed of Teflon, were retrieved from the Long Duration Exposure Facility after the spacecraft spent approximately 5.7 years in space. Examination of these blankets revealed that they contained thousands of hypervelocity impact features ranging from micron-sized craters to penetration holes several millimeters in diameter. We conducted impact experiments to reproduce such features and to understand the relationships between projectile size and the resulting crater or penetration hole diameter over a wide range of impact velocities. Such relationships are needed to derive the size and mass frequency distribution and flux of natural and man-made particles in low-earth orbit. Powder propellant and light-gas guns were used to launch soda-lime glass spheres into pure Teflon targets at velocities ranging from 1 to 7 km/s. Target thickness varied over more than three orders of magnitude from finite halfspace targets to very thin films. Cratering and penetration of massive Teflon targets is dominated by brittle failure and the development of extensive spall zones at the target's front and, if penetrated, the target's rear side. Mass removal by spallation at the back side of Teflon targets may be so severe that the absolute penetration hole diameter can become larger than that of a standard crater. The crater diameter in infinite halfspace Teflon targets increases, at otherwise constant impact conditions, with encounter velocity by a factor of V^0.44. In contrast, the penetration hole size in very thin foils is essentially unaffected by impact velocity. Penetrations at target thicknesses intermediate to these extremes will scale with variable exponents of V. Our experimental matrix is sufficiently systematic and complete, up to 7 km/s, to make reasonable recommendations for velocity-scaling of Teflon craters and penetrations. We specifically suggest that cratering behavior and associated equations apply
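
    The crater-diameter velocity scaling quoted above (D proportional to V^0.44 in thick Teflon targets, all else equal) can be applied as a one-line relation; the reference crater size and velocity below are placeholders.

      def scaled_crater_diameter(d_ref_mm, v_ref_kms, v_kms):
          """Crater diameter scaling in thick Teflon targets, D ∝ V^0.44 (all else equal).

          d_ref_mm / v_ref_kms: a measured reference crater and its impact velocity
          (placeholder values below); v_kms: the velocity to scale to.
          """
          return d_ref_mm * (v_kms / v_ref_kms) ** 0.44

      # Example: a crater measured at 3 km/s, scaled to a 7 km/s encounter.
      print(scaled_crater_diameter(1.0, 3.0, 7.0))   # ~1.45x larger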

  18. A stochastic model for eye movements during fixation on a stationary target.

    NASA Technical Reports Server (NTRS)

    Vasudevan, R.; Phatak, A. V.; Smith, J. D.

    1971-01-01

    A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation that takes into account the microsaccadic jumps, pictured as being due to point processes, and the high-frequency muscle tremor, represented as white noise. The transform of the probability density function for local-area motion is obtained, leading to explicit expressions for their means and moments. The moments evaluated from the model compare well with experimental results.
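
    A minimal 1-D sketch of the two-level structure described above, with a Langevin-like drift, white-noise tremor and Poisson microsaccade jumps simulated by Euler-Maruyama, is given below; all rates and amplitudes are illustrative assumptions, not the fitted values of the model.

      import numpy as np

      rng = np.random.default_rng(1)

      dt, T = 1e-3, 2.0                 # seconds
      k = 20.0                          # drift pulling the image point back to centre (assumed)
      sigma = 0.05                      # tremor (white-noise) intensity, deg/sqrt(s) (assumed)
      saccade_rate = 2.0                # microsaccades per second, Poisson (assumed)
      saccade_amp = 0.1                 # degrees (assumed)

      n = int(T / dt)
      x = np.zeros(n)
      for i in range(1, n):
          jump = 0.0
          if rng.random() < saccade_rate * dt:                    # Poisson point process
              jump = saccade_amp * rng.choice([-1.0, 1.0])
          x[i] = (x[i - 1] - k * x[i - 1] * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal() + jump)

      print(f"mean {x.mean():+.4f} deg, std {x.std():.4f} deg")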

  19. Laser range profiling for small target recognition

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Tulldahl, Michael

    2016-05-01

    The detection and classification of small surface and airborne targets at long ranges is a growing need for naval security. Long-range identification, or identification of small targets at closer range, is limited in imaging by the demand for very high transverse sensor resolution, which motivates the use of one-dimensional laser techniques for target identification. These include vibrometry and laser range profiling. Vibrometry can give good results but requires certain vibrating parts of the target to be in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector, and the same laser can be used for active imaging when the target comes closer and is angularly resolved. The present paper shows both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and of a UAV mockup at close range (1.3 km). We obtained good results with the profiling system for both target detection and recognition. Comparison of experimental and simulated range waveforms based on CAD models of the targets supports the idea of having a profiling system as a first recognition sensor, thus narrowing the search space for automatic target recognition based on imaging at close ranges. The naval experiments took place in the Baltic Sea with many other active and passive EO sensors alongside the profiling system. Data fusion between laser profiling and imaging systems is discussed. The UAV experiments were made from the rooftop laboratory at FOI.

  20. An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.

    PubMed

    Bradley, Stuart

    2015-11-20

    Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on the optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate of hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space with a specified high success rate are described by simple equations, providing guidance on design. A "cost function" is introduced which, when minimized, optimizes design, operating, and risk-mitigation costs.
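
    The two dimensionless parameters can be explored with a rough Monte Carlo sketch of the geometry, shown below: firings form a grid with transverse spacing s and along-track spacing d, and a randomly placed circular target of diameter D is hit when a firing point falls inside it. This is only a geometric illustration under those assumptions, not the paper's analytic model.

    ```python
    # Monte Carlo estimate of hit probability as a function of the two
    # dimensionless ratios s/D (spacing/diameter) and d/D (step/diameter).
    import numpy as np

    def hit_rate(spacing_ratio, step_ratio, n_trials=100_000, seed=1):
        """Hit probability for a random target against a rectangular firing grid."""
        rng = np.random.default_rng(seed)
        # place the target center uniformly inside one grid cell (units of D)
        x = rng.uniform(0.0, spacing_ratio, n_trials)
        y = rng.uniform(0.0, step_ratio, n_trials)
        # distance to the nearest of the four surrounding firing points
        dx = np.minimum(x, spacing_ratio - x)
        dy = np.minimum(y, step_ratio - y)
        return np.mean(np.hypot(dx, dy) <= 0.5)  # hit if within target radius D/2

    print(hit_rate(1.0, 1.0))   # dense firing grid relative to target size
    print(hit_rate(3.0, 3.0))   # sparse grid: success rate drops
    ```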

  1. The relationship study between image features and detection probability based on psychology experiments

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. Obtaining detection probability in practice, however, requires a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perception quantity through psychology experiments, a probability model was established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single-feature similarity degree and perception quantity was established on psychological principles, and target interpretation experiments were designed involving about five hundred interpreters and two hundred images. To reduce the correlation among image features, a large number of synthetic images were produced, each differing from its background in a single feature: brightness, chromaticity, texture, or shape. By analyzing and fitting a large body of experimental data, the model quantities were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perception quantity and target detection probability was established. Verified against a large number of target interpretations in practice, the model yields target detection probability quickly and objectively.

  2. The Hot Serial Cereal Experiment for modeling wheat response to temperature: field experiments and AgMIP-Wheat multi-model simulations

    USDA-ARS?s Scientific Manuscript database

    The data set reported here includes the part of a Hot Serial Cereal Experiment (HSC) experiment recently used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat models and quantify their response to temperature. The HSC experiment was conducted in an open-field in a semiarid environme...

  3. An electron fixed target experiment to search for a new vector boson A' decaying to e +e -

    DOE PAGES

    Rouven Essig; Schuster, Philip; Toro, Natalia; ...

    2011-02-02

    We describe an experiment to search for a new vector boson A' with weak coupling α' > 6 × 10^-8 α to electrons (α' = e^2/4π) in the mass range 65 MeV < m_A' < 550 MeV. New vector bosons with such small couplings arise naturally from a small kinetic mixing of the "dark photon" A' with the photon -- one of the very few ways in which new forces can couple to the Standard Model -- and have received considerable attention as an explanation of various dark matter related anomalies. A' bosons are produced by radiation off an electron beam, and could appear as narrow resonances with small production cross-section in the trident e+e- spectrum. We summarize the experimental approach described in a proposal submitted to Jefferson Laboratory's PAC35, PR-10-009. This experiment, the A' Experiment (APEX), uses the electron beam of the Continuous Electron Beam Accelerator Facility at Jefferson Laboratory (CEBAF) at energies of ~1-4 GeV incident on 0.5-10% radiation length tungsten wire mesh targets, and measures the resulting e+e- pairs to search for the A' using the High Resolution Spectrometer and the septum magnet in Hall A. With a ~1 month run, APEX will achieve very good sensitivity because the statistics of e+e- pairs will be ~10,000 times larger in the explored mass range than any previous search for the A' boson. These statistics and the excellent mass resolution of the spectrometers allow sensitivity to α'/α one to three orders of magnitude below current limits, in a region of parameter space of great theoretical and phenomenological interest. Similar experiments could also be performed at other facilities, such as the Mainz Microtron.

  4. Modeling of intense pulsed ion beam heated masked targets for extreme materials characterization

    NASA Astrophysics Data System (ADS)

    Barnard, John J.; Schenkel, Thomas

    2017-11-01

    Intense, pulsed ion beams locally heat materials and deliver dense electronic excitations that can induce material modifications and phase transitions. Material properties can potentially be stabilized by rapid quenching. Pulsed ion beams with pulse lengths of order ns have recently become available for materials processing. Here, we optimize mask geometries for local modification of materials by intense ion pulses. The goal is to rapidly excite targets volumetrically to the point where a phase transition or local lattice reconstruction is induced, followed by rapid cooling that stabilizes the desired material properties before the target is altered or damaged by, e.g., hydrodynamic expansion. By using a mask, the longitudinal dimension can be large compared to the transverse dimension, allowing the possibility of rapid transverse cooling. We performed HYDRA simulations that calculate peak temperatures for a series of excitation conditions and cooling rates of silicon targets with micro-structured masks and compare these to a simple analytical model. The model gives scaling laws that can guide the design of targets over a wide range of pulsed ion beam parameters.
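
    A back-of-the-envelope version of the transverse-cooling argument is sketched below: for a masked, heated stripe of half-width w, a characteristic conductive cooling time is roughly w^2/(4D) with D the thermal diffusivity. The diffusivity and stripe widths are assumed round numbers for illustration, not inputs to the HYDRA runs.

    ```python
    # Characteristic 1-D conduction time scale for a narrow heated stripe.
    # Diffusivity value is an assumed round number for silicon near room temperature.

    def cooling_time(half_width_m, diffusivity_m2_s=8e-5):
        """Order-of-magnitude transverse cooling time, t ~ w**2 / (4 D)."""
        return half_width_m ** 2 / (4.0 * diffusivity_m2_s)

    for w in (1e-6, 5e-6, 20e-6):  # stripe half-widths of 1, 5, 20 micron
        print(f"w = {w*1e6:4.0f} um  ->  t ~ {cooling_time(w)*1e9:8.2f} ns")
    ```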

  5. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.

  6. Comprehensive modeling of microRNA targets predicts functional non-conserved and non-canonical sites.

    PubMed

    Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina

    2010-01-01

    mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.

  7. Dual Target Design for CLAS12

    NASA Astrophysics Data System (ADS)

    Alam, Omair; Gilfoyle, Gerard; Christo, Steve

    2015-10-01

    An experiment to measure the neutron magnetic form factor (GnM) is planned for the new CLAS12 detector in Hall B at Jefferson Lab. This form factor will be extracted from the ratio of quasielastic electron-neutron to electron-proton scattering off a liquid deuterium (LD2) target. A collinear liquid hydrogen (LH2) target will be used to measure efficiencies at the same time as production data are collected from the LD2 target. To test target designs we have simulated CLAS12 and the target geometry. Electron-nucleon events are produced first with the QUasiElastic Event Generator (QUEEG), which models the internal motion of the nucleons in deuterium. The results are used as input to the CLAS12 Monte Carlo code gemc, a Geant4-based program that simulates the particles' interactions with each component of CLAS12, including the target material. The dual target geometry has been added to gemc, including support structures and cryogenic transport systems. A Perl script was written to define the target materials and geometries; its output is a set of database entries read by gemc at runtime. An initial study of the impact of this dual-target structure revealed limited effects on the electron momentum and angular resolutions. Work supported by the University of Richmond and the US Department of Energy.

  8. The Study on Flow Velocity Measurement of Antarctic Krill Trawl Model Experiment in North Bay of South China Sea

    NASA Astrophysics Data System (ADS)

    Chen, Shuai; Wang, Lumin; Huang, Hongliang; Zhang, Xun

    2017-10-01

    From August 25 to 29, 2014, the project team carried out an Antarctic krill trawl model experiment in the Beihai Bay of the South China Sea. To understand the flow field around the net model during the experiment, it is necessary to record the speed of the ship and to characterize the ocean flow field; the ocean current velocity was therefore measured during the experiment. The flow rate was measured using an acoustic Doppler flow meter (Vectoring Plus, Nortek, Norway). To compensate for the flow-rate error caused by ship drift, the drift of the ship was also measured with the positioning device (Snapdragon MSM8274AB, Qualcomm, USA) used in the flow-rate measurement. The results show that the actual velocity in the target sea area is in the range of 0.06-0.49 m/s and the direction is 216.17-351.70 degrees. The influencing factors were analysed and compared with previous research. This study shows that it is feasible to use a point Doppler flow meter for velocity measurement in trawl model experiments.
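
    The drift correction amounts to a vector subtraction of the ship-drift velocity (from the positioning data) from the instrument-measured velocity; a minimal sketch with made-up numbers is given below.

    ```python
    # Subtract ship drift from the measured flow to recover the true current.
    # Velocity components and values are invented examples.
    import numpy as np

    def true_current(measured_enu, ship_drift_enu):
        """Subtract ship drift (east, north components, m/s) from the measured flow."""
        return np.asarray(measured_enu) - np.asarray(ship_drift_enu)

    v = true_current([0.35, -0.10], [0.12, -0.05])
    speed = np.hypot(*v)
    direction_deg = (np.degrees(np.arctan2(v[0], v[1])) + 360.0) % 360.0  # compass convention
    print(f"speed = {speed:.2f} m/s, direction = {direction_deg:.1f} deg")
    ```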

  9. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    NASA Astrophysics Data System (ADS)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be used to calibrate reactive flow models by running hydrocode simulations and successively adjusting model parameters until a match with experiment is achieved. These simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool) that assumes plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
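
    Pop-plot data of this kind are commonly summarized as a power law, run distance = A*P^(-n), which is linear in log-log space; the sketch below fits such a law to invented placeholder pressure/run-distance pairs (not ARL data).

    ```python
    # Power-law fit of run-to-detonation distance versus input shock pressure.
    import numpy as np

    pressure_gpa = np.array([3.0, 5.0, 8.0, 12.0])   # hypothetical input pressures
    run_dist_mm = np.array([12.0, 6.5, 3.4, 2.0])    # hypothetical run distances

    slope, intercept = np.polyfit(np.log(pressure_gpa), np.log(run_dist_mm), 1)
    print(f"fitted exponent n = {-slope:.2f}, prefactor A = {np.exp(intercept):.2f} mm")
    ```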

  10. Modeling Patient-Specific Magnetic Drug Targeting Within the Intracranial Vasculature

    PubMed Central

    Patronis, Alexander; Richardson, Robin A.; Schmieschek, Sebastian; Wylie, Brian J. N.; Nash, Rupert W.; Coveney, Peter V.

    2018-01-01

    Drug targeting promises to substantially enhance future therapies, for example through the focussing of chemotherapeutic drugs at the site of a tumor, thus reducing the exposure of healthy tissue to unwanted damage. Promising work on the steering of medication in the human body employs magnetic fields acting on nanoparticles made of paramagnetic materials. We develop a computational tool to aid in the optimization of the physical parameters of these particles and the magnetic configuration, estimating the fraction of particles reaching a given target site in a large patient-specific vascular system for different physiological states (heart rate, cardiac output, etc.). We demonstrate the excellent computational performance of our model by its application to the simulation of paramagnetic-nanoparticle-laden flows in a circle of Willis geometry obtained from an MRI scan. The results suggest a strong dependence of the particle density at the target site on the strength of the magnetic forcing and the velocity of the background fluid flow. PMID:29725303

  11. Modeling Patient-Specific Magnetic Drug Targeting Within the Intracranial Vasculature.

    PubMed

    Patronis, Alexander; Richardson, Robin A; Schmieschek, Sebastian; Wylie, Brian J N; Nash, Rupert W; Coveney, Peter V

    2018-01-01

    Drug targeting promises to substantially enhance future therapies, for example through the focussing of chemotherapeutic drugs at the site of a tumor, thus reducing the exposure of healthy tissue to unwanted damage. Promising work on the steering of medication in the human body employs magnetic fields acting on nanoparticles made of paramagnetic materials. We develop a computational tool to aid in the optimization of the physical parameters of these particles and the magnetic configuration, estimating the fraction of particles reaching a given target site in a large patient-specific vascular system for different physiological states (heart rate, cardiac output, etc.). We demonstrate the excellent computational performance of our model by its application to the simulation of paramagnetic-nanoparticle-laden flows in a circle of Willis geometry obtained from an MRI scan. The results suggest a strong dependence of the particle density at the target site on the strength of the magnetic forcing and the velocity of the background fluid flow.

  12. Optical measurements and analytical modeling of magnetic field generated in a dielectric target

    NASA Astrophysics Data System (ADS)

    Yafeng, BAI; Shiyi, ZHOU; Yushan, ZENG; Yihan, LIANG; Rong, QI; Wentao, LI; Ye, TIAN; Xiaoya, LI; Jiansheng, LIU

    2018-01-01

    Polarization rotation of a probe pulse by the target is observed with the Faraday rotation method in the interaction of an intense laser pulse with a solid target. The rotation of the polarization plane of the probe pulse may result from a combined action of fused silica and diffused electrons. After the irradiation of the main pulse, the rotation angle changed significantly and lasted ∼2 ps. These phenomena may imply a persistent magnetic field inside the target. An analytical model is developed to explain the experimental observation. The model indicates that a strong toroidal magnetic field is induced by an energetic electron beam. Meanwhile, an ionization channel is observed in the shadowgraph and extends at the speed of light after the irradiation of the main beam. The formation of this ionization channel is complex, and a simple explanation is given.
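
    For a rough sense of the diagnostic, the sketch below inverts the standard Faraday relation theta = V*B*L to estimate a line-averaged field from a measured rotation angle; the Verdet constant and path length are assumed example values, not those of this experiment or its analytical model.

    ```python
    # Estimate a line-averaged magnetic field from a Faraday rotation angle.
    # Verdet constant and path length are assumed example values.
    import numpy as np

    def field_from_rotation(theta_rad, verdet_rad_per_T_m, path_m):
        """Invert theta = V * B * L for the line-averaged field B."""
        return theta_rad / (verdet_rad_per_T_m * path_m)

    theta = np.radians(2.0)  # hypothetical measured rotation of 2 degrees
    B = field_from_rotation(theta, verdet_rad_per_T_m=3.7, path_m=100e-6)
    print(f"implied line-averaged B ~ {B:.1f} T")
    ```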

  13. New support vector machine-based method for microRNA target prediction.

    PubMed

    Li, L; Gao, Q; Mao, X; Cao, Y

    2014-06-09

    MicroRNA (miRNA) plays important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification, but these methods still have limitations with respect to their sensitivity and accuracy. We therefore developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model uses information on two binding sites (primary and secondary) with a radial basis function kernel as a similarity measure for the SVM features; the features are categorized as structural, thermodynamic, and sequence-conservation features. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provide an efficient tool for human miRNA target-gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA target prediction, and its performance can be further improved by providing more training examples.
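
    A generic version of such an RBF-kernel SVM workflow is sketched below with random stand-in features; it shows the mechanics only and is not the paper's trained model or feature set.

    ```python
    # Generic RBF-kernel SVM workflow on made-up numeric site features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))  # 6 made-up site features per candidate
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy:", round(scores.mean(), 3))
    ```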

  14. Modelling PET radionuclide production in tissue and external targets using Geant4

    NASA Astrophysics Data System (ADS)

    Amin, T.; Infantino, A.; Lindsay, C.; Barlow, R.; Hoehr, C.

    2017-07-01

    The Proton Therapy Facility at TRIUMF provides 74 MeV protons, extracted from a 500 MeV H- cyclotron, for ocular melanoma treatments. During treatment, positron-emitting radionuclides such as 11C, 15O and 13N are produced in patient tissue. Using PET scanners, the isotopic activity distribution can be measured for in-vivo range verification. A second cyclotron, the TR13, delivers 13 MeV protons onto liquid targets for the production of PET radionuclides such as 18F, 13N or 68Ga for medical applications. The aim of this work was to validate Geant4 against FLUKA and experimental measurements for production of the above-mentioned isotopes using the two cyclotrons. The results show variable degrees of agreement. For proton therapy, the proton-range agreement was within 2 mm for 11C activity, whereas 13N disagreed. For liquid targets at the TR13, the average absolute deviation ratio between FLUKA and experiment was 1.9±2.7, whereas the average absolute deviation ratio between Geant4 and experiment was 0.6±0.4. This is due to the uncertainties present in experimentally determined reaction cross sections.
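
    One plausible reading of the "average absolute deviation ratio" is mean(|simulated - measured| / measured) over the isotope yields; the sketch below computes that quantity for invented placeholder values, and the paper's exact definition may differ.

    ```python
    # Average absolute deviation ratio between simulated and measured yields.
    import numpy as np

    def avg_abs_deviation_ratio(simulated, measured):
        simulated, measured = np.asarray(simulated, float), np.asarray(measured, float)
        ratios = np.abs(simulated - measured) / measured
        return ratios.mean(), ratios.std()

    sim = [1.2, 0.8, 2.1]   # hypothetical simulated saturation yields
    exp = [1.0, 1.1, 1.5]   # hypothetical measured saturation yields
    print(avg_abs_deviation_ratio(sim, exp))
    ```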

  15. Identifying Molecular Targets for Chemoprevention in a Rat Model

    DTIC Science & Technology

    2005-12-01

    … In addition, it induces high levels of oxidative damage in the target tissues. The scope of this research includes: 1) Generation of a rat model, 2) Analysis

  16. A Brownian Bridge Movement Model to Track Mobile Targets

    DTIC Science & Technology

    2016-09-01

    … breakout of Chinese forces in the South China Sea. Probability heat maps, depicting the probability of a target location at discrete times, are … To achieve a higher probability of detection, it is more effective to have sensors cover a wider area at fewer discrete points in time than to have a … greater number of discrete looks using sensors covering smaller areas. Subject terms: Brownian bridge movement models, unmanned sensors.
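
    The core of a Brownian bridge movement model is the conditional position density between two known fixes; a minimal 1-D sketch with illustrative parameters is given below.

    ```python
    # 1-D Brownian bridge: Gaussian around the linear interpolation of two fixes,
    # with variance sigma_m**2 * t * (T - t) / T. Parameter values are illustrative.
    import numpy as np

    def bridge_density(x, t, start, end, T, sigma_m):
        """Position density at time t in (0, T), conditioned on the two fixes."""
        mean = start + (t / T) * (end - start)
        var = sigma_m ** 2 * t * (T - t) / T
        return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    x = np.linspace(-5, 15, 201)
    p = bridge_density(x, t=3.0, start=0.0, end=10.0, T=6.0, sigma_m=1.5)
    print("most probable position near", x[np.argmax(p)])
    ```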

  17. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    Small target detection in high-resolution remotely sensed data is difficult, and the limitations of existing methods have made it a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characteristic model of information perception that exploits the fly system's advantages of fast and accurate small target detection in complex and varied natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing prevailing simulations of the mechanisms behind fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the information acquisition, compression, and fusion mechanisms of the fly visual system, the function of the pool cell, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  18. An analytical approach of thermodynamic behavior in a gas target system on a medical cyclotron.

    PubMed

    Jahangiri, Pouyan; Zacchia, Nicholas A; Buckley, Ken; Bénard, François; Schaffer, Paul; Martinez, D Mark; Hoehr, Cornelia

    2016-01-01

    An analytical model has been developed to study the thermo-mechanical behavior of gas targets used to produce medical isotopes, assuming that the system reaches steady-state. It is based on an integral analysis of the mass and energy balance of the gas-target system, the ideal gas law, and the deformation of the foil. The heat transfer coefficients for different target bodies and gases have been calculated. Excellent agreement is observed between experiments performed at TRIUMF's 13 MeV cyclotron and the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
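
    A highly simplified sketch in the spirit of such a steady-state balance is given below: deposited beam power leaves through the walls via an effective heat-transfer coefficient, fixing a mean gas temperature, and the ideal gas law then gives the pressure. It ignores foil deformation, and all numbers are assumptions rather than the paper's fitted values.

    ```python
    # Steady-state gas temperature and pressure from a simple energy balance
    # plus the ideal gas law; coefficient, geometry and fill values are assumed.

    R = 8.314  # J/(mol K)

    def steady_state_pressure(beam_power_w, h_w_m2k, wall_area_m2,
                              wall_temp_k, n_mol, volume_m3):
        gas_temp = wall_temp_k + beam_power_w / (h_w_m2k * wall_area_m2)
        pressure_pa = n_mol * R * gas_temp / volume_m3
        return gas_temp, pressure_pa

    T, P = steady_state_pressure(beam_power_w=300.0, h_w_m2k=500.0,
                                 wall_area_m2=2e-3, wall_temp_k=300.0,
                                 n_mol=0.02, volume_m3=5e-6)
    print(f"gas temperature ~ {T:.0f} K, pressure ~ {P/1e5:.1f} bar")
    ```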

  19. Targeting Neuroblastoma Cell Surface Proteins: Recommendations for Homology Modeling of hNET, ALK, and TrkB.

    PubMed

    Haddad, Yazan; Heger, Zbyněk; Adam, Vojtech

    2017-01-01

    Targeted therapy is a promising approach for treatment of neuroblastoma, as is evident from the large number of targeting agents employed in clinical practice today. In the absence of known crystal structures, researchers rely on homology modeling to construct template-based theoretical structures for drug design and testing. Here, we discuss three candidate cell surface proteins that are suitable for homology modeling: human norepinephrine transporter (hNET), anaplastic lymphoma kinase (ALK), and neurotrophic tyrosine kinase receptor 2 (NTRK2 or TrkB). When choosing templates, both sequence identity and structure quality are important for homology modeling and pose the first of many challenges in the modeling process. Homology modeling of hNET can be improved using template models of dopamine and serotonin transporters instead of the leucine transporter (LeuT). The extracellular domains of ALK and TrkB are yet to be exploited by homology modeling. There are several idiosyncrasies that require direct attention throughout the process of model construction, evaluation and refinement. Shifts/gaps in the alignment between the template and target, backbone outliers and side-chain rotamer outliers are among the main sources of physical errors in the structures. Low-conservation regions can be refined with loop modeling methods. Residue hydrophobicity, accessibility to bound metals, or glycosylation can aid in model refinement. We recommend resolving these idiosyncrasies as part of "good modeling practice" to obtain the highest-quality model. Decreasing physical errors in protein structures plays a major role in the development of targeting agents and in understanding chemical interactions at the molecular level.

  20. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitored area, this approach requires many cameras, which leads to overlapping coverage, higher cost, and considerable waste. To reduce monitoring cost and address the difficult problem of finding, identifying and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method performs well.

  1. Cratering and penetration experiments in Teflon targets at velocities from 1 to 7 km/s

    NASA Technical Reports Server (NTRS)

    Hoerz, Friedrich; Bernhard, Ronald P.; Cintala, Mark J.; See, Thomas H.

    1995-01-01

    Approximately 20 sq m of protective thermal blankets, largely composed of Teflon, were retrieved from the Long Duration Exposure Facility (LDEF) after the spacecraft had spent approximately 5.7 years in space. Examination of these blankets revealed that they contained thousands of hypervelocity impact features ranging from micron-sized craters to penetration holes several millimeters in diameter. We conducted impact experiments in an effort to reproduce such features and to -- hopefully -- understand the relationships between projectile size and the resulting crater or penetration-hole diameter over a wide range of impact velocity. Such relationships are needed to derive the size- and mass-frequency distribution and flux of natural and man-made particles in low-Earth orbit. Powder propellant and light-gas guns were used to launch soda-lime glass spheres of 3.175 mm (1/8 inch) nominal diameter (Dp) into pure Teflon FEP targets at velocities ranging from 1 to 7 km/s. Target thickness (T) was varied over more than three orders of magnitude from infinite halfspace targets (Dp/T less than 0.1) to very thin films (Dp/T greater than 100). Cratering and penetration of massive Teflon targets is dominated by brittle failure and the development of extensive spall zones at the target's front and, if penetrated, the target's rear side. Mass removal by spallation at the back side of Teflon targets may be so severe that the absolute penetration-hole diameter (Dh) can become larger than that of a standard crater (Dc) at relative target thicknesses of Dp/T = 0.6-0.9. The crater diameter in infinite halfspace Teflon targets increases -- at otherwise constant impact conditions -- with encounter velocity by a factor of V^0.44. In contrast, the penetration-hole size in very thin foils (Dp/T greater than 50) is essentially unaffected by impact velocity. Penetrations at target thicknesses intermediate to these extremes will scale with variable exponents of V. Our experimental matrix is

  2. Cratering and penetration experiments in Teflon targets at velocities from 1 to 7 km/s

    NASA Astrophysics Data System (ADS)

    Hoerz, Friedrich; Bernhard, Ronald P.; Cintala, Mark J.; See, Thomas H.

    1995-02-01

    Approximately 20 sq m of protective thermal blankets, largely composed of Teflon, were retrieved from the Long Duration Exposure Facility (LDEF) after the spacecraft had spent approximately 5.7 years in space. Examination of these blankets revealed that they contained thousands of hypervelocity impact features ranging from micron-sized craters to penetration holes several millimeters in diameter. We conducted impact experiments in an effort to reproduce such features and to -- hopefully -- understand the relationships between projectile size and the resulting crater or penetration-hole diameter over a wide range of impact velocity. Such relationships are needed to derive the size- and mass-frequency distribution and flux of natural and man-made particles in low-Earth orbit. Powder propellant and light-gas guns were used to launch soda-lime glass spheres of 3.175 mm (1/8 inch) nominal diameter (Dp) into pure Teflon FEP targets at velocities ranging from 1 to 7 km/s. Target thickness (T) was varied over more than three orders of magnitude from infinite halfspace targets (Dp/T less than 0.1) to very thin films (Dp/T greater than 100). Cratering and penetration of massive Teflon targets is dominated by brittle failure and the development of extensive spall zones at the target's front and, if penetrated, the target's rear side. Mass removal by spallation at the back side of Teflon targets may be so severe that the absolute penetration-hole diameter (Dh) can become larger than that of a standard crater (Dc) at relative target thicknesses of Dp/T = 0.6-0.9. The crater diameter in infinite halfspace Teflon targets increases -- at otherwise constant impact conditions -- with encounter velocity by a factor of V^0.44. In contrast, the penetration-hole size in very thin foils (Dp/T greater than 50) is essentially unaffected by impact velocity. Penetrations at target thicknesses intermediate to these extremes will scale with variable exponents of V. Our experimental matrix is

  3. Morphological and compositional study of 238U thin film targets for nuclear experiments

    NASA Astrophysics Data System (ADS)

    Sibbens, Goedele; Ernstberger, Markus; Gouder, Thomas; Marouli, Maria; Moens, André; Seibert, Alice; Vanleeuw, David; Zúñiga, Martin Vargas; Wiss, Thierry; Zampella, Mariavittoria; Zuleger, Evelyn

    2018-05-01

    The uncertainty in neutron cross section values strongly depends on the quality and characteristics of the deposited actinide films which are used as "targets" in the nuclear experiments. Until recently, at the Joint Research Centre in Geel (JRC-Geel), mass and areal densities of actinide layers were determined by measuring activity (using alpha-particle counting), isotopic composition (using thermal ionisation mass spectrometry) and diameter. In this study a series of 238U deposits, prepared by molecular plating and vacuum deposition on different substrates, were characterized with additional non-destructive and destructive analysis techniques. The quality of the deposits was investigated by autoradiography, high-resolution alpha-particle spectrometry, and scanning electron microscopy. The elemental composition was determined by x-ray photoelectron spectroscopy and inductively coupled plasma mass spectrometry. The latter technique was also applied on the U3O8 starting material and the converted UF4 powder. This paper compares the quality and morphology of deposited 238U films prepared by molecular plating and vacuum deposition on various backings, including their elemental composition determined by different characterization techniques. Also discussed are problems in target preparation and characterization.

  4. RNAi Experiments in D. melanogaster: Solutions to the Overlooked Problem of Off-Targets Shared by Independent dsRNAs

    PubMed Central

    Seinen, Erwin; Burgerhof, Johannes G. M.; Jansen, Ritsert C.; Sibon, Ody C. M.

    2010-01-01

    Background RNAi technology is widely used to downregulate specific gene products. Investigating the phenotype induced by downregulation of gene products provides essential information about the function of the specific gene of interest. When RNAi is applied in Drosophila melanogaster or Caenorhabditis elegans, large dsRNAs are often used. One of the drawbacks of RNAi technology is that unwanted gene products with sequence similarity to the gene of interest can be downregulated too. To verify the outcome of an RNAi experiment and to avoid these unwanted off-target effects, an additional non-overlapping dsRNA can be used to downregulate the same gene. However, it has never been tested whether this approach is sufficient to reduce the risk of off-targets. Methodology We created a novel tool to analyse the occurrence of off-target effects in Drosophila and analyzed 99 randomly chosen genes. Principal Findings Here we show that nearly all genes contain non-overlapping internal sequences that nonetheless overlap in a common off-target gene. Conclusion Based on our in silico findings, off-target effects should not be ignored, and our on-line tool enables the identification of two RNA interference constructs, free of overlapping off-targets, from any gene of interest. PMID:20957038
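
    The shared off-target check can be illustrated by counting k-mers that two constructs have in common with an unintended transcript; the toy sketch below uses 19-mers and placeholder sequences, whereas a real check would scan a transcriptome.

    ```python
    # Toy check for k-mers shared between two dsRNA constructs and an unintended
    # transcript; sequences are placeholders, not real Drosophila genes.

    def kmers(seq, k=19):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def shared_offtarget(dsrna_a, dsrna_b, other_transcript, k=19):
        """k-mers that each construct shares with an unintended transcript."""
        return (kmers(dsrna_a, k) & kmers(other_transcript, k),
                kmers(dsrna_b, k) & kmers(other_transcript, k))

    a = "AUGGCUACGUAGCUAGCUAGGCAUCGAUCGUAGCUAGC"
    b = "GCUAGGCAUCGAUCGUAGCUAGCAAUGCUAGCUAGGCA"
    t = "UUUGCUAGGCAUCGAUCGUAGCUAGCAAAUUU"
    hits_a, hits_b = shared_offtarget(a, b, t)
    print("construct A shared 19-mers:", len(hits_a))
    print("construct B shared 19-mers:", len(hits_b))
    ```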

  5. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  6. Grants4Targets - an innovative approach to translate ideas from basic research into novel drugs.

    PubMed

    Lessl, Monika; Schoepe, Stefanie; Sommer, Anette; Schneider, Martin; Asadullah, Khusru

    2011-04-01

    Collaborations between industry and academia are steadily gaining importance. To combine expertise, Bayer Healthcare has set up a novel open innovation approach called Grants4Targets. Ideas on novel drug targets can easily be submitted to http://www.grants4targets.com. After a review process, grants are provided to perform focused experiments to further validate the proposed targets. In addition to financial support, specific know-how on target validation and drug discovery is provided: experienced scientists are nominated as project partners and, depending on the project, tools or specific models are provided. Around 280 applications have been received and 41 projects granted. In our experience, this type of bridging fund, combined with joint efforts, provides a valuable tool to foster drug discovery collaborations. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie

    2007-01-01

    Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the autoregressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification and tracking, whereas target localization requires collaboration between wireless sensor nodes to improve accuracy and robustness. To carry out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, practically feasible algorithms such as the ARMA model and the 2-D integer lifting wavelet transform are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in the distributed P2P signal processing framework, considering the tradeoff between accuracy and energy consumption. Finally, a real-world target tracking experiment is illustrated. Results from experimental implementations demonstrate that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.
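
    A lightweight autoregressive fit of the sort a resource-limited node might run is sketched below using plain least squares on a synthetic signal; the AR order and data are illustrative and this is not the paper's ARMA implementation.

    ```python
    # Least-squares AR fit to a synthetic sensor signal.
    import numpy as np

    def fit_ar(signal, order=2):
        """Least-squares fit of AR coefficients: x[t] ~ sum_k a_k * x[t-k]."""
        x = np.asarray(signal, float)
        rows = [x[order - k - 1: len(x) - k - 1] for k in range(order)]
        X = np.column_stack(rows)  # lagged regressors
        y = x[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    rng = np.random.default_rng(3)
    sig = np.zeros(500)
    for t in range(2, 500):  # synthetic AR(2) process
        sig[t] = 0.6 * sig[t - 1] - 0.3 * sig[t - 2] + rng.normal(scale=0.1)
    print("recovered AR coefficients:", fit_ar(sig, order=2).round(2))
    ```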

  8. PROPERTIES OF 42 SOLAR-TYPE KEPLER TARGETS FROM THE ASTEROSEISMIC MODELING PORTAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalfe, T. S.; Mathur, S.; Creevey, O. L.

    2014-10-01

    Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal. We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass, and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.

  9. Targetable vulnerabilities in T- and NK-cell lymphomas identified through preclinical models.

    PubMed

    Ng, Samuel Y; Yoshida, Noriaki; Christie, Amanda L; Ghandi, Mahmoud; Dharia, Neekesh V; Dempster, Joshua; Murakami, Mark; Shigemori, Kay; Morrow, Sara N; Van Scoyk, Alexandria; Cordero, Nicolas A; Stevenson, Kristen E; Puligandla, Maneka; Haas, Brian; Lo, Christopher; Meyers, Robin; Gao, Galen; Cherniack, Andrew; Louissaint, Abner; Nardi, Valentina; Thorner, Aaron R; Long, Henry; Qiu, Xintao; Morgan, Elizabeth A; Dorfman, David M; Fiore, Danilo; Jang, Julie; Epstein, Alan L; Dogan, Ahmet; Zhang, Yanming; Horwitz, Steven M; Jacobsen, Eric D; Santiago, Solimar; Ren, Jian-Guo; Guerlavais, Vincent; Annis, D Allen; Aivado, Manuel; Saleh, Mansoor N; Mehta, Amitkumar; Tsherniak, Aviad; Root, David; Vazquez, Francisca; Hahn, William C; Inghirami, Giorgio; Aster, Jon C; Weinstock, David M; Koch, Raphael

    2018-05-22

    T- and NK-cell lymphomas (TCL) are a heterogeneous group of lymphoid malignancies with poor prognosis. In contrast to B-cell and myeloid malignancies, there are few preclinical models of TCLs, which has hampered the development of effective therapeutics. Here we establish and characterize preclinical models of TCL. We identify multiple vulnerabilities that are targetable with currently available agents (e.g., inhibitors of JAK2 or IKZF1) and demonstrate proof-of-principle for biomarker-driven therapies using patient-derived xenografts (PDXs). We show that MDM2 and MDMX are targetable vulnerabilities within TP53-wild-type TCLs. ALRN-6924, a stapled peptide that blocks interactions between p53 and both MDM2 and MDMX, has potent in vitro activity and superior in vivo activity across 8 different PDX models compared to the standard-of-care agent romidepsin. ALRN-6924 induced a complete remission in a patient with TP53-wild-type angioimmunoblastic T-cell lymphoma, demonstrating the potential for rapid translation of discoveries from subtype-specific preclinical models.

  10. TARGETED DELIVERY OF INHALED PHARMACEUTICALS USING AN IN SILICO DOSIMETRY MODEL

    EPA Science Inventory

    We present an in silico dosimetry model which can be used for inhalation toxicology (risk assessment of inhaled air pollutants) and aerosol therapy (targeted delivery of inhaled drugs). This work presents scientific and clinical advances beyond the development of the original in...

  11. Reduced-order model for underwater target identification using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ramesh, Sai Sudha; Lim, Kian Meng

    2017-03-01

    Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine counter measures (MCM). Although there exist several techniques to identify target properties based on SONAR images and acoustic signatures, these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the applications of proper orthogonal decomposition (POD) technique in capturing dominant features of a set of scattered pressure signals, and subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as its location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on 2D acoustic model, the method can be easily extended to 3D models and thereby enables cost-effective representations of large-scale data.
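
    The POD step can be illustrated with an SVD of a snapshot matrix: the leading left singular vectors serve as modes and each snapshot reduces to a short coefficient vector usable as a feature. The sketch below uses synthetic random snapshots purely to show the mechanics.

    ```python
    # POD of a snapshot matrix via SVD; snapshots here are synthetic random data.
    import numpy as np

    rng = np.random.default_rng(7)
    snapshots = rng.normal(size=(400, 60))           # 400 samples x 60 snapshots
    snapshots -= snapshots.mean(axis=1, keepdims=True)

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99) + 1)       # modes capturing 99% energy
    modes = U[:, :r]                                 # POD modes
    coeffs = modes.T @ snapshots                     # reduced coordinates per snapshot
    print(f"kept {r} modes; coefficient matrix shape: {coeffs.shape}")
    ```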

  12. New prospects in fixed target searches for dark forces with the SeaQuest experiment at Fermilab

    DOE PAGES

    Gardner, S.; Holt, R. J.; Tadepalli, A. S.

    2016-06-10

    An intense 120 GeV proton beam incident on an extremely long iron target generates enormous numbers of light-mass particles that also decay within that target. If one of these particles decays to a final state with a hidden gauge boson, or if such a particle is produced as a result of the initial collision, then that weakly interacting hidden-sector particle may traverse the remainder of the target and be detected downstream through its possible decay to an e+e-, μ+μ-, or π+π- final state. These conditions can be realized through an extension of the SeaQuest experiment at Fermilab, and in this initial investigation we consider how it can serve as an ultrasensitive probe of hidden vector gauge forces, both Abelian and non-Abelian. Here a light, weakly coupled hidden sector may well explain the dark matter established through astrophysical observations, and the proposed search can provide tangible evidence for its existence or, alternatively, constrain a "sea" of possibilities.

  13. Modeling and numerical analysis of a magneto-inertial fusion concept with the target created through FRC merging

    NASA Astrophysics Data System (ADS)

    Li, Chenguang; Yang, Xianjun

    2016-10-01

    The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on the target plasma created through the collision merging of two oppositely translating field reversed configuration plasmas, which is then compressed by the imploding liner driven by the pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, resulting in the typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model, taking into account the alpha particle heating and the bremsstrahlung radiation loss. The compression on the target can be 2D cylindrical or 2.4D with the additive axial contraction taken into account. The dynamics of the liner compression and fusion burning are simulated and the optimum fusion gain and the associated target parameters are predicted. The scientific breakeven could be achieved at the optimized conditions.

  14. Modeling to Support the Development of Habitat Targets for Piping Plovers on the Missouri River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buenau, Kate E.

    2015-05-05

    Report on modeling and analyses done in support of developing quantitative sandbar habitat targets for piping plovers, including assessment of reference, historical, and dams-present-but-not-operated conditions, and of habitat construction calibrated to meet population viability targets.

  15. The ratio of microwaves to X-rays in solar flares: The case for the thick target model

    NASA Technical Reports Server (NTRS)

    Lu, Edward T.; Petrosian, Vahe

    1988-01-01

    The expected ratio of synchrotron microwave radiation to bremsstrahlung X-rays for thick target, thin target, and multithermal solar flare models is calculated. The calculations take into account the variation of the microwave to X-ray ratio with X-ray spectral index. The theoretical results are compared with observed ratios for a sample of 51 solar flares with well-known spectral index. From this it is concluded that the nonthermal thick target model with a loop length of order 10^9 cm and a magnetic field of 500 ± 200 G provides the best fit to the data. The thin target and multithermal models require unreasonably large density or pressure and/or low magnetic field to match the data.

  16. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a great challenging task. The main challenge for target tracking is to account for appearance change of an object, which submerges in the cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region at the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, therefore encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. At last, collaborative confidence values of particles are utilized to generate particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real-time and provides a higher accuracy than state of the art algorithms.

  17. Modeling of intense pulsed ion beam heated masked targets for extreme materials characterization

    DOE PAGES

    Barnard, John J.; Schenkel, Thomas

    2017-11-15

    Intense, pulsed ion beams locally heat materials and deliver dense electronic excitations that can induce material modifications and phase transitions. Material properties can potentially be stabilized by rapid quenching. Pulsed ion beams with pulse lengths of order ns have recently become available for materials processing. Here, we optimize mask geometries for local modification of materials by intense ion pulses. The goal is to rapidly excite targets volumetrically to the point where a phase transition or local lattice reconstruction is induced, followed by rapid cooling that stabilizes the desired material properties before the target is altered or damaged by, e.g., hydrodynamic expansion. By using a mask, the longitudinal dimension can be large compared to the transverse dimension, allowing the possibility of rapid transverse cooling. We performed HYDRA simulations that calculate peak temperatures for a series of excitation conditions and cooling rates of silicon targets with micro-structured masks and compare these to a simple analytical model. In conclusion, the model gives scaling laws that can guide the design of targets over a wide range of pulsed ion beam parameters.

  18. Experiments and synthesis of bone-targeting epirubicin with the water-soluble macromolecular drug delivery systems of oxidized-dextran.

    PubMed

    Yu, Li; Cai, Lin; Hu, Hao; Zhang, Yi

    2014-05-01

    Epirubicin (EPI) is a broad spectrum antineoplastic drug, commonly used as a chemotherapy method to treat osteosarcoma. However, its application has been limited by many side-effects. Therefore, targeted drug delivery to bone has been the aim of current anti-bone-tumor drug studies. Due to the exceptional affinity of Bisphosphonates (BP) to bone, 1-amino-ethylene-1, 1-dephosphate acid (AEDP) was chosen as the bone targeting moiety for water-soluble macromolecular drug delivery systems of oxidized-dextran (OXD) to transport EPI to bone in this article. The bone targeting drug of AEDP-OXD-EPI was designed for the treatment of malignant bone tumors. The successful conjugation of AEDP-OXD-EPI was confirmed by analysis of FTIR and (1)H-NMR spectra. To study the bone-seeking potential of AEDP-OXD-EPI, an in vitro hydroxyapatite (HAp) binding assay and an in vivo experiment of bone-targeting capacity were established. The effectiveness of AEDP-OXD-EPI was demonstrated by inducing apoptosis and necrosis of MG-63 tumor cell line. The obtained experimental data indicated that AEDP-OXD-EPI is an ideal bone-targeting anti-tumor drug.

  19. Reprint of: Reaction measurements with the Jet Experiments in Nuclear Structure and Astrophysics (JENSA) gas jet target

    NASA Astrophysics Data System (ADS)

    Chipps, K. A.

    2018-01-01

    Explosive stellar environments are sometimes driven by nuclear reactions on short-lived, radioactive nuclei. These reactions often drive the stellar explosion, alter the observable light curves produced, and dictate the final abundances of the isotopes created. Unfortunately, many reaction rates at stellar temperatures cannot be directly measured in the laboratory, due to the physical limitations of ultra-low cross sections and high background rates. An additional complication arises because many of the important reactions involve radioactive nuclei which have lifetimes too short to be made into a target. As such, direct reactions require very intense and pure beams of exotic nuclei. Indirect approaches with both stable and radioactive beams can, however, provide crucial information on the nuclei involved in these astrophysical reactions. A major development toward both direct and indirect studies of nuclear reactions rates is the commissioning of the Jet Experiments in Nuclear Structure and Astrophysics (JENSA) supersonic gas jet target. The JENSA system provides a pure, homogeneous, highly localized, dense, and robust gaseous target for radioactive ion beam studies. Charged-particle reactions measurements made with gas jet targets can be cleaner and display better resolution than with traditional targets. With the availability of pure and localized gas jet targets in combination with developments in exotic radioactive ion beams and next-generation detector systems, the range of reaction studies that are experimentally possible is vastly expanded. Various representative cases will be discussed.

  20. Targeting CYP51 for drug design by the contributions of molecular modeling.

    PubMed

    Rabelo, Vitor W; Santos, Taísa F; Terra, Luciana; Santana, Marcos V; Castro, Helena C; Rodrigues, Carlos R; Abreu, Paula A

    2017-02-01

    CYP51 is an enzyme of sterol biosynthesis pathway present in animals, plants, protozoa and fungi. This enzyme is described as an important drug target that is still of interest. Therefore, in this work, we reviewed the structure and function of CYP51 and explored the molecular modeling approaches for the development of new antifungal and antiprotozoans that target this enzyme. Crystallographic structures of CYP51 of some organisms have already been described in the literature, which enable the construction of homology models of other organisms' enzymes and molecular docking studies of new ligands. The binding mode and interactions of some new series of azoles with antifungal or antiprotozoan activities has been studied and showed important residues of the active site. Molecular modeling is an important tool to be explored for the discovery and optimization of CYP51 inhibitors with better activities, pharmacokinetics, and toxicological profiles. © 2016 Société Française de Pharmacologie et de Thérapeutique.

  1. Extrapolation of vertical target motion through a brief visual occlusion.

    PubMed

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
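
    A schematic threshold-type response model of the kind mentioned above is sketched below: the observer assumes an internal acceleration (here gravity), keeps estimating the time remaining to arrival, and presses after a processing delay once that estimate drops below a threshold. All parameters are illustrative assumptions, not the fitted values.

    ```python
    # Schematic threshold model of interception timing for 1g, 0g and -1g targets.
    import numpy as np

    G = 9.81  # m/s^2

    def arrival_time(d, v0, a):
        """Actual time for the target to cover distance d with initial speed v0."""
        if abs(a) < 1e-9:
            return d / v0
        return (-v0 + np.sqrt(v0**2 + 2 * a * d)) / a

    def predicted_press(d, v0, a_true, a_assumed, threshold=0.15, delay=0.1, dt=1e-4):
        """Timing error (press time minus arrival time) for the modeled observer."""
        t, x, v = 0.0, 0.0, v0
        while x < d:
            remaining = arrival_time(d - x, v, a_assumed)   # internal estimate
            if remaining <= threshold:
                return (t + delay) - arrival_time(d, v0, a_true)
            x += v * dt + 0.5 * a_true * dt**2
            v += a_true * dt
            t += dt
        return (t + delay) - arrival_time(d, v0, a_true)

    for label, a in (("1g", G), ("0g", 0.0), ("-1g", -G)):
        err = predicted_press(d=1.5, v0=6.0, a_true=a, a_assumed=G)
        print(f"{label} target: timing error = {err*1000:+.0f} ms")
    ```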

  2. Pathways to Mexico’s climate change mitigation targets: A multi-model analysis

    DOE PAGES

    Veysey, Jason; Octaviano, Claudia; Calvin, Katherine; ...

    2015-04-25

    Mexico’s climate policy sets ambitious national greenhouse gas (GHG) emission reduction targets—30% versus a business-as-usual baseline by 2020, 50% versus 2000 by 2050. However, these goals are at odds with recent energy and emission trends in the country. Both energy use and GHG emissions in Mexico have grown substantially over the last two decades. Here, we investigate how Mexico might reverse current trends and reach its mitigation targets by exploring results from energy system and economic models involved in the CLIMACAP-LAMP project. To meet Mexico’s emission reduction targets, all modeling groups agree that decarbonization of electricity is needed, along with changes in the transport sector, either to more efficient vehicles or a combination of more efficient vehicles and lower carbon fuels. These measures reduce GHG emissions as well as emissions of other air pollutants. The models find different energy supply pathways, with some solutions based on renewable energy and others relying on biomass or fossil fuels with carbon capture and storage. The economy-wide costs of deep mitigation could range from 2% to 4% of GDP in 2030, and from 7% to 15% of GDP in 2050. Our results suggest that Mexico has some flexibility in designing deep mitigation strategies, and that technological options could allow Mexico to achieve its emission reduction targets, albeit at a cost to the country.

  3. Metal powder absorptivity: Modeling and experiment

    DOE PAGES

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.; ...

    2016-08-10

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  4. Metal powder absorptivity: Modeling and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  5. Immunological Targeting of Tumor Initiating Prostate Cancer Cells

    DTIC Science & Technology

    2014-10-01

    clinically using well-accepted immuno-competent animal models. 2) Keywords: Prostate Cancer, Lymphocyte, Vaccine, Antibody 3) Overall Project Summary... castrate animals. Task 1: Identify and verify antigenic targets from Castrate Resistant Luminal Epithelial Cells (CRLEC) (months 1-16... animals per group will be processed to derive sufficient RNA for microarray analysis; the experiment will be repeated x 3. Microarray analysis will

  6. Linking neocortical, cognitive, and genetic variability in autism with alterations of brain plasticity: the Trigger-Threshold-Target model.

    PubMed

    Mottron, Laurent; Belleville, Sylvie; Rouleau, Guy A; Collignon, Olivier

    2014-11-01

    The phenotype of autism involves heterogeneous adaptive traits (strengths vs. disabilities), different domains of alterations (social vs. non-social), and various associated genetic conditions (syndromic vs. nonsyndromic autism). Three observations suggest that alterations in experience-dependent plasticity are an etiological factor in autism: (1) the main cognitive domains enhanced in autism are controlled by the most plastic cortical brain regions, the multimodal association cortices; (2) autism and sensory deprivation share several features of cortical and functional reorganization; and (3) genetic mutations and/or environmental insults involved in autism all appear to affect developmental synaptic plasticity, and mostly lead to its upregulation. We present the Trigger-Threshold-Target (TTT) model of autism to organize these findings. In this model, genetic mutations trigger brain reorganization in individuals with a low plasticity threshold, mostly within regions sensitive to cortical reallocations. These changes account for the cognitive enhancements and reduced social expertise associated with autism. Enhanced but normal plasticity may underlie non-syndromic autism, whereas syndromic autism may occur when a triggering mutation or event produces an altered plastic reaction, also resulting in intellectual disability and dysmorphism in addition to autism. Differences in the target of brain reorganization (perceptual vs. language regions) account for the main autistic subgroups. In light of this model, future research should investigate how individual and sex-related differences in synaptic/regional brain plasticity influence the occurrence of autism. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  7. Fluorescent CSC models evidence that targeted nanomedicines improve treatment sensitivity of breast and colon cancer stem cells.

    PubMed

    Gener, Petra; Gouveia, Luis Pleno; Sabat, Guillem Romero; de Sousa Rafael, Diana Fernandes; Fort, Núria Bergadà; Arranja, Alexandra; Fernández, Yolanda; Prieto, Rafael Miñana; Ortega, Joan Sayos; Arango, Diego; Abasolo, Ibane; Videira, Mafalda; Schwartz, Simo

    2015-11-01

    To study the efficacy of targeted nanomedicines in the marginal population of highly aggressive cancer stem cells (CSC), we have developed a novel in vitro fluorescent CSC model that allows us to visualize these cells in a heterogeneous population and to monitor CSC biological performance after therapy. In this model, the tdTomato reporter gene is driven by a CSC-specific (ALDH1A1) promoter and, contrary to other similar models, CSC differentiation and un-differentiation processes are not restrained, so longitudinal studies are feasible. We used this model for preclinical validation of poly[(d,l-lactide-co-glycolide)-co-PEG] (PLGA-co-PEG) micelles loaded with paclitaxel. Further, active targeting against CD44 and EGFR receptors was validated in breast and colon cancer cell lines. Accordingly, specific active targeting toward surface receptors enhances the performance of nanomedicines and sensitizes CSC to paclitaxel-based chemotherapy. Many current cancer therapies fail because they do not target cancer stem cells; this surviving population soon proliferates and differentiates into more cancer cells. In this article, the authors designed an in vitro cancer stem cell model to study the effects of active targeting using antibody-labeled micelles containing a chemotherapeutic agent. This new model should allow future testing of various drug/carrier platforms before the clinical phase. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling to better understand the effect of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  9. Cooling tower plume - model and experiment

    NASA Astrophysics Data System (ADS)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper describes a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in subsequent work.

  10. Multi-injector modeling of transverse combustion instability experiments

    NASA Astrophysics Data System (ADS)

    Shipley, Kevin J.

    Concurrent simulations and experiments are used to study combustion instabilities in a multiple injector element combustion chamber. The experiments employ a linear array of seven coaxial injector elements positioned atop a rectangular chamber. Different levels of instability are driven in the combustor by varying the operating and geometry parameters of the outer driving injector elements located near the chamber end-walls. The objectives of the study are to apply a reduced three-injector model to generate a computational test bed for the evaluation of injector response to transverse instability, to apply a full seven-injector model to investigate the inter-element coupling between injectors in response to transverse instability, and to further develop this integrated approach as a key element in a predictive methodology that relies heavily on subscale test and simulation. To measure the effects of the transverse wave on a central study injector element two opposing windows are placed in the chamber to allow optical access. The chamber is extensively instrumented with high-frequency pressure transducers. High-fidelity computational fluid dynamics simulations are used to model the experiment. Specifically three-dimensional, detached eddy simulations (DES) are used. Two computational approaches are investigated. The first approach models the combustor with three center injectors and forces transverse waves in the chamber with a wall velocity function at the chamber side walls. Different levels of pressure oscillation amplitudes are possible by varying the amplitude of the forcing function. The purpose of this method is to focus on the combustion response of the study element. In the second approach, all seven injectors are modeled and self-excited combustion instability is achieved. This realistic model of the chamber allows the study of inter-element flow dynamics, e.g., how the resonant motions in the injector tubes are coupled through the transverse pressure

  11. Kidney disease models: tools to identify mechanisms and potential therapeutic targets

    PubMed Central

    Bao, Yin-Wu; Yuan, Yuan; Chen, Jiang-Hua; Lin, Wei-Qiang

    2018-01-01

    Acute kidney injury (AKI) and chronic kidney disease (CKD) are worldwide public health problems affecting millions of people and have rapidly increased in prevalence in recent years. Due to the multiple causes of renal failure, many animal models have been developed to advance our understanding of human nephropathy. Among these experimental models, rodents have been extensively used to enable mechanistic understanding of kidney disease induction and progression, as well as to identify potential targets for therapy. In this review, we discuss AKI models induced by surgical operation and drugs or toxins, as well as a variety of CKD models (mainly genetically modified mouse models). Results from recent and ongoing clinical trials and conceptual advances derived from animal models are also explored. PMID:29515089

  12. Target Highlights in CASP9: Experimental Target Structures for the Critical Assessment of Techniques for Protein Structure Prediction

    PubMed Central

    Kryshtafovych, Andriy; Moult, John; Bartual, Sergio G.; Bazan, J. Fernando; Berman, Helen; Casteel, Darren E.; Christodoulou, Evangelos; Everett, John K.; Hausmann, Jens; Heidebrecht, Tatjana; Hills, Tanya; Hui, Raymond; Hunt, John F.; Jayaraman, Seetharaman; Joachimiak, Andrzej; Kennedy, Michael A.; Kim, Choel; Lingel, Andreas; Michalska, Karolina; Montelione, Gaetano T.; Otero, José M.; Perrakis, Anastassis; Pizarro, Juan C.; van Raaij, Mark J.; Ramelot, Theresa A.; Rousseau, Francois; Tong, Liang; Wernimont, Amy K.; Young, Jasmine; Schwede, Torsten

    2011-01-01

    One goal of the CASP Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction is to identify the current state of the art in protein structure prediction and modeling. A fundamental principle of CASP is blind prediction on a set of relevant protein targets, i.e. the participating computational methods are tested on a common set of experimental target proteins, for which the experimental structures are not known at the time of modeling. Therefore, the CASP experiment would not have been possible without broad support of the experimental protein structural biology community. In this manuscript, several experimental groups discuss the structures of the proteins which they provided as prediction targets for CASP9, highlighting structural and functional peculiarities of these structures: the long tail fibre protein gp37 from bacteriophage T4, the cyclic GMP-dependent protein kinase Iβ (PKGIβ) dimerization/docking domain, the ectodomain of the JTB (Jumping Translocation Breakpoint) transmembrane receptor, Autotaxin (ATX) in complex with an inhibitor, the DNA-Binding J-Binding Protein 1 (JBP1) domain essential for biosynthesis and maintenance of DNA base-J (β-D-glucosyl-hydroxymethyluracil) in Trypanosoma and Leishmania, a so far uncharacterized 73-residue domain from Ruminococcus gnavus with a fold typical for PDZ-like domains, a domain from the Phycobilisome (PBS) core-membrane linker (LCM) phycobiliprotein ApcE from Synechocystis, the Heat shock protein 90 (Hsp90) activators PFC0360w and PFC0270w from Plasmodium falciparum, and 2-oxo-3-deoxygalactonate kinase from Klebsiella pneumoniae. PMID:22020785

  13. A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle

    PubMed Central

    Pilditch, Toby D.

    2018-01-01

    In political campaigns, perceived candidate credibility influences the persuasiveness of messages. In campaigns aiming to influence people’s beliefs, micro-targeted campaigns (MTCs) that target specific voters using their psychological profile have become increasingly prevalent. It remains open how effective MTCs are, notably in comparison to population-targeted campaign strategies. Using an agent-based model, the paper applies recent insights from cognitive models of persuasion, extending them to the societal level in a novel framework for exploring political campaigning. The paper provides an initial treatment of the complex dynamics of population-level political campaigning in a psychologically informed manner. Model simulations show that MTCs can take advantage of the psychology of the electorate by targeting voters favourably disposed towards the candidate. Relative to broad campaigning, MTCs allow for efficient and adaptive management of complex campaigns. Findings show that disliked MTC candidates can beat liked population-targeting candidates, pointing to societal questions concerning campaign regulations. PMID:29634722

  14. A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle.

    PubMed

    Madsen, Jens Koed; Pilditch, Toby D

    2018-01-01

    In political campaigns, perceived candidate credibility influences the persuasiveness of messages. In campaigns aiming to influence people's beliefs, micro-targeted campaigns (MTCs) that target specific voters using their psychological profile have become increasingly prevalent. It remains open how effective MTCs are, notably in comparison to population-targeted campaign strategies. Using an agent-based model, the paper applies recent insights from cognitive models of persuasion, extending them to the societal level in a novel framework for exploring political campaigning. The paper provides an initial treatment of the complex dynamics of population-level political campaigning in a psychologically informed manner. Model simulations show that MTCs can take advantage of the psychology of the electorate by targeting voters favourably disposed towards the candidate. Relative to broad campaigning, MTCs allow for efficient and adaptive management of complex campaigns. Findings show that disliked MTC candidates can beat liked population-targeting candidates, pointing to societal questions concerning campaign regulations.
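
    Since both versions of this entry describe an agent-based simulation, a toy stand-in helps make the mechanism concrete. The sketch below is not the authors' model: voters hold a belief about a candidate and a perceived credibility for that candidate, messages nudge belief in proportion to credibility, the micro-targeted candidate contacts only voters already favourably disposed, and the broad candidate contacts a random subset. The comparison metric, belief gained per message sent, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_VOTERS = 10_000

belief = rng.uniform(0.0, 1.0, N_VOTERS)        # support for the candidate (0..1)
credibility = rng.uniform(0.0, 1.0, N_VOTERS)   # perceived credibility of the candidate

def run_campaign(belief, credibility, micro_targeted, n_rounds=20,
                 strength=0.05, favourable=0.5, broad_reach=0.3, seed=2):
    """Toy belief-update campaign. Returns final beliefs and total messages sent."""
    rng = np.random.default_rng(seed)
    b = belief.copy()
    messages = 0
    for _ in range(n_rounds):
        if micro_targeted:
            reached = b > favourable                      # target only favourably disposed voters
        else:
            reached = rng.random(b.size) < broad_reach    # blanket a random fraction of voters
        # persuasion scaled by perceived credibility, with diminishing returns near b = 1
        b[reached] += strength * credibility[reached] * (1.0 - b[reached])
        messages += int(reached.sum())
    return b, messages

for label, micro in [("broad", False), ("micro-targeted", True)]:
    final, sent = run_campaign(belief, credibility, micro)
    gain = final.mean() - belief.mean()
    print(f"{label:>14}: mean belief gain {gain:.4f} using {sent} messages "
          f"({1e4 * gain / sent:.3f} gain per 10k messages)")
```

    In this toy setup the micro-targeted strategy typically buys more belief gain per message, which is the kind of efficiency argument made above; whether a disliked targeted candidate can beat a liked broadcast candidate depends on details this sketch deliberately omits.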

  15. Numerical modeling of laser-driven ion acceleration from near-critical gas targets

    NASA Astrophysics Data System (ADS)

    Tatomirescu, Dragos; Vizman, Daniel; d’Humières, Emmanuel

    2018-06-01

    In the past two decades, laser-accelerated ion sources and their applications have been intensely researched. Recently, it has been shown through experiments that proton beams with characteristics comparable to those obtained with solid targets can be obtained from gaseous targets. By means of particle-in-cell simulations, this paper studies in detail the effects of a near-critical density gradient on ion and electron acceleration after the interaction with ultra high intensity lasers. We can observe that the peak density of the gas jet has a significant influence on the spectrum features. As the gas jet density increases, so does the peak energy of the central quasi-monoenergetic ion bunch due to the increase in laser absorption while at the same time having a broadening effect on the electron angular distribution.

  16. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface which in turn increases the pressure within the tank during the 5360 second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  17. Multiple lesion track structure model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.

    1992-01-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.

  18. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
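
    One way to provide the kind of "instant experience" described above (the lineup-style protocol here is my assumption in the spirit of the abstract, not a transcription of the paper) is to hide the observed residual plot among several residual plots simulated from the fitted model, so the viewer learns what such plots look like when the model assumptions actually hold.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Observed data from a hypothetical experiment (mildly heteroscedastic on purpose).
x = np.linspace(0, 10, 80)
y = 2.0 + 0.8 * x + rng.normal(scale=0.3 + 0.15 * x)

# Fit the linear normal model y = a + b*x + e, e ~ N(0, sigma^2).
b, a = np.polyfit(x, y, 1)          # polyfit returns [slope, intercept]
fitted = a + b * x
resid = y - fitted
sigma = resid.std(ddof=2)

# 3x3 grid: eight residual plots simulated from the fitted model, plus the real one
# in a random position. If the real panel stands out, an assumption is suspect.
fig, axes = plt.subplots(3, 3, figsize=(9, 9), sharex=True, sharey=True)
real_panel = rng.integers(9)
for k, ax in enumerate(axes.ravel()):
    if k == real_panel:
        r = resid                                  # the observed residuals
    else:
        y_sim = fitted + rng.normal(scale=sigma, size=x.size)
        b_s, a_s = np.polyfit(x, y_sim, 1)
        r = y_sim - (a_s + b_s * x)                # residuals refitted to simulated data
    ax.scatter(fitted, r, s=10)
    ax.axhline(0.0, lw=0.8)
fig.suptitle("Which residual-vs-fitted panel is the real data?")
plt.show()
print("Real data shown in panel", real_panel + 1)
```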

  19. Searching for animal models and potential target species for emerging pathogens: Experience gained from Middle East respiratory syndrome (MERS) coronavirus.

    PubMed

    Vergara-Alert, Júlia; Vidal, Enric; Bensaid, Albert; Segalés, Joaquim

    2017-06-01

    Emerging and re-emerging pathogens represent a substantial threat to public health, as demonstrated with numerous outbreaks over the past years, including the 2013-2016 outbreak of Ebola virus in western Africa. Coronaviruses are also a threat for humans, as evidenced in 2002/2003 with infection by the severe acute respiratory syndrome coronavirus (SARS-CoV), which caused more than 8000 human infections with a 10% fatality rate in 37 countries. Ten years later, a novel human coronavirus (Middle East respiratory syndrome coronavirus, MERS-CoV), associated with severe pneumonia, arose in the Kingdom of Saudi Arabia. As of December 2016, MERS had accounted for more than 1800 cases with a 35% fatality rate. Finding an animal model of disease is key to developing vaccines or antivirals against such emerging pathogens and to understanding their pathogenesis. Knowledge of the potential role of domestic livestock and other animal species in the transmission of pathogens is of importance to understand the epidemiology of the disease. Little is known about the MERS-CoV animal host range. In this paper, experimental data on potential hosts for MERS-CoV are reviewed. Advantages and limitations of different animal models are evaluated in relation to viral pathogenesis and transmission studies. Finally, the relevance of potential new target species is discussed.

  20. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects with potentially adverse consequences for the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effects on non-target organisms are compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear, or quadratic pattern, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.

  1. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects with potentially adverse consequences for the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effects on non-target organisms are compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear, or quadratic pattern, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325
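
    As a language-agnostic illustration of the prospective power analysis recommended above, the sketch below simulates zero-inflated negative binomial counts for a GM plot and its comparator in a randomized block design and estimates the power of a simple difference test for a chosen effect size. The distribution choices, parameter values, and use of a Mann-Whitney test are assumptions for the sketch, not elements of the published framework.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def zinb_counts(mean, k, p_zero, size):
    """Zero-inflated negative binomial counts with mean `mean`, dispersion `k`,
    and extra-zero probability `p_zero` (illustrative parameterization)."""
    p = k / (k + mean)                       # numpy's NB uses (n, p) with mean = n*(1-p)/p
    counts = rng.negative_binomial(k, p, size)
    counts[rng.random(size) < p_zero] = 0
    return counts

def one_trial(effect, n_blocks=8, base_mean=10.0, k=2.0, p_zero=0.2, block_sd=0.3):
    """Simulate one randomized block trial; return the p-value of a GM-vs-comparator test."""
    block = rng.normal(0.0, block_sd, n_blocks)          # shared block effects (log scale)
    comparator = zinb_counts(base_mean * np.exp(block), k, p_zero, n_blocks)
    gm = zinb_counts(base_mean * effect * np.exp(block), k, p_zero, n_blocks)
    return stats.mannwhitneyu(gm, comparator, alternative="two-sided").pvalue

def power(effect, n_sim=2000, alpha=0.05):
    return np.mean([one_trial(effect) < alpha for _ in range(n_sim)])

for effect in (0.5, 0.7, 1.0):   # multiplicative effect of the GM plant on abundance
    print(f"effect x{effect:.1f}: estimated power = {power(effect):.2f}")
```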

  2. Model fitting data from syllogistic reasoning experiments.

    PubMed

    Hattori, Masasi

    2016-12-01

    The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents data predicted by three signature probabilistic models of syllogistic reasoning and model-fitting results for each of a total of 12 experiments (N = 404) in the literature. The models are implemented in R, and their source code is also provided.

  3. Rodent Models of Experimental Endometriosis: Identifying Mechanisms of Disease and Therapeutic Targets

    PubMed Central

    Bruner-Tran, Kaylon L.; Mokshagundam, Shilpa; Herington, Jennifer L.; Ding, Tianbing; Osteen, Kevin G.

    2018-01-01

    Background: Although it has been more than a century since endometriosis was initially described in the literature, understanding the etiology and natural history of the disease has been challenging. However, the broad utility of murine and rat models of experimental endometriosis has enabled the elucidation of a number of potentially targetable processes which may otherwise promote this disease. Objective: To review a variety of studies utilizing rodent models of endometriosis to illustrate their utility in examining mechanisms associated with development and progression of this disease. Results: Use of rodent models of endometriosis has provided a much broader understanding of the risk factors for the initial development of endometriosis, the cellular pathology of the disease and the identification of potential therapeutic targets. Conclusion: Although there are limitations with any animal model, the variety of experimental endometriosis models that have been developed has enabled investigation into numerous aspects of this disease. Thanks to these models, our understanding of the early processes of disease development, the role of steroid responsiveness, inflammatory processes and the peritoneal environment has been advanced. More recent models have begun to shed light on how epigenetic alterations contribute to the molecular basis of this disease as well as the multiple comorbidities which plague many patients. Continued developments of animal models which aid in unraveling the mechanisms of endometriosis development provide the best opportunity to identify therapeutic strategies to prevent or regress this enigmatic disease.

  4. Predictive model of outcome of targeted nodal assessment in colorectal cancer.

    PubMed

    Nissan, Aviram; Protic, Mladjan; Bilchik, Anton; Eberhardt, John; Peoples, George E; Stojadinovic, Alexander

    2010-02-01

    Improvement in staging accuracy is the principal aim of targeted nodal assessment in colorectal carcinoma. Technical factors independently predictive of false negative (FN) sentinel lymph node (SLN) mapping should be identified to facilitate operative decision making. To define independent predictors of FN SLN mapping and to develop a predictive model that could support surgical decisions. Data was analyzed from 2 completed prospective clinical trials involving 278 patients with colorectal carcinoma undergoing SLN mapping. Clinical outcome of interest was FN SLN(s), defined as one(s) with no apparent tumor cells in the presence of non-SLN metastases. To assess the independent predictive effect of a covariate for a nominal response (FN SLN), a logistic regression model was constructed and parameters estimated using maximum likelihood. A probabilistic Bayesian model was also trained and cross validated using 10-fold train-and-test sets to predict FN SLN mapping. Area under the curve (AUC) from receiver operating characteristics curves of these predictions was calculated to determine the predictive value of the model. Number of SLNs (<3; P = 0.03) and tumor-replaced nodes (P < 0.01) independently predicted FN SLN. Cross validation of the model created with Bayesian Network Analysis effectively predicted FN SLN (area under the curve = 0.84-0.86). The positive and negative predictive values of the model are 83% and 97%, respectively. This study supports a minimum threshold of 3 nodes for targeted nodal assessment in colorectal cancer, and establishes sufficient basis to conclude that SLN mapping and biopsy cannot be justified in the presence of clinically apparent tumor-replaced nodes.
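
    To make the shape of this analysis concrete, the sketch below repeats it on synthetic data: the prevalence, coefficients, and feature coding are invented for illustration, and the Bayesian network component of the original study is not reproduced. It fits a logistic regression for false-negative sentinel-node mapping from the two reported predictors and scores it with 10-fold cross-validated AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 278  # same order of magnitude as the pooled trials, purely for flavour

# Synthetic predictors: number of SLNs retrieved, and whether tumor-replaced nodes were seen.
n_sln = rng.integers(1, 7, size=n)
tumor_replaced = rng.random(n) < 0.15

# Synthetic outcome: probability of a false-negative SLN rises with <3 SLNs and with
# tumor-replaced nodes (coefficients are illustrative, not the study's estimates).
logit = -3.0 + 1.2 * (n_sln < 3) + 2.0 * tumor_replaced
fn_sln = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([(n_sln < 3).astype(float), tumor_replaced.astype(float)])
clf = LogisticRegression()

# 10-fold cross-validated predicted probabilities, then the area under the ROC curve.
proba = cross_val_predict(clf, X, fn_sln, cv=10, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(fn_sln, proba), 3))
```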

  5. A Production System Model of Capturing Reactive Moving Targets. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Plamondon, B. D.; Miller, R. A.

    1984-01-01

    Subjects manipulated a control stick to position a cursor over a moving target that reacted with a computer-generated escape strategy. The cursor movements were described at two levels of abstraction. At the upper level, a production system described transitions among four modes of activity: rapid acquisition, close following, a predictive mode, and herding. Within each mode, differential equations described trajectory-generating mechanisms. A simulation of this two-level model captures the targets in a manner resembling the episodic time histories of human subjects.
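
    A sketch of this two-level architecture is given below. The mode names come from the abstract, but the production rules, thresholds, and the simple first-order control law inside each mode are invented for illustration.

```python
import numpy as np

def select_mode(dist, target_speed):
    """Upper level: production rules mapping the current situation to a mode.
    Thresholds are illustrative only."""
    if dist > 3.0:
        return "rapid_acquisition"       # far away: close the gap quickly
    if target_speed > 1.5:
        return "predictive"              # fast target: aim ahead of it
    if dist < 0.5:
        return "close_following"         # nearly on top of it: track tightly
    return "herding"                     # otherwise: push it back toward the origin

def cursor_update(mode, cursor, target, target_vel, dt):
    """Lower level: one first-order trajectory-generating law per mode."""
    gains = {"rapid_acquisition": 4.0, "close_following": 1.5,
             "predictive": 2.5, "herding": 1.0}
    aim = target.copy()
    if mode == "predictive":
        aim = target + 0.4 * target_vel      # lead the target
    elif mode == "herding":
        aim = 1.3 * target                   # move to the far side of the target
    return cursor + gains[mode] * (aim - cursor) * dt

rng = np.random.default_rng(3)
cursor, target = np.zeros(2), np.array([4.0, 2.0])
dt = 0.05
for step in range(200):
    target_vel = rng.normal(0.0, 1.0, 2)     # stand-in for the computer-generated escape strategy
    target = target + target_vel * dt
    dist = np.linalg.norm(target - cursor)
    mode = select_mode(dist, np.linalg.norm(target_vel))
    cursor = cursor_update(mode, cursor, target, target_vel, dt)
    if step % 50 == 0:
        print(f"t={step*dt:4.1f}s  mode={mode:17s}  distance={dist:.2f}")
```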

  6. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  7. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method that models radiation anomalies in spaceborne infrared images. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), and thereby target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches within a complex background. The experimental results on the short-wavelength infrared band (1.560-2.300 μm) and long-wavelength infrared band (10.30-12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
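
    A stripped-down version of the first stage can be sketched with scikit-learn; the patch features, mixture size, and anomaly threshold below are assumptions for illustration, and the second, discriminative stage is not reproduced. The idea is to fit a Gaussian Mixture Model to per-patch statistics and flag patches whose likelihood under the mixture is unusually low.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "infrared" sea scene: smooth background plus a few bright ship-like blobs.
img = rng.normal(100.0, 3.0, (256, 256))
for r, c in [(40, 200), (180, 60), (120, 128)]:
    img[r:r + 4, c:c + 8] += 25.0          # small bright targets

PATCH = 16
patches = img.reshape(256 // PATCH, PATCH, 256 // PATCH, PATCH).swapaxes(1, 2)
feats = np.stack([patches.mean(axis=(2, 3)).ravel(),
                  patches.std(axis=(2, 3)).ravel(),
                  patches.max(axis=(2, 3)).ravel()], axis=1)

# First stage: model background patch statistics with a GMM and score every patch.
gmm = GaussianMixture(n_components=3, random_state=0).fit(feats)
loglik = gmm.score_samples(feats)
threshold = np.quantile(loglik, 0.03)      # lowest-likelihood patches become anomaly candidates
candidates = np.flatnonzero(loglik < threshold)

rows, cols = np.unravel_index(candidates, (256 // PATCH, 256 // PATCH))
print("anomalous patches (row, col):", list(zip(rows.tolist(), cols.tolist())))
```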

  8. Modeling determinants of growth: evidence for a community-based target in height?

    PubMed

    Aßmann, Christian; Hermanussen, Michael

    2013-07-01

    Human growth is traditionally envisaged as a target-seeking process regulated by genes, nutrition, health, and the state of an individual's social and economic environment; it is believed that under optimal physical conditions, an individual will achieve his or her full genetic potential. Using a panel data set on individual height increments, we suggest a statistical modeling approach that characterizes growth as first-order trend stationary and allows for controlling individual growth tempo via observable measures of individual maturity. A Bayesian framework and corresponding Markov-chain Monte Carlo techniques allowing for a conceptually stringent treatment of missing values are adapted for parameter estimation. The model provides evidence for the adjustment of the individual growth rate toward average height of the population. The increase in adult body height during the past 150 y has been explained by the steady improvement of living conditions that are now being considered to have reached an optimum in Western societies. The current investigation questions the notion that the traditional concept in the understanding of this target-seeking process is sufficient. We consider an additional regulator that possibly points at community-based target seeking in growth.
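
    The core idea, yearly growth increments that are pulled toward the community average, can be illustrated with a toy first-order model. The pull coefficient, noise levels, and starting heights below are invented, and the Bayesian MCMC estimation used in the paper is not reproduced.

```python
import numpy as np

n_children, n_years = 500, 10

def simulate(gamma, seed=5):
    """Toy growth model: yearly increment = own tempo + gamma*(community mean - own height) + noise."""
    rng = np.random.default_rng(seed)
    own_increment = rng.normal(6.0, 0.8, n_children)     # persistent individual tempo (cm/year)
    height = rng.normal(120.0, 5.0, n_children)          # starting heights (cm)
    spread = [height.std()]
    for _ in range(n_years):
        pull = gamma * (height.mean() - height)          # adjustment toward the community average
        height = height + own_increment + pull + rng.normal(0.0, 0.5, n_children)
        spread.append(height.std())
    return np.round(spread, 2)

print("no community target   (gamma=0.00):", simulate(0.0))
print("community-based target (gamma=0.15):", simulate(0.15))
```

    Without the pull the between-child spread keeps widening; with it the spread levels off, which is the kind of trend-stationary, community-referenced behaviour the modeling approach above is designed to detect.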

  9. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    PubMed

    Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W

    2012-01-01

    Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
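
    A much-reduced, non-spiking caricature of the learning rule described above can be written in a few lines. The discretization, learning rate, trace decay, and babbling probability are all assumptions, and none of the spiking, synaptic, or proprioceptive detail of the actual model is reproduced; what the sketch keeps is the essential mechanism of a global +1/0/-1 reinforcement signal gating weight changes stored in eligibility traces.

```python
import numpy as np

rng = np.random.default_rng(11)

N_BINS = 36                        # discretized forearm-angle states over [0, pi]
ACTIONS = np.array([-0.05, 0.05])  # small extension / flexion steps (rad)
W = np.zeros((N_BINS, 2))          # plastic action-preference weights
TARGET = 2.0                       # target angle (rad)
LR, TRACE_DECAY, BABBLE = 0.2, 0.9, 0.3

def bin_of(angle):
    return min(N_BINS - 1, int(angle / np.pi * N_BINS))

angle = rng.uniform(0.0, np.pi)
trace = np.zeros_like(W)
for step in range(5000):
    if step % 250 == 0:                       # occasional reset, so all postures are practised
        angle, trace = rng.uniform(0.0, np.pi), np.zeros_like(W)
    s = bin_of(angle)
    # exploratory "motor babbling" vs. greedy choice from the learned weights
    a = rng.integers(2) if rng.random() < BABBLE else int(np.argmax(W[s]))
    new_angle = float(np.clip(angle + ACTIONS[a], 0.0, np.pi))
    # global reinforcement: +1 if the hand moved toward the target, -1 if away, 0 otherwise
    reward = np.sign(abs(angle - TARGET) - abs(new_angle - TARGET))
    trace = TRACE_DECAY * trace
    trace[s, a] += 1.0
    W += LR * reward * trace                  # reward/punishment gates the eligibility traces
    angle = new_angle

# summary: how many postures learned the action that rotates the arm toward the target?
centres = (np.arange(N_BINS) + 0.5) * np.pi / N_BINS
toward = (centres < TARGET).astype(int)       # 1 = flex (increase angle), 0 = extend
learned = np.argmax(W, axis=1)
mask = np.abs(centres - TARGET) > np.pi / N_BINS
print(f"{int((learned[mask] == toward[mask]).sum())} of {int(mask.sum())} "
      "postures now select the movement that approaches the target")
```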

  10. Cryogenic target system for hydrogen layering

    DOE PAGES

    Parham, T.; Kozioziemski, B.; Atkinson, D.; ...

    2015-11-24

    Here, a cryogenic target positioning system was designed and installed on the National Ignition Facility (NIF) target chamber. This instrument incorporates the ability to fill, form, and characterize the NIF targets with hydrogen isotopes needed for ignition experiments inside the NIF target bay then transport and position them in the target chamber. This effort brought to fruition years of research in growing and metrologizing high-quality hydrogen fuel layers and landed it in an especially demanding operations environment in the NIF facility. D-T (deuterium-tritium) layers for NIF ignition experiments have extremely tight specifications and must be grown in a very highly constrained environment: a NIF ignition target inside a cryogenic target positioner inside the NIF target bay. Exquisite control of temperature, pressure, contaminant level, and thermal uniformity are necessary throughout seed formation and layer growth to create an essentially-groove-free single crystal layer.

  11. A Diverse Community To Study Communities: Integration of Experiments and Mathematical Models To Study Microbial Consortia.

    PubMed

    Succurro, Antonella; Moejes, Fiona Wanjiku; Ebenhöh, Oliver

    2017-08-01

    The last few years have seen the advancement of high-throughput experimental techniques that have produced an extraordinary amount of data. Bioinformatics and statistical analyses have become instrumental to interpreting the information coming from, e.g., sequencing data and often motivate further targeted experiments. The broad discipline of "computational biology" extends far beyond the well-established field of bioinformatics, but it is our impression that more theoretical methods such as the use of mathematical models are not yet as well integrated into the research studying microbial interactions. The empirical complexity of microbial communities presents challenges that are difficult to address with in vivo / in vitro approaches alone, and with microbiology developing from a qualitative to a quantitative science, we see stronger opportunities arising for interdisciplinary projects integrating theoretical approaches with experiments. Indeed, the addition of in silico experiments, i.e., computational simulations, has a discovery potential that is, unfortunately, still largely underutilized and unrecognized by the scientific community. This minireview provides an overview of mathematical models of natural ecosystems and emphasizes that one critical point in the development of a theoretical description of a microbial community is the choice of problem scale. Since this choice is mostly dictated by the biological question to be addressed, in order to employ theoretical models fully and successfully it is vital to implement an interdisciplinary view at the conceptual stages of the experimental design. Copyright © 2017 Succurro et al.
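
    As a minimal example of the kind of population-scale in silico experiment advocated above (the species, interaction structure, and parameter values are entirely illustrative), a two-member consortium can be written as a pair of coupled ordinary differential equations and integrated alongside the wet-lab work it is meant to complement.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-species consortium in a generalized Lotka-Volterra form: a producer whose
# by-product benefits a consumer, with self-limitation for both members.
r = np.array([0.8, 0.5])                 # intrinsic growth rates (1/h)
A = np.array([[-1.0, -0.3],              # self-limitation and interaction coefficients
              [ 0.4, -1.0]])             # positive A[1,0]: consumer benefits from the producer

def consortium(t, x):
    return x * (r + A @ x)

sol = solve_ivp(consortium, (0.0, 50.0), [0.05, 0.01], t_eval=np.linspace(0, 50, 6))
for t, p, c in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.1f} h   producer = {p:.3f}   consumer = {c:.3f}")
```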

  12. Online control of reaching and pointing to visual, auditory, and multimodal targets: Effects of target modality and method of determining correction latency.

    PubMed

    Holmes, Nicholas P; Dakwar, Azar R

    2015-12-01

    Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
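
    The contrast between the two latency-estimation approaches discussed above can be illustrated on a simulated velocity trace. The onset time, noise level, and the specific breakpoint-plus-ramp form of the fitted model are assumptions made for the sketch, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.001                              # 1 kHz sampling
t = np.arange(0.0, 0.6, dt)             # 600 ms of lateral hand velocity
TRUE_ONSET, SLOPE, NOISE_SD = 0.25, 3.0, 0.05
v = rng.normal(0.0, NOISE_SD, t.size)
v[t >= TRUE_ONSET] += SLOPE * (t[t >= TRUE_ONSET] - TRUE_ONSET)   # ramping correction

# (a) statistical-criterion estimate: first run of 10 samples above 3 SD of the baseline
baseline_sd = v[t < 0.1].std()
above = v > 3.0 * baseline_sd
sustained = np.flatnonzero(np.convolve(above, np.ones(10), mode="valid") == 10)
threshold_onset = t[sustained[0]] if sustained.size else np.nan

# (b) straight-line model fit: velocity = 0 before the breakpoint, a ramp b*(t - t0) after;
#     choose the breakpoint t0 (and slope b) that minimizes squared error over the whole trace
def sse_for_breakpoint(t0):
    late = t >= t0
    x = t[late] - t0
    b = (x @ v[late]) / (x @ x)                   # least-squares slope through the breakpoint
    resid = np.concatenate([v[~late], v[late] - b * x])
    return resid @ resid

candidates = np.arange(0.05, 0.55, dt)
fit_onset = candidates[np.argmin([sse_for_breakpoint(c) for c in candidates])]

print(f"true onset          : {TRUE_ONSET*1000:.0f} ms")
print(f"threshold criterion : {threshold_onset*1000:.0f} ms")
print(f"line-fit estimate   : {fit_onset*1000:.0f} ms")
```

    In toy traces like this one, the threshold criterion tends to lag the true onset because the correction must first clear the noise floor, whereas the whole-trace line fit lands close to the true breakpoint, which is the pattern that motivates describing the model-fitting methods above as optimal.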

  13. Application of plug-plug technique to ACE experiments for discovery of peptides binding to a larger target protein: a model study of calmodulin-binding fragments selected from a digested mixture of reduced BSA.

    PubMed

    Saito, Kazuki; Nakato, Mamiko; Mizuguchi, Takaaki; Wada, Shinji; Uchimura, Hiromasa; Kataoka, Hiroshi; Yokoyama, Shigeyuki; Hirota, Hiroshi; Kiso, Yoshiaki

    2014-03-01

    To discover peptide ligands that bind to a target protein with a higher molecular mass, a concise screening methodology has been established, by applying a "plug-plug" technique to ACE experiments. Exploratory experiments using three mixed peptides, mastoparan-X, β-endorphin, and oxytocin, as candidates for calmodulin-binding ligands, revealed that the technique not only reduces the consumption of the protein sample, but also increases the flexibility of the experimental conditions, by allowing the use of MS detection in the ACE experiments. With the plug-plug technique, the ACE-MS screening methodology successfully selected calmodulin-binding peptides from a random library with diverse constituents, such as protease digests of BSA. Three peptides with Kd values between 8 and 147 μM for calmodulin were obtained from a Glu-C endoprotease digest of reduced BSA, although the digest showed more than 70 peaks in its ACE-MS electropherogram. The method established here will be quite useful for the screening of peptide ligands, which have only low affinities due to their flexible chain structures but could potentially provide primary information for designing inhibitors against the target protein. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Effects of azimuth-symmetric acceptance cutoffs on the measured asymmetry in unpolarized Drell-Yan fixed-target experiments

    NASA Astrophysics Data System (ADS)

    Bianconi, A.; Bussa, M. P.; Destefanis, M.; Ferrero, L.; Greco, M.; Maggiora, M.; Spataro, S.

    2013-04-01

    Fixed-target unpolarized Drell-Yan experiments often feature an acceptance depending on the polar angle of the lepton tracks in the laboratory frame. Typically leptons are detected in a defined angular range, with a dead zone in the forward region. If the cutoffs imposed by the angular acceptance are independent of the azimuth, at first sight they do not appear dangerous for a measurement of the cos(2 φ) asymmetry, which is relevant because of its association with the violation of the Lam-Tung rule and with the Boer-Mulders function. On the contrary, direct simulations show that up to 10 percent asymmetries are produced by these cutoffs. These artificial asymmetries present qualitative features that allow them to mimic the physical ones. They introduce some model dependence in the measurements of the cos(2 φ) asymmetry, since a precise reconstruction of the acceptance in the Collins-Soper frame requires a Monte Carlo simulation, that in turn requires some detailed physical input to generate event distributions. Although experiments in the eighties seem to have been aware of this problem, the possibility of using the Boer-Mulders function as an input parameter in the extraction of transversity has much increased the requirements of precision on this measurement. Our simulations show that the safest approach to these measurements is a strong cutoff on the Collins-Soper polar angle. This reduces statistics, but does not necessarily decrease the precision in a measurement of the Boer-Mulders function.

  15. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  16. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  17. Understanding Coupled Earth-Surface Processes through Experiments and Models (Invited)

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kim, W.

    2013-12-01

    Traditionally, both numerical models and experiments have been purposefully designed to 'isolate' singular components or certain processes of a larger, mountain-to-deep-ocean interconnected source-to-sink (S2S) transport system. Controlling factors driven by processes outside of the domain of immediate interest were treated and simplified as input or as boundary conditions. Increasingly, earth surface processes scientists appreciate feedbacks and explore these feedbacks with more dynamically coupled approaches to their experiments and models. Here, we discuss key concepts and recent advances made in coupled modeling and experimental setups. In addition, we emphasize challenges and new frontiers in coupled experiments. Experiments have highlighted the important role of self-organization; river and delta systems do not always need to be forced by external processes to change or develop characteristic morphologies. Similarly, modeling has shown, for example, that intricate networks in tidal deltas are stable because of the interplay between river avulsions and tidal current scouring, with both processes being important to develop and maintain the dendritic networks. Both models and experiments have demonstrated that seemingly stable systems can be perturbed slightly and show dramatic responses. Source-to-sink models were developed for both the Fly River System in Papua New Guinea and the Waipaoa River in New Zealand. These models pointed to the importance of upstream-downstream effects and reinforced our view of the S2S system as a signal-transfer and dampening conveyor belt. Coupled modeling showed that deforestation had extreme effects on sediment fluxes draining from the catchment of the Waipaoa River in New Zealand, and that this increase in sediment production rapidly shifted the locus of offshore deposition. The challenge in designing coupled models and experiments is both technological as well as intellectual. Our community advances to make numerical model coupling more

  18. A linear-encoding model explains the variability of the target morphology in regeneration

    PubMed Central

    Lobo, Daniel; Solano, Mauricio; Bubenik, George A.; Levin, Michael

    2014-01-01

    A fundamental assumption of today's molecular genetics paradigm is that complex morphology emerges from the combined activity of low-level processes involving proteins and nucleic acids. An inherent characteristic of such nonlinear encodings is the difficulty of creating the genetic and epigenetic information that will produce a given self-assembling complex morphology. This ‘inverse problem’ is vital not only for understanding the evolution, development and regeneration of bodyplans, but also for synthetic biology efforts that seek to engineer biological shapes. Importantly, the regenerative mechanisms in deer antlers, planarian worms and fiddler crabs can solve an inverse problem: their target morphology can be altered specifically and stably by injuries in particular locations. Here, we discuss the class of models that use pre-specified morphological goal states and propose the existence of a linear encoding of the target morphology, making the inverse problem easy for these organisms to solve. Indeed, many model organisms such as Drosophila, hydra and Xenopus also develop according to nonlinear encodings producing linear encodings of their final morphologies. We propose the development of testable models of regeneration regulation that combine emergence with a top-down specification of shape by linear encodings of target morphology, driving transformative applications in biomedicine and synthetic bioengineering. PMID:24402915

  19. Characterization of deuterium clusters mixed with helium gas for an application in beam-target-fusion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bang, W.; Quevedo, H. J.; Bernstein, A. C.

    We measured the average deuterium cluster size within a mixture of deuterium clusters and helium gas by detecting Rayleigh scattering signals. The average cluster size from the gas mixture was comparable to that from a pure deuterium gas when the total backing pressure and temperature of the gas mixture were the same as those of the pure deuterium gas. According to these measurements, the average size of deuterium clusters depends on the total pressure and not the partial pressure of deuterium in the gas mixture. To characterize the cluster source size further, a Faraday cup was used to measure the average kinetic energy of the ions resulting from Coulomb explosion of deuterium clusters upon irradiation by an intense ultrashort pulse. The deuterium ions indeed acquired a similar amount of energy from the mixture target, corroborating our measurements of the average cluster size. As the addition of helium atoms did not reduce the resulting ion kinetic energies, the reported results confirm the utility of using a known cluster source for beam-target-fusion experiments by introducing a secondary target gas.

  20. Characterization of deuterium clusters mixed with helium gas for an application in beam-target-fusion experiments

    DOE PAGES

    Bang, W.; Quevedo, H. J.; Bernstein, A. C.; ...

    2014-12-10

    We measured the average deuterium cluster size within a mixture of deuterium clusters and helium gas by detecting Rayleigh scattering signals. The average cluster size from the gas mixture was comparable to that from a pure deuterium gas when the total backing pressure and temperature of the gas mixture were the same as those of the pure deuterium gas. According to these measurements, the average size of deuterium clusters depends on the total pressure and not the partial pressure of deuterium in the gas mixture. To characterize the cluster source size further, a Faraday cup was used to measure the average kinetic energy of the ions resulting from Coulomb explosion of deuterium clusters upon irradiation by an intense ultrashort pulse. The deuterium ions indeed acquired a similar amount of energy from the mixture target, corroborating our measurements of the average cluster size. As the addition of helium atoms did not reduce the resulting ion kinetic energies, the reported results confirm the utility of using a known cluster source for beam-target-fusion experiments by introducing a secondary target gas.

  1. Over Target Baseline: Lessons Learned from the NASA SLS Booster Element

    NASA Technical Reports Server (NTRS)

    Carroll, Truman J.

    2016-01-01

    Goal of the presentation is to teach, and then model, the steps necessary to implement an Over Target Baseline (OTB). More than a policy and procedure session, participants will learn from recent first hand experience the challenges and benefits that come from successfully executing an OTB.

  2. A Search Model for Imperfectly Detected Targets

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert

    2012-01-01

    Under the assumptions that 1) the search region can be divided into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables: one is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi, 2…) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
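
    The two-part structure of the search-length distribution described in this abstract (N times a geometric count of failed full sweeps, plus a uniform position within the final sweep) is easy to make concrete. The short Monte Carlo sketch below illustrates that structure under assumed parameter values; it is not the authors' code, and N and P below are arbitrary.

```python
import numpy as np

def search_length_samples(n_regions, p_detect, n_trials=100_000, rng=None):
    """Monte Carlo samples of the number of searches needed to find the target.

    Model (as described in the abstract): the searcher sweeps the N sub-regions
    in a fixed order; each visit to the target's sub-region detects it with
    probability p_detect, so the total count is N times the number of failed
    full sweeps (geometric) plus a uniform draw over 1..N for the final sweep.
    """
    rng = np.random.default_rng(rng)
    failed_sweeps = rng.geometric(p_detect, n_trials) - 1        # 0, 1, 2, ...
    position_in_sweep = rng.integers(1, n_regions + 1, n_trials)  # 1..N
    return n_regions * failed_sweeps + position_in_sweep

samples = search_length_samples(n_regions=20, p_detect=0.6)
print(samples.mean(), samples.std())
# Analytic check for the mean: N*(1-P)/P + (N+1)/2 = 20*0.4/0.6 + 10.5 ~ 23.8
```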

  3. Design and Analysis of a Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. Their objective is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, the prerequisite for such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of the real aircraft are both uncertain. The stiffness distribution of the structure was calculated via finite element modeling and simulation, and F141 steel and rigid foam were used to make the elastic model. The paper presents the design and manufacturing process of the static aeroelastic models: a set of experimental models was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design of the elastic model. The paper describes the whole process of the static aeroelastic experiment and analyzes the experimental results, developing a static aeroelasticity experiment technique and establishing an experiment model targeting the swept wing of a large-aspect-ratio aircraft.

  4. Optimization of vascular-targeting drugs in a computational model of tumor growth

    NASA Astrophysics Data System (ADS)

    Gevertz, Jana

    2012-04-01

    A biophysical tool is introduced that seeks to provide a theoretical basis for helping drug design teams assess the most promising drug targets and design optimal treatment strategies. The tool is grounded in a previously validated computational model of the feedback that occurs between a growing tumor and the evolving vasculature. In this paper, the model is particularly used to explore the therapeutic effectiveness of two drugs that target the tumor vasculature: angiogenesis inhibitors (AIs) and vascular disrupting agents (VDAs). Using sensitivity analyses, the impact of VDA dosing parameters is explored, as is the effects of administering a VDA with an AI. Further, a stochastic optimization scheme is utilized to identify an optimal dosing schedule for treatment with an AI and a chemotherapeutic. The treatment regimen identified can successfully halt simulated tumor growth, even after the cessation of therapy.

  5. High energy density physics effects predicted in simulations of the CERN HiRadMat beam-target interaction experiments

    NASA Astrophysics Data System (ADS)

    Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.

    2016-12-01

    Experiments have been performed at the CERN HiRadMat (High Radiation to Materials) facility in which large cylindrical copper targets were irradiated with the 440 GeV proton beam generated by the Super Proton Synchrotron (SPS). The primary purpose of these experiments was to confirm the existence of hydrodynamic tunneling of ultra-relativistic protons and their hadronic shower in solid materials, which had been predicted by previous numerical simulations. The experimental measurements show very good agreement with the simulation results. This provides confidence in our simulations of the interaction of the 7 TeV LHC (Large Hadron Collider) protons and the 50 TeV Future Circular Collider (FCC) protons with solid materials, respectively. This work is important from the machine protection point of view. The numerical simulations have also shown that in the HiRadMat experiments, a significant part of the target material is converted into different phases of High Energy Density (HED) matter, including a two-phase solid-liquid mixture, expanded as well as compressed hot liquid phases, a two-phase liquid-gas mixture, and the gaseous state. The HiRadMat facility is therefore a unique ion beam facility worldwide that is currently available for studying the thermophysical properties of HED matter. In the present paper we discuss the numerical simulation results and present a comparison with the experimental measurements.

  6. Aerogel Algorithm for Shrapnel Penetration Experiments

    NASA Astrophysics Data System (ADS)

    Tokheim, R. E.; Erlich, D. C.; Curran, D. R.; Tobin, M.; Eder, D.

    2004-07-01

    To aid in assessing shrapnel produced by laser-irradiated targets, we have performed shrapnel collection "BB gun" experiments in aerogel and have developed a simple analytical model for deceleration of the shrapnel particles in the aerogel. The model is similar in approach to that of Anderson and Ahrens (J. Geophys. Res., 99 El, 2063-2071, Jan. 1994) and accounts for drag, aerogel compaction heating, and the velocity threshold for shrapnel ablation due to conductive heating. Model predictions are correlated with the BB gun results at impact velocities up to a few hundred m/s and with NASA data for impact velocities up to 6 km/s. The model shows promising agreement with the data and will be used to plan and interpret future experiments.
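
    The deceleration model summarized in this abstract (aerodynamic drag plus aerogel compaction resistance) can be illustrated with a simple one-dimensional integration. The sketch below is a generic drag-plus-crush-strength law with placeholder material values, not the calibrated model from the paper, and it omits the ablation and conductive-heating threshold the authors include.

```python
import numpy as np

def penetration_depth(v0, radius, rho_particle, rho_aerogel,
                      crush_strength=0.5e6, drag_coeff=1.0, dt=1e-7):
    """Stopping depth [m] of a spherical particle entering aerogel.

    Deceleration = (aerodynamic drag + constant compaction resistance) / mass.
    All constants are illustrative placeholders, not values from the paper.
    """
    area = np.pi * radius**2
    mass = (4.0 / 3.0) * np.pi * radius**3 * rho_particle
    v, x = v0, 0.0
    while v > 1.0:  # stop once the particle is essentially at rest (1 m/s)
        drag = 0.5 * drag_coeff * rho_aerogel * area * v**2
        crush = crush_strength * area     # resistance from compacting the aerogel
        v -= (drag + crush) / mass * dt
        x += v * dt
    return x

# 1 mm steel "BB" at 300 m/s into 50 kg/m^3 aerogel (all values illustrative)
print(penetration_depth(v0=300.0, radius=0.5e-3,
                        rho_particle=7800.0, rho_aerogel=50.0))
```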

  7. Finding a Target with an Accessible Global Positioning System

    ERIC Educational Resources Information Center

    Ponchillia, Paul E.; MacKenzie, Nancy; Long, Richard G.; Denton-Smith, Pamela; Hicks, Thomas L.; Miley, Priscilla

    2007-01-01

    This article presents two target-location experiments. In the first experiment, 19 participants located a 25-foot chalk circle 93% of the time with a Global Positioning System (GPS) compared to 12% of the time without it. In a single-subject follow-up experiment, the participant came within 1 foot of the target on all GPS trials. Target-location…

  8. Electrophysiological correlates of target eccentricity in texture segmentation.

    PubMed

    Schaffer, Susann; Schubö, Anna; Meinecke, Cristina

    2011-06-01

    Event-related potentials and behavioural performance as a function of target eccentricity were measured while subjects performed a texture segmentation task. Fit-of-structures, i.e. easiness of target detection was varied: in Experiment 1, a texture with peripheral fit (easier detection of peripheral presented targets) and in Experiment 2, a texture with foveal fit (easier detection of foveal presented targets) was used. In the two experiments, the N2p was sensitive to target eccentricity showing larger amplitudes for foveal targets compared to peripheral targets, and at the foveal position, a reversal of the N2p differential amplitude effect was found. The anterior P2 seemed sensitive to the easiness of target detection. In both experiments the N2pc varied as a function of eccentricity. However, the P3 was neither sensitive to target eccentricity nor to the fit-of-structures. Results show the existence of a P2/N2 complex (Potts and Tucker, 2001) indicating executive functions located in the anterior cortex and perceptual processes located in the posterior cortex. Furthermore, the N2p might indicate the existence of a foveal vs. peripheral subsystem in visual processing. 2011 Elsevier B.V. All rights reserved.

  9. CCM-C, Collins checks the middeck experiment

    NASA Image and Video Library

    1999-07-24

    S93-E-5016 (23 July 1999) --- Astronaut Eileen M. Collins, mission commander, checks on an experiment on Columbia's middeck during Flight Day 1 activity. The experiment is called the Cell Culture Model, Configuration C. Objectives of it are to validate cell culture models for muscle, bone and endothelial cell biochemical and functional loss induced by microgravity stress; to evaluate cytoskeleton, metabolism, membrane integrity and protease activity in target cells; and to test tissue loss pharmaceuticals for efficacy. The photo was recorded with an electronic still camera (ESC).

  10. Model Experiment of Two-Dimensional Brownian Motion by Microcomputer.

    ERIC Educational Resources Information Center

    Mishima, Nobuhiko; And Others

    1980-01-01

    Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)

  11. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments designed to simulate asteroid collisions have attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to targets considerably smaller than the modelled objects. It is therefore necessary to develop a methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained with a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and the fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  12. Integrated multiscale biomaterials experiment and modelling: a perspective

    PubMed Central

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  13. Fabrication, characterization, and modeling of comixed films for NXS calibration targets [Fabrication and metrology of the NXS calibration targets]

    DOE PAGES

    Jaquez, Javier; Farrell, Mike; Huang, Haibo; ...

    2016-08-01

    In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.

  14. Fabrication, characterization, and modeling of comixed films for NXS calibration targets [Fabrication and metrology of the NXS calibration targets]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaquez, Javier; Farrell, Mike; Huang, Haibo

    In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.

  15. Exploding Pusher Targets for Electron-Ion Coupling Measurements

    NASA Astrophysics Data System (ADS)

    Whitley, Heather D.; Pino, Jesse; Schneider, Marilyn; Shepherd, Ronnie; Benedict, Lorin; Bauer, Joseph; Graziani, Frank; Garbett, Warren

    2015-11-01

    Over the past several years, we have conducted theoretical investigations of electron-ion coupling and electronic transport in plasmas. In the regime of weakly coupled plasmas, we have identified models that we believe describe the physics well, but experimental data is still needed to validate the models. We are currently designing spectroscopic experiments to study electron-ion equilibration and/or electron heat transport using exploding pusher (XP) targets for experiments at the National Ignition Facility. Two platforms are being investigated: an indirect drive XP (IDXP) with a plastic ablator and a polar-direct drive XP (PDXP) with a glass ablator. The fill gas for both designs is D2. We propose to use a higher-Z dopant, such as Ar, as a spectroscopic tracer for time-resolved electron and ion temperature measurements. We perform 1D simulations using the ARES hydrodynamic code, in order to produce the time-resolved plasma conditions, which are then post-processed with CRETIN to assess the feasibility of a spectroscopic measurement. We examine target performance with respect to variations in gas fill pressure, ablator thickness, atom fraction of the Ar dopant, and drive energy, and assess the sensitivity of the predicted spectra to variations in the models for electron-ion equilibration and thermal conductivity. Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-675219.

  16. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  17. Human target acquisition performance

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Du Bosq, Todd W.; Reynolds, Joseph P.; Thompson, Roger; Aghera, Sameer; Moyer, Steven K.; Flug, Eric; Espinola, Richard; Hixson, Jonathan

    2012-06-01

    The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms & armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.

  18. Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context. Experiments and an Information-Theoretic Ambiguity Model

    PubMed Central

    Grau-Moya, Jordi; Ortega, Pedro A.; Braun, Daniel A.

    2016-01-01

    A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects’ choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects’ choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain. PMID:27124723
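
    The class of model described here (Bayesian inference combined with robust decision-making under model uncertainty) can be sketched with a standard information-theoretic valuation rule. The exponential-tilting form below, with a single ambiguity-sensitivity parameter beta, is one common way to write such a rule; it is an illustration of this family of models, not the authors' exact implementation, and the posteriors and payoffs are made up.

```python
import numpy as np

def ambiguity_sensitive_value(posterior, expected_utility, beta):
    """Certainty-equivalent value of an action under model uncertainty.

    posterior:        p(model | data), shape (n_models,)
    expected_utility: expected utility of the action under each model
    beta:             ambiguity sensitivity; beta -> 0 recovers the Bayes
                      average, beta > 0 penalizes actions whose utility varies
                      across models (a KL-regularized worst case for large beta).
    """
    if np.isclose(beta, 0.0):
        return float(posterior @ expected_utility)
    return float(-np.log(posterior @ np.exp(-beta * expected_utility)) / beta)

# Two candidate actions: a sure option and an ambiguous one whose utility
# depends strongly on which model of the urn/target is correct.
posterior = np.array([0.5, 0.5])
sure_option = np.array([0.6, 0.6])
ambiguous_option = np.array([1.0, 0.2])
for beta in (0.0, 1.0, 5.0):
    print(beta,
          ambiguity_sensitive_value(posterior, sure_option, beta),
          ambiguity_sensitive_value(posterior, ambiguous_option, beta))
```

    In this toy example the two options have the same Bayes-average utility, but the ambiguous option loses value as beta grows, mirroring the ambiguity-avoiding behavior discussed in the abstract.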

  19. Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context. Experiments and an Information-Theoretic Ambiguity Model.

    PubMed

    Grau-Moya, Jordi; Ortega, Pedro A; Braun, Daniel A

    2016-01-01

    A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects' choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects' choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.

  20. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements whose mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems
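
    As a concrete illustration of what such a multivariate mixing model does numerically, the sketch below estimates source proportions from tracer concentrations by constrained least squares (non-negative proportions summing to one). It is a minimal, generic un-mixing sketch with made-up tracer values, not the model or data used in this study, and it ignores within-source variability and particle-size or organic-matter corrections.

```python
import numpy as np
from scipy.optimize import minimize

def unmix(source_conc, mixture_conc):
    """Estimate source proportions for a mixture from tracer concentrations."""
    source_conc = np.asarray(source_conc, float)    # shape (n_sources, n_tracers)
    mixture_conc = np.asarray(mixture_conc, float)  # shape (n_tracers,)
    # standardize each tracer so no single element dominates the objective
    scale = source_conc.std(axis=0) + 1e-12
    A, b = source_conc / scale, mixture_conc / scale
    n = A.shape[0]

    def objective(p):
        return np.sum((p @ A - b) ** 2)

    result = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                      bounds=[(0.0, 1.0)] * n,
                      constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    return result.x

# Example with made-up tracer values for three sources and a 50/30/20 mixture
sources = [[10.0, 5.0, 1.0], [2.0, 8.0, 4.0], [6.0, 1.0, 9.0]]
mixture = 0.5 * np.array(sources[0]) + 0.3 * np.array(sources[1]) + 0.2 * np.array(sources[2])
print(unmix(sources, mixture))   # should recover roughly [0.5, 0.3, 0.2]
```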

  1. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    DOE PAGES

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; ...

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment, "G4 specified stratospheric aerosols" (short name: G4SSA), is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be applied consistently to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.

  2. Nike Experiment to Observe Strong Areal Mass Oscillations in a Rippled Target Hit by a Short Laser Pulse

    NASA Astrophysics Data System (ADS)

    Aglitskiy, Y.; Karasik, M.; Velikovich, A. L.; Serlin, V.; Weaver, J. L.; Kessler, T. J.; Schmitt, A. J.; Obenschain, S. P.; Metzler, N.; Oh, J.

    2010-11-01

    When a short (sub-ns) laser pulse deposits finite energy in a target, the shock wave launched into it is immediately followed by a rarefaction wave. If the irradiated surface is rippled, theory and simulations predict strong oscillations of the areal mass perturbation amplitude in the target [A. L. Velikovich et al., Phys. Plasmas 10, 3270 (2003).] The first experiment designed to observe this effect has become possible by adding short-driving-pulse capability to the Nike laser, and has been scheduled for the fall of 2010. Simulations show that while the driving pulse of 0.3 ns is on, the areal mass perturbation amplitude grows by a factor ˜2 due to ablative Richtmyer-Meshkov instability. It then decreases, reverses phase, and reaches another maximum, also about twice its initial value, shortly after the shock breakout at the rear target surface. This signature behavior is observable with the monochromatic x-ray imaging diagnostics fielded on Nike.

  3. In vivo potency revisited - Keep the target in sight.

    PubMed

    Gabrielsson, Johan; Peletier, Lambertus A; Hjorth, Stephan

    2018-04-01

    , and the derived Michaelis-Menten parameter K_m (target-ligand binding and complex removal) across a set of literature data. It is evident from a comparison between parameters derived from in vitro vs. in vivo experiments that L_50 can be either numerically greater or smaller than the K_d (or K_m) parameter, primarily depending on the ratio of k_deg to k_e(RL). Contrasting the limit values of target R and target-ligand complex RL for ligand concentrations approaching infinity demonstrates that the outcome of the three models differs to a great extent. Based on the analysis we propose that a better understanding of in vivo pharmacological potency requires simultaneous assessment of the impact of its underlying determinants in the open system setting. We propose that L_50 will be a useful parameter guiding predictions of the effective concentration range, for translational purposes, and assessment of in vivo target occupancy/suppression by ligand, since it also encompasses target turnover - in turn also subject to influence by pathophysiology and drug treatment. Different compounds may have similar binding affinity for a target in vitro (same K_d), but vastly different potencies in vivo. L_50 points to what parameters need to be taken into account, and particularly that closed-system (in vitro) parameters should not be first choice when ranking compounds in vivo (open system). Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Diffuse characteristics study of laser target board using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yang, Pengling; Wu, Yong; Wang, Zhenbao; Tao, Mengmeng; Wu, Junjie; Wang, Ping; Yan, Yan; Zhang, Lei; Feng, Gang; Zhu, Jinghui; Feng, Guobin

    2013-05-01

    In this paper, the Torrance-Sparrow and Oren-Nayar models are adopted to study the diffuse characteristics of a laser target board. The model, which is based on geometric optics, assumes that rough surfaces are made up of a series of symmetric V-groove cavities with different slopes at the microscopic level. The distribution of the slopes of the V-grooves is modeled with a Beckmann distribution function, and every microfacet of a V-groove cavity is assumed to behave like a perfect mirror, which means the reflected ray follows the Fresnel law at the microfacet. The masking and shadowing effects of the rough surface are also taken into account through a geometric attenuation factor. The Monte Carlo method is used to simulate the diffuse reflectance distribution of laser target boards with different materials and processing technology, and all the calculated results are verified by experiment. It is shown that the profile of the bidirectional reflectance distribution curve is lobe-shaped, with the maximum lying along the mirror reflection direction. The profile is narrower for a lower roughness value and broader for a higher roughness value. The refractive index of the target material also influences the intensity and distribution of the diffuse reflectance of the laser target surface.
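
    A stripped-down 2-D Monte Carlo sketch of the microfacet picture described above is given below: each surface element is a mirror facet whose slope is drawn from a Beckmann distribution with roughness parameter m, and the incident ray is reflected specularly about the facet normal. Masking/shadowing, multiple bounces, and Fresnel weighting are omitted, so this only reproduces the qualitative lobe-shaped reflectance profile; the roughness value and incidence angle are arbitrary, and the sampling formula is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def reflected_angles(incidence_deg, roughness_m, n_rays=200_000, rng=None):
    """Sample reflected-ray angles (degrees) from a Beckmann microfacet surface (2-D)."""
    rng = np.random.default_rng(rng)
    # Inverse-CDF sampling of an isotropic Beckmann slope distribution:
    # tan(alpha) = m * sqrt(-ln u), with a random sign for the 2-D slope.
    tan_alpha = roughness_m * np.sqrt(-np.log(rng.random(n_rays)))
    alpha = np.arctan(tan_alpha) * rng.choice([-1.0, 1.0], n_rays)
    theta_i = np.radians(incidence_deg)
    # Mirror reflection about the tilted facet normal; with reflected angles
    # measured on the specular side, a flat facet (alpha = 0) returns theta_i.
    theta_r = theta_i - 2.0 * alpha
    return np.degrees(theta_r)

angles = reflected_angles(incidence_deg=30.0, roughness_m=0.2)
hist, edges = np.histogram(angles, bins=90, range=(-90, 90), density=True)
print("peak of the reflected lobe near:", edges[np.argmax(hist)], "deg")
```

    Increasing roughness_m broadens the histogram while the peak stays near the mirror direction, which is the qualitative behavior reported in the abstract.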

  5. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.

  6. Vapor shielding models and the energy absorbed by divertor targets during transient events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skovorodin, D. I., E-mail: dskovorodin@gmail.com; Arakcheev, A. S.; Pshenov, A. A.

    2016-02-15

    The erosion of divertor targets caused by high heat fluxes during transients is a serious threat to ITER operation, as it is going to be the main factor determining the divertor lifetime. Under the influence of extreme heat fluxes, the surface temperature of plasma facing components can reach a certain threshold, leading to the onset of intense material evaporation. The latter results in the formation of a cold dense vapor and secondary plasma cloud. This layer effectively absorbs the energy of the incident plasma flow, turning it into its own kinetic and internal energy and radiating it. This so-called vapor shielding is a phenomenon that may help mitigate the erosion during transient events. In particular, vapor shielding results in saturation of the energy (per unit surface area) accumulated by the target during a single pulse of heat load at some level E_max. Matching this value is one of the possible tests to verify the complicated numerical codes developed to calculate the erosion rate during abnormal events in tokamaks. The paper presents three very different models of vapor shielding, demonstrating that E_max depends strongly on the heat pulse duration, thermodynamic properties, and evaporation energy of the irradiated target material, while its dependence on other shielding details, such as the radiation capabilities of the material and the dynamics of the vapor cloud, is logarithmically weak. The reason for this is a strong (exponential) dependence of the target material evaporation rate, and therefore the “strength” of the vapor shield, on the target surface temperature. As a result, the influence of the details of the vapor shielding phenomenon, such as radiation transport in the vapor cloud and evaporated material dynamics, on E_max is virtually completely masked by the strong dependence of the evaporation rate on the target surface temperature. However, the very same details define the amount of evaporated particles needed to provide an effective
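
    The exponential sensitivity of the evaporation rate to surface temperature, which the abstract argues dominates E_max, can be illustrated with a simple Hertz-Knudsen estimate of the evaporative flux using a Clausius-Clapeyron vapor pressure. The constants below are rough tungsten-like placeholders chosen only to show the scaling; they are not values from the paper.

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant, J/K
R_GAS = 8.314         # gas constant, J/(mol K)
AMU = 1.66053907e-27  # atomic mass unit, kg

def evaporation_flux(T_surface, p_ref=1.0e5, T_boil=5800.0,
                     latent_heat=8.0e5, mass_amu=184.0):
    """Particles per m^2 per s leaving a surface at T_surface [K] (Hertz-Knudsen)."""
    # Clausius-Clapeyron vapour pressure anchored at the boiling point (T_boil, p_ref)
    p_vap = p_ref * np.exp(-(latent_heat / R_GAS) * (1.0 / T_surface - 1.0 / T_boil))
    m = mass_amu * AMU
    return p_vap / np.sqrt(2.0 * np.pi * m * K_B * T_surface)

for T in (3000.0, 3500.0, 4000.0):
    print(f"{T:.0f} K -> {evaporation_flux(T):.3e} m^-2 s^-1")
```

    Over a 1000 K change in surface temperature the flux changes by several orders of magnitude, which is why, as the abstract argues, the surface temperature largely masks the finer details of the vapor cloud.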

  7. Resource implications of a national health target: The New Zealand experience of a Shorter Stays in Emergency Departments target.

    PubMed

    Jones, Peter; Sopina, Elizaveta; Ashton, Toni

    2014-12-01

    The Shorter Stays in Emergency Departments health target was introduced in New Zealand in 2009. District Health Boards (DHBs) are expected to meet the target with no additional funding or incentives. The costs of implementing such targets have not previously been studied. A survey of clinical/service managers in ED throughout New Zealand determined the type and cost of resources used for the target. Responses to the target were classified according to their impact in ED, the hospital and the community. Quantifiable resource changes were assigned a financial value and grouped into categories: structure (facilities/beds), staff and processes. Simple statistics were used to describe the data, and the correlation between expenditure and target performance was determined. There was 100% response to the survey. Most DHBs reported some expenditure specifically on the target, with estimated total expenditure of over NZ$52 m. The majority of expenditure occurred in ED (60.8%) and hospital (38.7%) with little spent in the community. New staff accounted for 76.5% of expenditure. Per capita expenditure in the ED was associated with improved target performance (r = 0.48, P = 0.03), whereas expenditure in the hospital was not (r = 0.08, P = 0.75). The fact that estimated expenditure on the target was over $50 million without additional funding suggests that DHBs were able to make savings through improved efficiencies and/or that funds were reallocated from other services. The majority of expenditure occurred in the ED. Most of the funds were spent on staff, and this was associated with improved target performance. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  8. Beryllium Ignition Targets for Indirect Drive NIF Experiments

    NASA Astrophysics Data System (ADS)

    Simakov, A. N.; Wilson, D. C.; Yi, S. A.; Kline, J. L.; Salmonson, J. D.; Clark, D. S.; Milovich, J. L.; Marinak, M. M.; Callahan, D. A.

    2013-10-01

    Current NIF plastic capsules are under-performing, and alternate ablators are being investigated. Beryllium presents an attractive option, since it has lower opacity and therefore higher ablation rate, pressure, and velocity. Previous NIF Be designs assumed significantly better hohlraum performance than recently observed (e.g., 7.5 vs. 15-17% of back-scattered power and 1.0 vs. 0.85 main pulse power multipliers) and employed less accurate atomic configuration models than currently used (XSN vs. DCA), so an updated design is required. We present a new, Rev. 6 Be ignition target design that employs the full NIF capacity (1.8 MJ, 520 TW) and uses a standard 5.75 mm gold hohlraum with 1.5 mg/cm3 of helium gas fill. The 1051 μm capsule features 180 μm of layered copper-doped (with a maximum of 3 atom-%) Be ablator and 90 μm of cryogenic deuterium-tritium fuel. The peak implosion velocity of 367 μm/ns results in 4.1 keV of no-burn ion temperature, 1.6 and 1.9 g/cm2 of fuel and total areal densities, respectively, and 20.6 MJ of fusion yield. The capsule demonstrates robust performance with surface/interface roughnesses up to 1.6 times larger than Rev. 3 specs. Work supported by the US Department of Energy.

  9. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in the position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  10. Analytic model of a laser-accelerated composite plasma target and its stability

    NASA Astrophysics Data System (ADS)

    Khudik, Vladimir; Shvets, Gennady

    2013-10-01

    A self-consistent analytical model of the monoenergetic acceleration of one- and two-species ultrathin targets irradiated by a circularly polarized laser pulse is developed. In the accelerated reference frame, the bulk plasma in the target is neutral and its parameters are assumed to be stationary. It is found that the structure of the target depends strongly on the temperatures of electrons and ions, which are both strongly influenced by the laser pulse pedestal. When the electron temperature is large, the hot electrons bounce back and forth inside the potential well formed by the ponderomotive and electrostatic potentials, while the heavy and light ions are force-balanced by the electrostatic and non-inertial fields, forming two separated layers. In the opposite limiting case, when the ion temperature is large, the hot ions are trapped in the potential well formed by the ion sheath's electric and non-inertial potentials, while the cold electrons are force-balanced by the electrostatic and ponderomotive fields. Using PIC simulations we have determined which scenario is realized in practice depending on the initial target structure and laser intensity. Target stability with respect to the Rayleigh-Taylor instability will also be discussed. This work is supported by the US DOE grants DE-FG02-04ER41321 and DE-FG02-07ER54945.

  11. An In-Silico Model of Lipoprotein Metabolism and Kinetics for the Evaluation of Targets and Biomarkers in the Reverse Cholesterol Transport Pathway

    PubMed Central

    Lu, James; Hübner, Katrin; Nanjee, M. Nazeem; Brinton, Eliot A.; Mazer, Norman A.

    2014-01-01

    High-density lipoprotein (HDL) is believed to play an important role in lowering cardiovascular disease (CVD) risk by mediating the process of reverse cholesterol transport (RCT). Via RCT, excess cholesterol from peripheral tissues is carried back to the liver and hence should lead to the reduction of atherosclerotic plaques. The recent failures of HDL-cholesterol (HDL-C) raising therapies have initiated a re-examination of the link between CVD risk and the rate of RCT, and have brought into question whether all target modulations that raise HDL-C would be atheroprotective. To help address these issues, a novel in-silico model has been built to incorporate modern concepts of HDL biology, including: the geometric structure of HDL linking the core radius with the number of ApoA-I molecules on it, and the regeneration of lipid-poor ApoA-I from spherical HDL due to remodeling processes. The ODE model has been calibrated using data from the literature and validated by simulating additional experiments not used in the calibration. Using a virtual population, we show that the model provides possible explanations for a number of well-known relationships in cholesterol metabolism, including the epidemiological relationship between HDL-C and CVD risk and the correlations between some HDL-related lipoprotein markers. In particular, the model has been used to explore two HDL-C raising target modulations, Cholesteryl Ester Transfer Protein (CETP) inhibition and ATP-binding cassette transporter member 1 (ABCA1) up-regulation. It predicts that while CETP inhibition would not result in an increased RCT rate, ABCA1 up-regulation should increase both HDL-C and RCT rate. Furthermore, the model predicts the two target modulations result in distinct changes in the lipoprotein measures. Finally, the model also allows for an evaluation of two candidate biomarkers for in-vivo whole-body ABCA1 activity: the absolute concentration and the % lipid-poor ApoA-I. These findings illustrate the

  12. Heavy Ion Fusion Science Virtual National Laboratory 4th Quarter 2009 Milestone Report: Measure and simulate target temperature and dynamic response in optimized NDCX-I configurations with initial diagnostics suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieniosek, F.M.; Barnard, J.J.; Henestroza, E.

    2009-09-30

    This milestone has been met. The effort contains two main components: (1) experimental results of warm dense matter target experiments on optimized NDCX-I configurations that include measurements of target temperature and transient target behavior; and (2) a theoretical model of the target response to beam heating that includes an equilibrium heating model of the target foil and a model for droplet formation in the target, for comparison with experimental results. The experiments on ion-beam target heating use a 300-350-keV K+ pulsed beam from the Neutralized Drift Compression Experiment (NDCX-I) accelerator at LBNL. The NDCX-I accelerator delivers an uncompressed pulse beam of several microseconds with a typical power density of >100 kW/cm2 over a final focus spot size of about 1 mm. An induction bunching module on NDCX-I compresses a portion of the beam pulse to reach a much higher power density over 2 nanoseconds. Under these conditions the free-standing foil targets are rapidly heated to temperatures of over 4000 K. We model the target thermal dynamics using the equation of heat conduction for the temperature T(x,t) as a function of time (t) and the spatial dimension along the beam direction (x). Competing cooling processes release energy from the surface of the foil through evaporation, radiation, and thermionic (Richardson) emission. A description of the experimental configuration of the target chamber and results from initial beam-target experiments are reported in our FY08 4th Quarter and FY09 2nd Quarter Milestone Reports. The WDM target diagnostics include a high-speed multichannel optical pyrometer, an optical streak camera, VISAR, and high-speed gated cameras. The fast optical pyrometer is a unique and significant new diagnostic which provides valuable information on the temperature evolution of the heated target.
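
    A minimal sketch of the kind of target thermal model described here: explicit 1-D heat conduction in a thin foil with a volumetric beam-heating source and radiative loss at the two faces. All material and beam numbers are copper-like placeholders, and the evaporative and thermionic (Richardson) cooling terms mentioned in the report are omitted; it illustrates the structure of such a model, not the NDCX-I calculation.

```python
import numpy as np

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heat_foil(thickness=3.5e-6, n_cells=50, rho=8960.0, c_p=385.0, k=400.0,
              beam_power_density=2.0e13, pulse_length=2.0e-9,
              t_end=20.0e-9, T0=300.0):
    """Peak temperature [K] of a foil heated volumetrically by a short pulse."""
    dx = thickness / n_cells
    dt = 0.2 * dx**2 * rho * c_p / k          # explicit stability limit with margin
    T = np.full(n_cells, T0)
    t = 0.0
    while t < t_end:
        # uniform volumetric deposition while the beam pulse is on
        heating = beam_power_density / thickness if t < pulse_length else 0.0
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2  # faces treated as insulated
        T_new = T + dt * (k * lap + heating) / (rho * c_p)
        # radiative loss from the two free surfaces (evaporation and thermionic
        # emission are neglected in this sketch)
        for face in (0, -1):
            T_new[face] -= dt * SIGMA_SB * T[face]**4 / (rho * c_p * dx)
        T, t = T_new, t + dt
    return T.max()

print(f"{heat_foil():.0f} K peak temperature (illustrative numbers only)")
```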

  13. The Nike KrF laser facility: Performance and initial target experiments

    NASA Astrophysics Data System (ADS)

    Obenschain, S. P.; Bodner, S. E.; Colombant, D.; Gerber, K.; Lehmberg, R. H.; McLean, E. A.; Mostovych, A. N.; Pronko, M. S.; Pawley, C. J.; Schmitt, A. J.; Sethian, J. D.; Serlin, V.; Stamper, J. A.; Sullivan, C. A.; Dahlburg, J. P.; Gardner, J. H.; Chan, Y.; Deniz, A. V.; Hardgrove, J.; Lehecka, T.; Klapisch, M.

    1996-05-01

    Krypton-fluoride (KrF) lasers are of interest to laser fusion because they have both the large bandwidth capability (≳THz) desired for rapid beam smoothing and the short laser wavelength (1/4 μm) needed for good laser-target coupling. Nike is a recently completed 56-beam KrF laser and target facility at the Naval Research Laboratory. Because of its bandwidth of 1 THz FWHM (full width at half-maximum), Nike produces more uniform focal distributions than any other high-energy ultraviolet laser. Nike was designed to study the hydrodynamic instability of ablatively accelerated planar targets. First results show that Nike has spatially uniform ablation pressures (Δp/p<2%). Targets have been accelerated for distances sufficient to study hydrodynamic instability while maintaining good planarity. In this review we present the performance of the Nike laser in producing uniform illumination, and its performance in correspondingly uniform acceleration of targets.

  14. Multiresponse modeling of variably saturated flow and isotope tracer transport for a hillslope experiment at the Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter

    2016-10-01

    This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.

  15. Optimization of training periods for the estimation model of three-dimensional target positions using an external respiratory surrogate.

    PubMed

    Iramina, Hiraku; Nakamura, Mitsuhiro; Iizuka, Yusuke; Mitsuyoshi, Takamasa; Matsuo, Yukinori; Mizowaki, Takashi; Kanno, Ikuo

    2018-04-19

    During therapeutic beam irradiation, an unvisualized three-dimensional (3D) target position should be estimated using an external surrogate with an estimation model. Training periods for the developed model, which requires no additional imaging during beam irradiation, were optimized using clinical data. Dual-source 4D-CBCT projection data for 20 lung cancer patients were used for validation. Each patient underwent one to three scans. The actual target positions of each scan were divided into two equal parts: one for the modeling and the other for the validating session. A quadratic target position estimation equation was constructed during the modeling session. Various training periods for the session, i.e., modeling periods (T_M), were employed: T_M ∈ {5, 10, 15, 25, 35} s. First, the equation was used to estimate target positions in the validating session of the same scan (intra-scan estimations). Second, the equation was used to estimate target positions in the validating session of another, temporally different scan (inter-scan estimations). The baseline drift of the surrogate and target between scans was corrected. Various training periods for the baseline drift correction, i.e., correction periods (T_C), were employed: T_C ∈ {5, 10, 15; T_C ≤ T_M} s. Evaluations were conducted with and without the correction. The difference between the actual and estimated target positions was evaluated by the root-mean-square error (RMSE). The range of the mean respiratory period and 3D motion amplitude of the target was 2.4-13.0 s and 2.8-34.2 mm, respectively. On intra-scan estimation, the median 3D RMSE was within 1.5-2.1 mm, supported by previous studies. On inter-scan estimation, the median elapsed time between scans was 10.1 min. All T_M values exhibited 75th percentile 3D RMSEs of 5.0-6.4 mm due to baseline drift of the surrogate and the target. After the correction, those for each T_M fell by 1.4-2.3 mm. The median 3D RMSE for both the 10-s T_M and
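
    To make the estimation workflow concrete, here is a hedged sketch of a quadratic surrogate-to-target model: each target coordinate is fit as a second-order polynomial of the external surrogate signal over the modeling session, applied to the validating session, and scored by 3D RMSE. The traces below are synthetic and the baseline-drift correction described in the abstract is omitted; this illustrates the general approach, not the authors' implementation.

```python
import numpy as np

def fit_quadratic_model(surrogate_train, target_train):
    """One quadratic (a, b, c) per spatial axis: target = a*s**2 + b*s + c."""
    return [np.polyfit(surrogate_train, target_train[:, axis], deg=2)
            for axis in range(target_train.shape[1])]

def estimate_positions(model, surrogate):
    return np.column_stack([np.polyval(coeffs, surrogate) for coeffs in model])

def rmse_3d(estimated, actual):
    return np.sqrt(np.mean(np.sum((estimated - actual) ** 2, axis=1)))

# Synthetic traces: a sinusoidal surrogate and a target whose third axis is a
# quadratic function of the surrogate, so the model should fit it closely.
t = np.linspace(0.0, 30.0, 600)
surrogate = 5.0 * np.sin(2.0 * np.pi * t / 4.0)
target = np.column_stack([0.4 * surrogate,
                          0.2 * surrogate,
                          0.08 * surrogate**2])
model = fit_quadratic_model(surrogate[:300], target[:300])                 # modeling session
print(rmse_3d(estimate_positions(model, surrogate[300:]), target[300:]))   # validating session
```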

  16. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion processing, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused in grayscale in the curvelet domain using rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the grayscale result with an appropriate pseudo-color. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  17. Using queuing models to aid design and guide research effort for multimodality buried target detection systems

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Collins, Leslie M.

    2016-05-01

    Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM, and to guide research to yield the most performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected data, which demonstrate how the Q-MSM model can be used to evaluate the performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.

  18. Heterogeneity effects in visual search predicted from the group scanning model.

    PubMed

    Macquistan, A D

    1994-12-01

    The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.

  19. Ground Target Modeling and Validation Conference (10th) Held in Houghton, Michigan, on 17-19 August 1999

    DTIC Science & Technology

    1999-08-01

    … electrically small or only have a greater size in one dimension will not have a significant impact on the total RCS. At 1000 MHz, the components on the model … Proceedings of the Tenth Annual Ground Target Modeling and Validation Conference, August 1999 (Unclassified). Authors: William R. Reynolds and Tracy T. Maki.

  20. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    PubMed Central

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.; Cristini, Vittorio; Brinker, Lina M.; Staquicini, Fernanda I.; Cardó-Vila, Marina; D’Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R.; Dogra, Prashant; Melancon, Marites P.; Stafford, R. Jason; Miyazono, Kohei; Gelovani, Juri G.; Kataoka, Kazunori; Brinker, C. Jeffrey; Sidman, Richard L.; Arap, Wadih; Pasqualini, Renata

    2016-01-01

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications. PMID:26839407

  1. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release.

    PubMed

    Hosoya, Hitomi; Dobroff, Andrey S; Driessen, Wouter H P; Cristini, Vittorio; Brinker, Lina M; Staquicini, Fernanda I; Cardó-Vila, Marina; D'Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R; Dogra, Prashant; Melancon, Marites P; Stafford, R Jason; Miyazono, Kohei; Gelovani, Juri G; Kataoka, Kazunori; Brinker, C Jeffrey; Sidman, Richard L; Arap, Wadih; Pasqualini, Renata

    2016-02-16

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  2. Transorbital target localization in the porcine model

    NASA Astrophysics Data System (ADS)

    DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.

    2013-03-01

    Current pharmacological therapies for the treatment of chronic optic neuropathies such as glaucoma are often inadequate due to their inability to directly affect the optic nerve and prevent neuron death. While drugs that target the neurons have been developed, existing methods of administration are not capable of delivering an effective dose of medication along the entire length of the nerve. We have developed an image-guided system that utilizes a magnetically tracked flexible endoscope to navigate to the back of the eye and administer therapy directly to the optic nerve. We demonstrate the capabilities of this system with a series of targeted surgical interventions in the orbits of live pigs. Target objects consisted of NMR microspherical bulbs with a volume of 18 μL filled with either water or diluted gadolinium-based contrast, and prepared with either the presence or absence of a visible coloring agent. A total of 6 pigs were placed under general anesthesia and two microspheres of differing color and contrast content were blindly implanted in the fat tissue of each orbit. The pigs were scanned with T1-weighted MRI, image volumes were registered, and the microsphere containing gadolinium contrast was designated as the target. The surgeon was required to navigate the flexible endoscope to the target and identify it by color. For the last three pigs, a 2D/3D registration was performed such that the target's coordinates in the image volume were noted and its location on the video stream was displayed with a crosshair to aid in navigation. The surgeon was able to correctly identify the target by color, with an average intervention time of 20 minutes for the first three pigs and 3 minutes for the last three.
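
    As an illustration of the final navigation-aid step, the sketch below (our assumption, not the authors' software) projects a registered 3D target coordinate into the endoscope video frame with a pinhole camera model so it can be drawn as a crosshair; the extrinsics, intrinsics, and function names are all invented.

      import numpy as np

      def project_to_frame(target_mm, R, t, fx, fy, cx, cy):
          """Project a 3D point (scanner/tracker space, mm) into pixel coordinates
          of the endoscope video using a pinhole camera model."""
          p_cam = R @ np.asarray(target_mm, dtype=float) + t  # world -> camera
          u = fx * p_cam[0] / p_cam[2] + cx
          v = fy * p_cam[1] / p_cam[2] + cy
          return u, v

      # Illustrative numbers only: identity extrinsics, VGA-like intrinsics.
      R = np.eye(3)
      t = np.array([0.0, 0.0, 30.0])           # target ~30 mm in front of the lens
      u, v = project_to_frame([2.0, -1.5, 0.0], R, t, fx=500, fy=500, cx=320, cy=240)
      print(f"draw crosshair at pixel ({u:.1f}, {v:.1f})")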

  3. Informing pedagogy through the brain-targeted teaching model.

    PubMed

    Hardiman, Mariale

    2012-01-01

    Improving teaching to foster creative thinking and problem-solving for students of all ages will require two essential changes in current educational practice. First, to allow more time for deeper engagement with material, it is critical to reduce the vast number of topics often required in many courses. Second, and perhaps more challenging, is the alignment of pedagogy with recent research on cognition and learning. With a growing focus on the use of research to inform teaching practices, educators need a pedagogical framework that helps them interpret and apply research findings. This article describes the Brain-Targeted Teaching Model, a scheme that relates six distinct aspects of instruction to research from the neuro- and cognitive sciences.

  4. a Plutonium Ceramic Target for Masha

    NASA Astrophysics Data System (ADS)

    Wilk, P. A.; Shaughnessy, D. A.; Moody, K. J.; Kenneally, J. M.; Wild, J. F.; Stoyer, M. A.; Patin, J. B.; Lougheed, R. W.; Ebbinghaus, B. B.; Landingham, R. L.; Oganessian, Yu. Ts.; Yeremin, A. V.; Dmitriev, S. N.

    2005-09-01

    We are currently developing a plutonium ceramic target for the MASHA mass separator. The MASHA separator will use a thick plutonium ceramic target capable of tolerating temperatures up to 2000 °C. Promising candidates for the target include oxides and carbides, although more research into their thermodynamic properties will be required. Reaction products will diffuse out of the target into an ion source, where they will then be transported through the separator to a position-sensitive focal-plane detector array. Experiments on MASHA will allow us to make measurements that will cement our identification of element 114 and provide for future experiments where the chemical properties of the heaviest elements are studied.

  5. A Plutonium Ceramic Target for MASHA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilk, P A; Shaughnessy, D A; Moody, K J

    2004-07-06

    We are currently developing a plutonium ceramic target for the MASHA mass separator. The MASHA separator will use a thick plutonium ceramic target capable of tolerating temperatures up to 2000 °C. Promising candidates for the target include oxides and carbides, although more research into their thermodynamic properties will be required. Reaction products will diffuse out of the target into an ion source, where they will then be transported through the separator to a position-sensitive focal-plane detector array. Experiments on MASHA will allow us to make measurements that will cement our identification of element 114 and provide for future experiments where the chemical properties of the heaviest elements are studied.

  6. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
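
    The sketch below illustrates the general Monte Carlo idea in miniature, assuming a toy linear reduced-form response (future ozone = baseline minus an uncertain sensitivity times the emission cut, plus meteorological noise); all numbers are invented and this is not the paper's reduced-form model.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000

      # Illustrative reduced-form model with multiplicative uncertainty on the
      # modeled sensitivity to emission cuts (all numbers invented).
      baseline_ppb = 82.0
      emission_cut = 0.30                        # 30 % NOx reduction
      nominal_sensitivity = 20.0                 # ppb ozone per unit fractional cut
      sensitivity = nominal_sensitivity * rng.lognormal(mean=0.0, sigma=0.3, size=N)
      meteo_noise = rng.normal(0.0, 2.0, size=N)  # interannual meteorological variability

      future_ozone = baseline_ppb - sensitivity * emission_cut + meteo_noise
      p_attain = np.mean(future_ozone <= 75.0)    # fixed 8-h ozone target
      print(f"Estimated likelihood of attainment: {p_attain:.2f}")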

  7. Intercepting beats in predesignated target zones.

    PubMed

    Craig, Cathy; Pepping, Gert-Jan; Grealy, Madeleine

    2005-09-01

    Moving to a rhythm necessitates precise timing between the movement of the chosen limb and the timing imposed by the beats. However, the temporal information specifying the moment when a beat will sound (the moment onto which one must synchronise one's movement) is not continuously provided by the acoustic array. Because of this informational void, the actors need some form of prospective information that will allow them to act sufficiently ahead of time in order to get their hand in the right place at the right time. In this acoustic interception study, where participants were asked to move between two targets in such a way that they arrived and stopped in the target zone at the same time as a beat sounded, we tested a model derived from tau-coupling theory (Lee DN (1998) Ecol Psychol 10:221-250). This model attempts to explain the form of a potential timing guide that specifies the duration of the inter-beat intervals and also describes how this informational guide can be used in the timing and guidance of movements. The results of our first experiment show that, for inter-beat intervals of less than 3 s, a large proportion of the movement (over 70%) can be explained by the proposed model. However, a second experiment, which augments the time between beats so that it surpasses 3 s, shows a marked decline in the percentage of information/movement coupling. A close analysis of the movement kinematics indicates a lack of control and anticipation in the participants' movements. The implications of these findings, in light of other research studies, are discussed.
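
    A minimal numerical illustration of tau-coupling onto an intrinsic tau-guide is given below; it follows the general form of Lee's tau-guide, but the coupling constant, gap size, and beat interval are illustrative choices, not values from this study.

      import numpy as np

      T = 1.0      # inter-beat interval (s): the time by which the gap must close
      k = 0.4      # coupling constant (k < 1 gives a soft, decelerating arrival)
      x0 = 0.30    # initial hand-target gap (m)

      t = np.linspace(1e-3, T, 500)
      tau_guide = 0.5 * (t - T**2 / t)                 # intrinsic tau-guide (Lee, 1998)
      # Coupling tau_x = k * tau_guide has the closed-form gap solution below.
      gap = x0 * ((T**2 - t**2) / (T**2 - t[0]**2)) ** (1.0 / k)

      print(f"gap at t = T: {gap[-1]:.4f} m (closes exactly at the beat)")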

  8. Mathematical modeling of antibody drug conjugates with the target and tubulin dynamics to predict AUC.

    PubMed

    Byun, Jong Hyuk; Jung, Il Hyo

    2018-04-14

    Antibody drug conjugates (ADCs) are one of the most recently developed chemotherapeutics to treat some types of tumor cells. They consist of monoclonal antibodies (mAbs), linkers, and potent cytotoxic drugs. Unlike common chemotherapies, ADCs combine selectively with a target at the surface of the tumor cell, and a potent cytotoxic drug (payload) effectively prevents microtubule polymerization. In this work, we construct an ADC model that considers both the target of the antibodies and the receptor (tubulin) of the cytotoxic payloads. The model is simulated with brentuximab vedotin, one of the ADCs, and used to investigate the pharmacokinetic (PK) characteristics of ADCs in vivo. It also predicts the area under the curve (AUC) of ADCs and the payloads by identifying the half-life. The results show that the simulated dynamics agree well with the observed data, reproduce the half-life, and capture the AUC. Thus, the model can be used for estimating parameters, fitting experimental observations, predicting AUC, and exploring various dynamical behaviors of the target and the receptor. Copyright © 2018 Elsevier Ltd. All rights reserved.
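
    The sketch below is a toy stand-in for this kind of model, assuming a one-compartment ADC with reversible target binding and first-order elimination; AUC is obtained by trapezoidal integration, and all parameter values are invented, so it should not be read as the authors' brentuximab vedotin model.

      import numpy as np
      from scipy.integrate import odeint

      def adc_pk(y, t, k_el, k_on, k_off, target_tot):
          """Toy one-compartment ADC model with reversible target binding.
          y = [free ADC, ADC-target complex]; concentrations in nM, time in h."""
          adc, cplx = y
          free_target = target_tot - cplx
          bind = k_on * adc * free_target - k_off * cplx
          return [-k_el * adc - bind, bind]

      t = np.linspace(0, 336, 1000)                      # two weeks
      y0 = [100.0, 0.0]                                  # 100 nM dose, no complex
      params = (0.02, 0.001, 0.01, 50.0)                 # k_el, k_on, k_off, total target (invented)
      sol = odeint(adc_pk, y0, t, args=params)

      auc = np.trapz(sol[:, 0], t)                       # AUC of free ADC (nM*h)
      half_life = np.log(2) / params[0]                  # terminal half-life of the toy model
      print(f"AUC = {auc:.1f} nM*h, elimination half-life = {half_life:.1f} h")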

  9. Discovery of Transcriptional Targets Regulated by Nuclear Receptors Using a Probabilistic Graphical Model

    PubMed Central

    Lee, Mikyung; Huang, Ruili; Tong, Weida

    2016-01-01

    Nuclear receptors (NRs) are ligand-activated transcriptional regulators that play vital roles in key biological processes such as growth, differentiation, metabolism, reproduction, and morphogenesis. Disruption of NRs can result in adverse health effects such as NR-mediated endocrine disruption. A comprehensive understanding of core transcriptional targets regulated by NRs helps to elucidate their key biological processes in both toxicological and therapeutic aspects. In this study, we applied a probabilistic graphical model to identify the transcriptional targets of NRs and the biological processes they govern. The Tox21 program profiled a collection of approximately 10,000 environmental chemicals and drugs against a panel of human NRs in a quantitative high-throughput screening format for their NR disruption potential. The Japanese Toxicogenomics Project, one of the most comprehensive efforts in the field of toxicogenomics, generated large-scale gene expression profiles on the effects of 131 compounds (in its first phase of study) at various doses and durations, and their combinations. We applied an author-topic model to these two toxicological datasets, which consist of 11 NRs run in agonist and/or antagonist mode (18 assays in total) and 203 in vitro human gene expression profiles connected by 52 shared drugs. As a result, a set of clusters (topics) was determined, each consisting of a set of NRs and their associated target genes. Various transcriptional targets of the NRs were identified by assays run in either agonist or antagonist mode. Our results were validated by functional analysis and compared with TRANSFAC data. In summary, our approach effectively identified associated/affected NRs and their target genes, providing biologically meaningful hypotheses about their relationships. PMID:26643261
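
    The fragment below sketches how an author-topic model can be applied to this kind of paired data, treating each NR assay as an "author" and the genes perturbed by a drug as the "words" of a document; it uses gensim's AuthorTopicModel with an invented toy corpus, not the Tox21/TGP data or the authors' pipeline.

      """Toy analogue of the author-topic analysis: NR assays act as 'authors',
      gene signatures of drug treatments act as documents.  All data below are
      invented placeholders."""
      from gensim.corpora import Dictionary
      from gensim.models import AuthorTopicModel

      # Each "document" is the gene signature of one drug treatment (illustrative).
      docs = [["CYP1A1", "CYP1B1", "AHRR"],
              ["ABCB1", "CYP3A4", "UGT1A1"],
              ["CYP1A1", "AHRR", "TIPARP"],
              ["CYP3A4", "ABCG2", "UGT1A1"]]
      # Map NR assays (the "authors") to the documents (drugs) they are linked to.
      author2doc = {"AhR_agonist": [0, 2], "PXR_agonist": [1, 3]}

      dictionary = Dictionary(docs)
      corpus = [dictionary.doc2bow(d) for d in docs]
      model = AuthorTopicModel(corpus=corpus, num_topics=2, id2word=dictionary,
                               author2doc=author2doc, passes=50, random_state=1)

      for nr in author2doc:
          print(nr, model.get_author_topics(nr))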

  10. X-ray burst studies with the JENSA gas jet target

    NASA Astrophysics Data System (ADS)

    Schmidt, Konrad; Chipps, Kelly A.; Ahn, Sunghoon; Allen, Jacob M.; Ayoub, Sara; Bardayan, Daniel W.; Blackmon, Jeffrey C.; Blankstein, Drew; Browne, Justin; Cha, Soomi; Chae, Kyung Yuk; Cizewski, Jolie; Deibel, Catherine M.; Deleeuw, Eric; Gomez, Orlando; Greife, Uwe; Hager, Ulrike; Hall, Matthew R.; Jones, Katherine L.; Kontos, Antonios; Kozub, Raymond L.; Lee, Eunji; Lepailleur, Alex; Linhardt, Laura E.; Matos, Milan; Meisel, Zach; Montes, Fernando; O'Malley, Patrick D.; Ong, Wei Jia; Pain, Steven D.; Sachs, Alison; Schatz, Hendrik; Schmitt, Kyle T.; Smith, Karl; Smith, Michael S.; Soares de Bem, Natã F.; Thompson, Paul J.; Toomey, Rebecca; Walter, David

    2018-01-01

    When a neutron star accretes hydrogen and helium from the outer layers of its companion star, thermonuclear burning enables the αp-process as a break-out mechanism from the hot CNO cycle. Model calculations predict that (α,p) reaction rates significantly affect both the light curves and the elemental abundances in the burst ashes. The Jet Experiments in Nuclear Structure and Astrophysics (JENSA) gas jet target enables the direct measurement of previously inaccessible (α,p) reactions with radioactive beams provided by the rare isotope re-accelerator ReA3 at the National Superconducting Cyclotron Laboratory (NSCL), USA. JENSA will be the main target for the Recoil Separator for Capture Reactions (SECAR) at the Facility for Rare Isotope Beams (FRIB). Commissioning of JENSA and first experiments at Oak Ridge National Laboratory (ORNL) showed a highly localized, pure gas target with a density of ~10^19 atoms per square centimeter. Preliminary results are presented from the first direct cross-section measurement of the 34Ar(α,p)37K reaction at NSCL.

  11. Systems-level modeling of mycobacterial metabolism for the identification of new (multi-)drug targets.

    PubMed

    Rienksma, Rienk A; Suarez-Diez, Maria; Spina, Lucie; Schaap, Peter J; Martins dos Santos, Vitor A P

    2014-12-01

    Systems-level metabolic network reconstructions and the derived constraint-based (CB) mathematical models are efficient tools to explore bacterial metabolism. Approximately one-fourth of the Mycobacterium tuberculosis (Mtb) genome contains genes that encode proteins directly involved in its metabolism. These represent potential drug targets that can be systematically probed with CB models through the prediction of genes (or combinations of genes) essential for the pathogen to grow. However, gene essentiality depends on the growth conditions and, so far, no in vitro model precisely mimics the host at the different stages of mycobacterial infection, limiting model predictions. These limitations can be circumvented by combining expression data from in vivo samples with a validated CB model, creating an accurate description of pathogen metabolism in the host. To this end, we present here a thoroughly curated and extended genome-scale CB metabolic model of Mtb quantitatively validated using 13C measurements. We describe some of the efforts made in integrating CB models and high-throughput data to generate condition-specific models, and we discuss challenges ahead. This knowledge and the framework herein presented will enable the identification of potential new drug targets and will foster the development of optimal therapeutic strategies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
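
    A minimal flux-balance sketch of the underlying constraint-based idea is shown below: maximize the biomass flux subject to steady-state mass balance (S v = 0) and flux bounds, and simulate a knockout by forcing the corresponding flux to zero. The toy network and numbers are ours, not the Mtb reconstruction or the authors' pipeline.

      import numpy as np
      from scipy.optimize import linprog

      # Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
      # R0: uptake -> A, R1: A -> B, R2: A -> B (isoenzyme route), R3: B -> biomass
      S = np.array([[ 1, -1, -1,  0],
                    [ 0,  1,  1, -1]], dtype=float)

      def max_biomass(knockouts=()):
          """Maximize flux through the biomass reaction (R3) subject to S v = 0."""
          bounds = [(0, 10)] * 4
          for r in knockouts:
              bounds[r] = (0, 0)                       # knockout: force flux to zero
          c = np.zeros(4); c[3] = -1.0                 # linprog minimizes, so negate
          res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
          return -res.fun

      print("wild type biomass flux:", max_biomass())
      print("knock out R1 only     :", max_biomass([1]))      # isoenzyme rescues growth
      print("knock out R1 and R2   :", max_biomass([1, 2]))   # synthetic lethal pair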

  12. Molecular Composition Analysis of Distant Targets

    NASA Technical Reports Server (NTRS)

    Hughes, Gary B.; Lubin, Philip

    2017-01-01

    This document is the Final Report for NASA Innovative Advanced Concepts (NIAC) Phase I Grant 15-NIAC16A-0145, titled Molecular Composition Analysis of Distant Targets. The research was focused on developing a system concept for probing the molecular composition of cold solar system targets, such as Asteroids, Comets, Planets and Moons, from a distant vantage, for example from a spacecraft that is orbiting the target (Hughes et al., 2015). The orbiting spacecraft is equipped with a high-power laser, which is powered by electricity from photovoltaic panels. The laser is directed at a spot on the target. Materials on the surface of the target are heated by the laser beam, and begin to melt and then evaporate, forming a plume of asteroid molecules in front of the heated spot. The heated spot glows, producing blackbody illumination that is visible from the spacecraft, via a path through the evaporated plume. As the blackbody radiation from the heated spot passes through the plume of evaporated material, molecules in the plume absorb radiation in a manner that is specific to the rotational and vibrational characteristics of the specific molecules. A spectrometer aboard the spacecraft is used to observe absorption lines in the blackbody signal. The pattern of absorption can be used to estimate the molecular composition of materials in the plume, which originated on the target. Focusing on a single spot produces a borehole, and shallow subsurface profiling of the target's bulk composition is possible. At the beginning of the Phase I research, the estimated Technology Readiness Level (TRL) of the system was TRL-1. During the Phase I research, an end-to-end theoretical model of the sensor system was developed from first principles. The model includes laser energy and optical propagation, target heating, melting and evaporation of target material, plume density, thermal radiation from the heated spot, molecular cross section of likely asteroid materials, and estimation of the
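
    The sensing principle can be sketched numerically as a Planck blackbody continuum from the heated spot attenuated by Beer-Lambert absorption in the plume; the temperature, line position, cross section, and column density below are invented placeholders, not values from the study.

      import numpy as np

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

      def planck(wavelength_m, temp_k):
          """Spectral radiance of a blackbody (W * sr^-1 * m^-3)."""
          a = 2.0 * H * C**2 / wavelength_m**5
          return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

      wavelengths = np.linspace(1.0e-6, 5.0e-6, 2000)        # 1-5 um window
      continuum = planck(wavelengths, temp_k=2500.0)         # assumed heated-spot temperature

      # Beer-Lambert absorption by one plume species: a Gaussian line at 2.7 um
      # (illustrative; real cross sections would come from a spectroscopic database).
      center, width = 2.7e-6, 0.02e-6
      cross_section = 1e-22 * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)  # m^2
      column_density = 1e21                                   # molecules / m^2 along the path
      observed = continuum * np.exp(-cross_section * column_density)

      depth = 1.0 - observed.min() / planck(center, 2500.0)
      print(f"maximum fractional absorption depth: {depth:.2f}")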

  13. TIde: a software for the systematic scanning of drug targets in kinetic network models

    PubMed Central

    Schulz, Marvin; Bakker, Barbara M; Klipp, Edda

    2009-01-01

    Background During the development of a potent drug candidate, compounds can fail for several reasons. One of them, the efficacy of a candidate, can be estimated in silico if an appropriate ordinary differential equation model of the affected pathway is available. With such a model at hand it is also possible to detect reactions having a large effect on a certain variable such as a substance concentration. Results We show an algorithm that systematically tests the influence of activators and inhibitors of different type and strength acting at different positions in the network. The effect on a quantity to be selected (e.g. a steady state flux or concentration) is calculated. Moreover, combinations of two inhibitors or of one inhibitor and one activator targeting different network positions are analysed. Furthermore, we present TIde (Target Identification), an open source, platform independent tool to investigate ordinary differential equation models in the common systems biology markup language format. It automatically assigns the respectively altered kinetics to the inhibited or activated reactions, performs the necessary calculations, and provides a graphical output of the analysis results. For illustration, TIde is used to detect optimal inhibitor positions in simple branched networks, a signalling pathway, and a well studied model of glycolysis in Trypanosoma brucei. Conclusion Using TIde, we show in the branched models under which conditions inhibitions in one pathway can affect molecule concentrations in a different one. In the signalling pathway we illuminate which inhibitions have an effect on the signalling characteristics of the last active kinase. Finally, we compare our set of best targets in the glycolysis model with a similar analysis, showing the applicability of our tool. PMID:19840374
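
    The snippet below imitates the kind of scan TIde performs, on a toy branched network rather than via TIde itself: each reaction is inhibited in turn and the change in a selected steady-state concentration is reported (all kinetics and numbers are invented).

      import numpy as np
      from scipy.integrate import odeint

      # Toy branched network: source -> A, A -> B (k1), A -> C (k2), B and C degrade.
      def branched(y, t, k1, k2, kdeg, inh):
          a, b, c = y
          v1 = (1 - inh[0]) * k1 * a        # branch 1, possibly inhibited
          v2 = (1 - inh[1]) * k2 * a        # branch 2, possibly inhibited
          return [1.0 - v1 - v2, v1 - kdeg * b, v2 - kdeg * c]

      def steady_state_B(inh):
          y = odeint(branched, [0.0, 0.0, 0.0], np.linspace(0, 200, 2000),
                     args=(1.0, 1.0, 0.5, inh))
          return y[-1, 1]                   # steady-state concentration of B

      ref = steady_state_B([0.0, 0.0])
      for target, label in ((0, "branch A->B"), (1, "branch A->C")):
          inh = [0.0, 0.0]; inh[target] = 0.8            # strong inhibitor (80 %)
          change = 100 * (steady_state_B(inh) / ref - 1)
          print(f"inhibiting {label}: steady-state [B] changes by {change:+.1f}%")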

  14. Establishment of a biophysical model to optimize endoscopic targeting of magnetic nanoparticles for cancer treatment.

    PubMed

    Roeth, Anjali A; Slabu, Ioana; Baumann, Martin; Alizai, Patrick H; Schmeding, Maximilian; Guentherodt, Gernot; Schmitz-Rode, Thomas; Neumann, Ulf P

    2017-01-01

    Superparamagnetic iron oxide nanoparticles (SPION) may be used for local tumor treatment by coupling them to a drug and accumulating them locally with magnetic field traps, that is, a combination of permanent magnets and coils. Thereafter, an alternating magnetic field generates heat which may be used to release the thermosensitively bound drug and for hyperthermia. Until today, only superficial tumors can be treated with this method. Our aim was to transfer this method into an endoscopic setting to also reach the majority of tumors located inside the body. To find the ideal endoscopic magnetic field trap, which accumulates the most SPION, we first developed a biophysical model considering anatomical as well as physical conditions. Entities of choice were esophageal and prostate cancer. The magnetic susceptibilities of different porcine and rat tissues were measured with a superconducting quantum interference device. All tissues showed diamagnetic behavior. The evaluation of clinical data (computed tomography scan, endosonography, surgical reports, pathological evaluation) of patients gave insight into the topographical relationship between the tumor and its surroundings. Both were used to establish the biophysical model of the tumors and their surroundings, closely mirroring the clinical situation, in which we could virtually design, place and evaluate different electromagnetic coil configurations to find optimized magnetic field traps for each tumor entity. By simulation, we could show that the efficiency of the magnetic field traps can be enhanced by 38-fold for prostate and 8-fold for esophageal cancer. Therefore, our approach of endoscopic targeting is an improvement of the magnetic drug-targeting setups for SPION tumor therapy as it holds the possibility of reaching tumors inside the body in a minimal-invasive way. Future animal experiments must prove these findings in vivo.
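
    For orientation, the sketch below evaluates the textbook force on a small magnetizable particle, F ≈ (χV/μ0)∇(B²/2), along the axis of a single circular coil; the coil geometry, current, and particle properties are illustrative assumptions and do not represent the trap configurations simulated in the study.

      import numpy as np

      MU0 = 4e-7 * np.pi          # vacuum permeability (T*m/A)

      def B_axis(z, I, R, n_turns):
          """Axial field of a circular coil of radius R at distance z from its centre."""
          return MU0 * n_turns * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

      def force_on_particle(z, I, R, n_turns, chi, volume, dz=1e-5):
          """F_z = (chi * V / mu0) * d/dz (B^2 / 2), via a central finite difference."""
          b_plus, b_minus = B_axis(z + dz, I, R, n_turns), B_axis(z - dz, I, R, n_turns)
          grad_b2_half = (b_plus**2 - b_minus**2) / (4.0 * dz)
          return chi * volume / MU0 * grad_b2_half

      # Illustrative numbers: 100 nm SPION-loaded particle near a small endoscopic coil.
      radius_particle = 50e-9
      volume = 4.0 / 3.0 * np.pi * radius_particle**3
      fz = force_on_particle(z=5e-3, I=1.0, R=2e-3, n_turns=200, chi=1.0, volume=volume)
      print(f"axial magnetic force at 5 mm: {fz:.3e} N (negative = pulled toward the coil)")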

  15. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  16. Competitive hybridization models

    NASA Astrophysics Data System (ADS)

    Cherepinsky, Vera; Hashmi, Ghazala; Mishra, Bud

    2010-11-01

    Microarray technology, in its simplest form, allows one to gather abundance data for target DNA molecules, associated with genomes or gene-expressions, and relies on hybridizing the target to many short probe oligonucleotides arrayed on a surface. While for such multiplexed reactions conditions are optimized to make the most of each individual probe-target interaction, subsequent analysis of these experiments is based on the implicit assumption that a given experiment yields the same result regardless of whether it was conducted in isolation or in parallel with many others. It has been discussed in the literature that this assumption is frequently false, and its validity depends on the types of probes and their interactions with each other. We present a detailed physical model of hybridization as a means of understanding probe interactions in a multiplexed reaction. Ultimately, the model can be derived from a system of ordinary differential equations (ODEs) describing kinetic mass action, with conservation-of-mass equations completing the system. We examine pairwise probe interactions in detail and present a model of “competition” between the probes for the target, especially when the target is effectively in short supply. These effects are shown to be predictable from the affinity constants for each of the four probe sequences involved, namely, the match and mismatch sequences for both probes. These affinity constants are calculated from thermodynamic parameters such as the free energy of hybridization, which are in turn computed according to the nearest neighbor (NN) model for each probe and target sequence. Simulations based on the competitive hybridization model explain the observed variability in the signal of a given probe when measured in parallel with different groupings of other probes or individually. The results of the simulations can be used for experiment design and pooling strategies, based on which probes have been shown to have a strong
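
    A reduced two-probe version of the competition idea can be sketched as follows: affinity constants are taken as K = exp(−ΔG/RT) and the shared free-target concentration is solved from the mass-action and conservation equations; the free energies and concentrations are invented, and the full model with mismatch probes and kinetics is not reproduced here.

      import numpy as np
      from scipy.optimize import brentq

      R = 1.987e-3          # gas constant, kcal / (mol*K)
      T_KELVIN = 318.15     # 45 C hybridization temperature (assumed)

      def affinity(delta_g_kcal):
          """Equilibrium constant from hybridization free energy, K = exp(-dG/RT)."""
          return np.exp(-delta_g_kcal / (R * T_KELVIN))

      # Invented free energies (kcal/mol) for two match probes competing for one target.
      K1, K2 = affinity(-18.0), affinity(-16.5)
      P1_tot = P2_tot = 1e-9          # probe concentrations (M)
      T_tot = 5e-10                   # target effectively in short supply (M)

      def residual(t_free):
          """Conservation of target: free + bound to each probe must equal the total."""
          bound = sum(p * k * t_free / (1 + k * t_free)
                      for p, k in ((P1_tot, K1), (P2_tot, K2)))
          return t_free + bound - T_tot

      t_free = brentq(residual, 0.0, T_tot)
      for name, p, k in (("probe 1", P1_tot, K1), ("probe 2", P2_tot, K2)):
          signal = p * k * t_free / (1 + k * t_free)
          print(f"{name}: bound target = {signal:.2e} M")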

  17. Impact cratering experiments in brittle targets with variable thickness: Implications for deep pit craters on Mars

    NASA Astrophysics Data System (ADS)

    Michikami, T.; Hagermann, A.; Miyamoto, H.; Miura, S.; Haruyama, J.; Lykawka, P. S.

    2014-06-01

    High-resolution images reveal that numerous pit craters exist on the surface of Mars. For some pit craters, the depth-to-diameter ratios are much greater than for ordinary craters. Such deep pit craters are generally considered to be the result of material drainage into a subsurface void space, which might be formed by a lava tube, dike injection, extensional fracturing, or dilational normal faulting. Morphological studies indicate that the formation of a pit crater might be triggered by an impact event and followed by collapse of the ceiling. To test this hypothesis, we carried out laboratory experiments of impact cratering into brittle targets with variable roof thickness. In particular, the effect of the target thickness on crater formation is studied to understand the penetration process of an impact. For this purpose, we produced mortar targets with roof thicknesses of 1-6 cm and a bulk density of 1550 kg/m^3, using a mixture of cement, water and sand (0.2 mm) in the ratio of 1:1:10 by weight. The compressive strength of the resulting targets is 3.2±0.9 MPa. A spherical nylon projectile (diameter 7 mm) is shot perpendicularly into the target surface at a nominal velocity of 1.2 km/s, using a two-stage light-gas gun. Craters are formed on the opposite side of the impact even when no target penetration occurs. Penetration of the target is achieved when craters on the opposite sides of the target connect with each other. In this case, the cross-section of the crater attains a flattened, hourglass-like shape. We also find that the crater diameter on the opposite side is larger than that on the impact side, and more fragments are ejected from the crater on the opposite side than from the crater on the impact side. This result gives a qualitative explanation for the observation that the Martian deep pit craters lack a raised rim and instead have ejecta deposits on their floor. Craters are formed on the opposite impact side even when no penetration

  18. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    NASA Astrophysics Data System (ADS)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. How to recognize ships with fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated and GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
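
    The sketch below shows the two steps in miniature, using OpenCV Hu moments and scikit-learn Gaussian mixtures; the class names, component counts, log-scaling of the moments, and random placeholder images are our assumptions rather than details from the paper.

      """Sketch of a GMM-over-Hu-moments recognition scheme.  Data loading is
      left out; the images below are random placeholders."""
      import cv2
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def hu_features(binary_image):
          """Seven Hu moments, log-scaled for numerical stability."""
          hu = cv2.HuMoments(cv2.moments(binary_image)).flatten()
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

      def train_class_models(images_by_class, n_components=3):
          models = {}
          for label, images in images_by_class.items():
              X = np.array([hu_features(img) for img in images])
              models[label] = GaussianMixture(n_components, covariance_type="diag",
                                              random_state=0).fit(X)
          return models

      def recognize(image, models):
          x = hu_features(image).reshape(1, -1)
          return max(models, key=lambda label: models[label].score(x))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          fake = {c: [(rng.random((64, 64)) > 0.5).astype(np.uint8) * 255
                      for _ in range(20)] for c in ("cargo", "tanker")}
          models = train_class_models(fake)
          print("predicted class:", recognize(fake["cargo"][0], models))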

  19. Design of ligand-targeted nanoparticles for enhanced cancer targeting

    NASA Astrophysics Data System (ADS)

    Stefanick, Jared F.

    Ligand-targeted nanoparticles are increasingly used as drug delivery vehicles for cancer therapy, yet have not consistently produced successful clinical outcomes. Although these inconsistencies may arise from differences in disease models and target receptors, nanoparticle design parameters can significantly influence therapeutic efficacy. By employing a multifaceted synthetic strategy to prepare peptide-targeted nanoparticles with high purity, reproducibility, and precisely controlled stoichiometry of functionalities, this work evaluates the roles of polyethylene glycol (PEG) coating, ethylene glycol (EG) peptide-linker length, peptide hydrophilicity, peptide density, and nanoparticle size on tumor targeting in a systematic manner. These parameters were analyzed in multiple disease models by targeting human epidermal growth factor receptor 2 (HER2) in breast cancer and very late antigen-4 (VLA-4) in multiple myeloma to demonstrate the widespread applicability of this approach. By increasing the hydrophilicity of the targeting peptide sequence and simultaneously optimizing the EG peptide-linker length, the in vitro cellular uptake of targeted liposomes was significantly enhanced. Specifically, including a short oligolysine chain adjacent to the targeting peptide sequence effectively increased cellular uptake ~80-fold using an EG6 peptide-linker compared to ~10-fold using an EG45 linker. In vivo, targeted liposomes prepared in a traditional manner lacking the oligolysine chain demonstrated similar biodistribution and tumor uptake to non-targeted liposomes. However, by including the oligolysine chain, targeted liposomes using an EG45 linker significantly improved tumor uptake ~8-fold over non-targeted liposomes, while the use of an EG6 linker decreased tumor accumulation and uptake, owing to differences in cellular uptake kinetics, clearance mechanisms, and binding site barrier effects. To further improve tumor targeting and enhance the selectivity of targeted

  20. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.