Modeling Nucleon Generalized Parton Distributions
Radyushkin, Anatoly V.
2013-05-01
We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD + D-term" construction should be amended by an extra term, $\xi E^1_+(x,\xi)$, built from the $\alpha/\beta$ moment of the DD $e(\beta,\alpha)$ that generates GPD $E(x,\xi)$. Unlike the D-term, this function has support in the whole $-1 < x < 1$ region and in general does not vanish at the border points $|x| = \xi$.
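Schematically, reconstructing the notation from the abstract alone (normalization conventions are an assumption here), the extra term is the DD image of the $\alpha/\beta$ moment of $e(\beta,\alpha)$:

```latex
E^1_+(x,\xi) \;=\; \int d\beta\, d\alpha\;
  \delta(x-\beta-\alpha\xi)\,\frac{\alpha}{\beta}\, e(\beta,\alpha),
\qquad
H(x,\xi)\;\longrightarrow\; H_{\rm DD}(x,\xi) \;+\; D\text{-term} \;+\; \xi\,E^1_+(x,\xi).
```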
Parton branching in the color mutation model
NASA Astrophysics Data System (ADS)
Hwa, Rudolph C.; Wu, Yuanfang
1999-11-01
The soft production problem in hadronic collisions as described in the eikonal color mutation branching model is improved in the way that the initial parton distribution is treated. Furry branching of the partons is considered as a means of describing the nonperturbative process of parton reproduction in the soft interaction. The values of all the moments,
QCD parton model at collider energies
Ellis, R.K.
1984-09-01
Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at $\sqrt{s} = 0.54$ TeV are compared with data.
Evolution and models for skewed parton distribution
Musatov, I.C.; Radyushkin, A.V.
1999-05-17
The authors discuss the structure of the "forward visible" (FV) parts of double and skewed distributions related to usual distributions through reduction relations. They use factorized models for double distributions (DDs) $\tilde f(x,\alpha)$ in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FV parts of SPDs $H(\tilde x,\xi)$ can be obtained by averaging forward parton densities $f(\tilde x - \xi\alpha)$ with the weight $\rho(\alpha)$ coinciding with the profile function of the double distribution $\tilde f(x,\alpha)$ at small $x$. They show that if the $x^n$ moments $\tilde f_n(\alpha)$ of DDs have the asymptotic $(1-\alpha^2)^{n+1}$ profile, then the $\alpha$-profile of $\tilde f(x,\alpha)$ for small $x$ is completely determined by the small-$x$ behavior of the usual parton distribution. They demonstrate that, for small $\xi$, the model with asymptotic profiles for $\tilde f_n(\alpha)$ is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on $\xi$. They perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions.
Modeling the Pion Generalized Parton Distribution
NASA Astrophysics Data System (ADS)
Mezrag, C.
2016-02-01
We compute the pion Generalized Parton Distribution (GPD) in a valence dressed-quark approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then the GPD is reconstructed from its Mellin moments using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.
Evolution and models for skewed parton distributions
Musatov, I. V.; Radyushkin, A. V.
2000-04-01
We discuss the structure of the "forward visible" (FV) parts of double and skewed distributions related to the usual distributions through reduction relations. We use factorized models for double distributions (DDs) $\tilde f(x,\alpha)$ in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FV parts of SPDs $H(\tilde x,\xi)$ can be obtained by averaging forward parton densities $f(\tilde x - \xi\alpha)$ with the weight $\rho(\alpha)$ coinciding with the profile function of the double distribution $\tilde f(x,\alpha)$ at small $x$. We show that if the $x^n$ moments $\tilde f_n(\alpha)$ of DDs have the asymptotic $(1-\alpha^2)^{n+1}$ profile, then the $\alpha$ profile of $\tilde f(x,\alpha)$ for small $x$ is completely determined by the small-$x$ behavior of the usual parton distribution. We demonstrate that, for small $\xi$, the model with asymptotic profiles for $\tilde f_n(\alpha)$ is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on $\xi$. We perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions. (c) 2000 The American Physical Society.
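The small-skewedness construction described in this abstract amounts to a one-dimensional smearing formula; as a sketch in the abstract's own notation:

```latex
H(\tilde x,\xi)\Big|_{\rm FV} \;\approx\; \int_{-1}^{1} d\alpha\;
  \rho(\alpha)\, f(\tilde x - \xi\alpha),
\qquad
\rho^{\rm as}_{n}(\alpha)\;\propto\;(1-\alpha^2)^{\,n+1}.
```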
The Polarized TMDs in the covariant parton model approach
A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada
2011-05-01
We derive relations between polarized transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDF $g_1^q(x)$ as input we predict the $x$- and $\mathbf{p}_T$-dependence of all polarized twist-2 naively time-reversal even (T-even) TMDs.
New model for nucleon generalized parton distributions
Radyushkin, Anatoly V.
2014-01-01
We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD + D-term construction should be amended by an extra term, $\xi E^1_+(x,\xi)$, which has the DD structure $\alpha/\beta\, e(\beta,\alpha)$, with $e(\beta,\alpha)$ being the DD that generates GPD $E(x,\xi)$. We find that this function, unlike the D-term, has support in the whole $-1 \leq x \leq 1$ region. Furthermore, it does not vanish at the border points $|x| = \xi$.
New parton structure functions and minijets in the two-component dual parton model
Bopp, F.W.; Pertermann, D.; Engel, R.; Ranft, J.
1994-04-01
We use new fits to parton structure functions, including structure functions with Lipatov behavior at small $x$ values, and discuss the minijet component in the two-component dual parton model with a supercritical Pomeron, as demanded by the fits to cross-section data. We find that a consistent model can only be formulated with a $p_\perp$ cutoff for the minijets increasing with energy. The implications for particle production in hadronic collisions at TeV energies are discussed.
A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada
2011-03-01
We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.
Backward dilepton production in color dipole and parton models
Gay Ducati, Maria Beatriz; Graeve de Oliveira, Emmanuel
2010-03-01
The Drell-Yan dilepton production at backward rapidities is studied in proton-nucleus collisions at Relativistic Heavy Ion Collider (RHIC) and LHC energies by comparing two different approaches: the $k_T$ factorization at next-to-leading order with intrinsic transverse momentum, and the same process formulated in the target rest frame, i.e., the color dipole approach. Our results are expressed in terms of the ratio between p(d)-A and p-p collisions as a function of transverse momentum and rapidity. Three nuclear parton distribution functions are used: EKS (Eskola, Kolhinen, and Ruuskanen), EPS08, and EPS09; in both approaches, dileptons show sensitivity to nuclear effects, especially regarding the intrinsic transverse momentum. Also, there is room to discriminate between formalisms: the color dipole approach lacks the soft effects introduced by the intrinsic $k_T$. Geometric-scaling GBW (Golec-Biernat and Wusthoff) and BUW (Boer, Utermann, and Wessels) color dipole cross section models, and also a DHJ (Dumitru, Hayashigaki, and Jalilian-Marian) model, which breaks geometric scaling, are used. No change in the ratio between collisions is observed, showing that this observable is not changed by the particular shape of the color dipole cross section. Furthermore, our $k_T$ factorization results are compared with color glass condensate results at forward rapidities: the results agree at RHIC but disagree at the LHC, mainly due to the different behavior of target gluon and quark shadowing.
Projective symmetry of partons in Kitaev's honeycomb model
NASA Astrophysics Data System (ADS)
Mellado, Paula
2015-03-01
Low-energy states of quantum spin liquids are thought to involve partons living in a gauge-field background. We study the spectrum of Majorana fermions of Kitaev's honeycomb model on spherical clusters. The gauge field endows the partons with half-integer orbital angular momenta. As a consequence, the multiplicities reflect not the point-group symmetries of the cluster, but rather its projective symmetries, operations combining physical and gauge transformations. The projective symmetry group of the ground state is the double cover of the point group. We acknowledge Fondecyt under Grant No. 11121397, Conicyt under Grant No. 79112004, and the Simons Foundation (P.M.); the Max Planck Society and the Alexander von Humboldt Foundation (O.P.); and the US DOE Grant No. DE-FG02-08ER46544 (O.T.).
Multiparticle production in a two-component dual parton model
Aurenche, P.; Bopp, F.W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.
1992-01-01
The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data.
Implementing the LPM effect in a parton cascade model
NASA Astrophysics Data System (ADS)
Coleman-Smith, C. E.; Bass, S. A.; Srivastava, D. K.
2011-07-01
Parton Cascade Models (PCM [K. Geiger, B. Muller, Nucl. Phys. B369 (1992) 600-654; S. A. Bass, B. Muller, D. K. Srivastava, Phys. Lett. B551 (2003) 277-283; Z. Xu and C. Greiner, Phys. Rev. C 76, 024911 (2007); D. Molnar and M. Gyulassy, Phys. Rev. C 62, 054907 (2000)]), which describe the full time evolution of a system of quarks and gluons using pQCD interactions, are ideally suited for the description of jet production, including the emission, evolution and energy loss of the full parton shower in a hot and dense QCD medium. The Landau-Pomeranchuk-Migdal (LPM) effect [L. D. Landau, I. Ya. Pomeranchuk, Dokl. Akad. Nauk SSSR 92 (1953); A. B. Migdal, Phys. Rev. 103 (6) (1956) 1811-1820], the quantum interference of parton wave functions due to repeated scatterings against the background medium, is likely the dominant in-medium effect affecting jet suppression. We have implemented a probabilistic treatment of the LPM effect [K. Zapp, J. Stachel, U. A. Wiedemann, Phys. Rev. Lett. 103 (2009) 152302] within the PCM, which can be validated against previously derived analytical calculations by Baier et al. (BDMPS-Z) [R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B478 (1996) 577-597; R. Baier, Y. L. Dokshitzer, S. Peigne, D. Schiff, Phys. Lett. B345 (1995) 277-286; R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B483 (1997) 291-320; B. Zakharov, JETP Lett. 63 (1996) 952-957; B. Zakharov, JETP Lett. 65 (1997) 615-620]. Presented at the 6th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP 2010).
Frederico, T.; Pace, E.; Pasquini, B.; Salme, G.
2010-08-05
Longitudinal and transverse parton distributions for the pion and nucleon are calculated from hadron vertices obtained by a study of form factors within relativistic quark models. The relevance of the one-gluon-exchange dominance at short range for the behavior of the form factors at large momentum transfer and of the parton distributions at the end points is stressed.
Structure functions and parton distributions
Olness, F.; Tung, Wu-Ki
1991-04-01
Activities of the structure functions and parton distributions group are summarized. The impact of the scheme dependence of parton distributions (especially sea quarks and gluons) on the quantitative formulation of the QCD parton model is highlighted. Recent progress on the global analysis of parton distributions is summarized. Issues concerning the proper use of next-to-leading-order parton distributions are stressed.
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution $H^q$ and Double Distributions $F^q$ and $G^q$ in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance, of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.
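For reference, a common form of the two-DD representation underlying $F^q$ and $G^q$ is sketched below; conventions differ between papers, so this is indicative rather than the authors' exact normalization, and the $t$-dependence is suppressed:

```latex
H^q(x,\xi) \;=\; \int_{-1}^{1} d\beta \int_{-1+|\beta|}^{1-|\beta|} d\alpha\;
  \delta(x-\beta-\alpha\xi)\,
  \Bigl[\, F^q(\beta,\alpha) \;+\; \xi\, G^q(\beta,\alpha) \,\Bigr].
```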
Comparing multiparticle production within a two-component dual parton model with collider data
Hahn, K.; Ranft, J. )
1990-03-01
The dual parton model (DPM) is very successful in describing hadronic multiparticle production. The version of the DPM presented includes both soft and hard mechanisms. The hard component is described according to the lowest-order perturbative QCD parton-model cross section. The model is formulated in the form of a Monte Carlo event generator. Results obtained with this event generator are compared with data on inclusive reactions in the TeV energy range of the CERN and Fermilab hadron colliders.
Generalized parton distributions of the pion in chiral quark models and their QCD evolution
Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof
2008-02-01
We evaluate generalized parton distributions of the pion in two chiral quark models: the spectral quark model and the Nambu-Jona-Lasinio model with a Pauli-Villars regularization. We proceed by the evaluation of double distributions through the use of a manifestly covariant calculation based on the $\alpha$ representation of propagators. As a result, polynomiality is incorporated automatically and calculations become simple. In addition, positivity and normalization constraints, sum rules, and soft-pion theorems are fulfilled. We obtain explicit formulas, holding at the low-energy quark-model scale. The expressions exhibit no factorization in the $t$-dependence. The QCD evolution of those parton distributions is carried out to experimentally or lattice accessible scales. We argue for the need of evolution by comparing the parton distribution function and the parton distribution amplitude of the pion to the available experimental and lattice data, and confirm that the quark-model scale is low, about 320 MeV.
Parton branching model for pp¯ collisions
NASA Astrophysics Data System (ADS)
Chan, A. H.; Chew, C. K.
1990-02-01
A detailed analysis of the behavior of the initial numbers of gluons and quarks in the generalized multiplicity distribution (GMD) is presented. Two special cases of the GMD, namely the negative-binomial distribution and the Furry-Yule distribution, are also discussed in relation to the non-single-diffractive data at 200, 546, and 900 GeV c.m.-system energies and pseudorapidity intervals $\eta_c$. The GMD may provide an alternative distribution for understanding parton behavior in future $p\bar p$ collisions at TeV energies.
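The two limiting cases named above can be made concrete with standard textbook parameterizations (assumed here; the GMD's own two-parameter form is not given in the abstract):

```python
from math import comb

def nbd_pmf(n, k, nbar):
    """Negative-binomial multiplicity distribution P(n) with integer
    shape parameter k and mean multiplicity <n> = nbar."""
    p = nbar / (nbar + k)                      # emission probability per step
    return comb(n + k - 1, n) * (1.0 - p) ** k * p ** n

def furry_pmf(n, nbar):
    """Furry (shifted-geometric) distribution for n >= 1 with mean nbar."""
    if n < 1:
        return 0.0
    return (1.0 / nbar) * (1.0 - 1.0 / nbar) ** (n - 1)
```

Both distributions are normalized and reproduce their means exactly, which is the property the GMD interpolation between them relies on.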
Transverse-momentum-dependent parton distributions in a spectator diquark model
F Conti, A Bacchetta, M Radici
2009-09-01
Within the framework of a spectator diquark model of the nucleon, involving both scalar and axial-vector diquarks, we calculate all the leading-twist transverse-momentum-dependent parton distribution functions (TMDs). Naively time-odd densities are generated through a one-gluon-loop rescattering mechanism, simulating the final-state interactions required for these functions to exist. Analytic results are obtained for all the TMDs, and a connection with the light-cone wave-function formalism is also established. The model parameters are fixed by reproducing the phenomenological parametrizations of unpolarized and helicity parton distributions at the lowest available scale. Predictions for the other parton densities are given and, whenever possible, compared with available parametrizations.
Pion transverse momentum dependent parton distributions in the Nambu and Jona-Lasinio model
NASA Astrophysics Data System (ADS)
Noguera, Santiago; Scopetta, Sergio
2015-11-01
An explicit evaluation of the two pion transverse momentum dependent parton distributions at leading twist is presented, in the framework of the Nambu-Jona-Lasinio model with Pauli-Villars regularization. The transverse momentum dependence of the obtained distributions is generated solely by the dynamics of the model. Using these results, the so-called generalized Boer-Mulders shift is studied and compared with recent lattice data. The obtained agreement is very encouraging, in particular because no additional parameter has been introduced. A more conclusive comparison would require precise knowledge of the QCD evolution of the transverse momentum dependent parton distributions under scrutiny.
Engel, R.; Bopp, F.W.; Pertermann, D.; Ranft, J.
1992-12-01
In the framework of a two-component dual parton model we perform a fit to $p\bar p$ total, elastic, inelastic, and single-diffractive cross-section data at collider energies. The fit including diffractive data gives better results using the supercritical soft Pomeron instead of the critical one. Because of the different structure-function parametrizations, the predictions of cross sections at supercollider energies are subject to large uncertainties.
Energy loss in a partonic transport model including bremsstrahlung processes
Fochler, Oliver; Greiner, Carsten; Xu Zhe
2010-08-15
A detailed investigation of the energy loss of gluons that traverse a thermal gluonic medium simulated within the perturbative-QCD-based transport model BAMPS (a Boltzmann approach to multiparton scatterings) is presented in the first part of this work. For simplicity the medium response is neglected in these calculations. The energy loss from purely elastic interactions is compared with the case where radiative processes are consistently included based on the matrix element by Gunion and Bertsch. From this comparison, gluon multiplication processes $gg \to ggg$ are found to be the dominant source of energy loss within the approach employed here. The consequences for the quenching of gluons with high transverse momentum in fully dynamic simulations of Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energy of $\sqrt{s} = 200A$ GeV are discussed in the second major part of this work. The results for central collisions as discussed in a previous publication are revisited, and first results on the nuclear modification factor $R_{AA}$ for noncentral Au+Au collisions are presented. They show a decreased quenching compared to central collisions while retaining the same shape. The investigation of the elliptic flow $v_2$ is extended up to nonthermal transverse momenta of 10 GeV, exhibiting a maximum $v_2$ at roughly 4 to 5 GeV and a subsequent decrease. Finally the sensitivity of the aforementioned results to the specific implementation of the effective modeling of the Landau-Pomeranchuk-Migdal (LPM) effect via a formation-time-based cutoff is explored.
A dynamical picture of hadron-hadron collisions with the string-parton model
Dean, D.J. (Vanderbilt Univ., Nashville, TN, Dept. of Physics and Astronomy); Umar, A.S.; Wu, J.S.; Strayer, M.R.
1991-01-01
We introduce a dynamical model for the description of hadron-hadron collisions at relativistic energies. The model is based on classical Nambu-Goto strings. The string motion is performed in unrestricted four-dimensional space-time. The string endpoints are interpreted as partons which carry energy and momentum. We study $e^+e^-$, e-p, and p-p collisions at various center-of-mass energies. The three basic features of our model are as follows. An ensemble of strings with different endpoint dynamics is used to approximately reproduce the valence quark structure functions. We introduce an adiabatic hadronization mechanism for string breakup via $q\bar q$ pair production. The interaction between strings is formulated in terms of a quark-quark scattering amplitude and exchange. This model will be used to describe relativistic heavy-ion collisions in future work.
Accardi, Alberto; Owens, Jeff F.
2013-07-01
Three new sets of next-to-leading order parton distribution functions (PDFs) are presented, determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep-inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF compared with previous fits. Applications to weak boson production at colliders are also discussed.
Thermalization of parton spectra in the colour-flux-tube model
NASA Astrophysics Data System (ADS)
Ryblewski, Radoslaw
2016-09-01
A detailed study of thermalization of the momentum spectra of partons produced via decays of colour flux tubes due to the Schwinger tunnelling mechanism is presented. The collisions between particles are included in the relaxation-time approximation specified by different values of the shear viscosity to entropy density ratio. At first we show that, to a good approximation, the transverse-momentum spectra of the produced partons are exponential, irrespective of the assumed value of the viscosity of the system and the freeze-out time. This thermal-like behaviour may be attributed to specific properties of the Schwinger tunnelling process. In the next step, in order to check the approach of the system towards genuine local equilibrium, we compare the local slope of the model transverse-momentum spectra with the local slope of the fully equilibrated reference spectra characterized by the effective temperature that reproduces the energy density of the system. We find that the viscosity corresponding to the anti-de Sitter/conformal field theory lower bound is necessary for thermalization of the system within about two fermis.
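The relaxation-time approximation used for the collision term has the schematic form below; the relation tying $\tau_{\rm rel}$ to $\eta/s$ is a commonly used leading-order estimate, assumed here rather than quoted from the paper:

```latex
\bigl(\partial_t + \mathbf{v}\cdot\nabla_x\bigr) f
  \;=\; -\,\frac{f - f_{\rm eq}}{\tau_{\rm rel}},
\qquad
\tau_{\rm rel} \;\sim\; \frac{5}{T}\,\frac{\eta}{s}.
```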
Charge-exchange reactions from the standpoint of the parton model
NASA Astrophysics Data System (ADS)
Nekrasov, M. L.
2015-11-01
Using simple arguments, we show that charge-exchange reactions at high energies go through the hard scattering of fast quarks. On this basis we describe $\pi^- p \to M^0 n$ and $K^- p \to M^0 \Lambda$, $M^0 = \pi^0, \eta, \eta'$, in a combined approach which defines hard contributions in the parton model and soft ones in Regge phenomenology. The disappearance of a dip, according to recent GAMS-$4\pi$ data, in the differential cross section $K^- p \to \eta\Lambda$ at $|t| \approx 0.4$-$0.5$ (GeV/$c$)$^2$ at the transition to relatively high momenta is explained as a manifestation of a change in the mode of summation of hard contributions from coherent to incoherent. Other manifestations of the mentioned mode change are discussed. Constraints on the $\eta$-$\eta'$ mixing and the gluonium admixture in $\eta'$ are obtained.
Charge symmetry at the partonic level
Londergan, J. T.; Peng, J. C.; Thomas, A. W.
2010-07-01
This review article discusses the experimental and theoretical status of partonic charge symmetry. It is shown how the partonic content of various structure functions gets redefined when the assumption of charge symmetry is relaxed. We review various theoretical and phenomenological models for charge symmetry violation in parton distribution functions. We summarize the current experimental upper limits on charge symmetry violation in parton distributions. A series of experiments are presented, which might reveal partonic charge symmetry violation, or alternatively might lower the current upper limits on parton charge symmetry violation.
Access to generalized parton distributions at COMPASS
Nowak, Wolf-Dieter
2015-04-10
A brief experimentalist's introduction to Generalized Parton Distributions (GPDs) is given. Recent COMPASS results are shown on transverse target-spin asymmetries in hard exclusive $\rho^0$ production, and their interpretation in terms of a phenomenological model as an indication for chiral-odd, transverse GPDs is discussed. For deeply virtual Compton scattering, it is briefly outlined how to access GPDs, and projections are shown for future COMPASS measurements.
Are partons confined tachyons?
Noyes, H.P.
1996-03-01
The author notes that if hadrons are gravitationally stabilized "black holes", as discrete physics suggests, it is possible that partons, and in particular quarks, could be modeled as tachyons, i.e., particles having $v^2 > c^2$, without conflict with the observational fact that neither quarks nor tachyons have appeared as "free particles". Some consequences of this model are explored.
Gaining analytic control of parton showers
Bauer, Christian W.; Tackmann, Frank J.
2007-05-14
Parton showers are widely used to generate fully exclusive final states needed to compare theoretical models to experimental observations. While, in general, parton showers give a good description of the experimental data, the precise functional form of the probability distribution underlying the event generation is generally not known. The reason is that realistic parton showers are required to conserve four-momentum at each vertex. In this paper we investigate in detail how four-momentum conservation is enforced in a standard parton shower and why this destroys the analytic control of the probability distribution. We show how to modify a parton shower algorithm such that it conserves four-momentum at each vertex, but for which the full analytic form of the probability distribution is known. We then comment on how this analytic control can be used to match matrix element calculations with parton showers, and to estimate effects of power corrections and other uncertainties in parton showers.
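The analytic-control question above is easiest to see against the baseline Sudakov veto algorithm that showers use to sample emission scales. A minimal sketch with a toy kernel follows; the kernel and its overestimate are hypothetical illustrations, not the authors' modified shower:

```python
import math
import random

def next_scale(t_start, t_cut, f, g, G, G_inv):
    """Sudakov veto algorithm: sample the scale t of the next emission for a
    splitting kernel f(t), given an overestimate g(t) >= f(t) on (t_cut, t_start)
    whose primitive G(t) and its inverse G_inv are known analytically."""
    t = t_start
    while True:
        # Trial scale from the overestimate: solve G(t') = G(t) + log(r), r in (0, 1)
        t = G_inv(G(t) + math.log(1.0 - random.random()))
        if t <= t_cut:
            return None                      # shower terminates: no emission above cutoff
        if random.random() < f(t) / g(t):    # accept the trial scale with probability f/g
            return t

# Toy kernel f(t) = 1/t, overestimated by g(t) = 2/t (hypothetical choices)
f = lambda t: 1.0 / t
g = lambda t: 2.0 / t
G = lambda t: 2.0 * math.log(t)
G_inv = lambda y: math.exp(y / 2.0)
```

Rejected trial scales leave the accepted-scale distribution proportional to $f(t)$ times the Sudakov factor; it is the subsequent momentum-conservation reshuffling, not this sampling step, that spoils the analytic form.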
NASA Astrophysics Data System (ADS)
Tuppan, Sam; Budnik, Garrett; Fox, Jordan
2014-09-01
The Meson Cloud Model (MCM) has proven to be a natural explanation for strangeness in the proton because of meson-baryon splitting into kaon-hyperon pairs. Total strangeness is predicted by integrated splitting functions, which represent the probability that the proton will fluctuate into a given meson-baryon pair. However, the momentum distributions $s(x)$ and $\bar s(x)$ in the proton are determined from convolution integrals that depend on the parton distribution functions (PDFs) used for the mesons and baryons in the MCM. Theoretical calculations of these momentum distributions use many different forms for these PDFs. In our investigation, we calculate PDFs for K, K*, Λ, and Σ from two-body wave functions in a Light Cone Model (LCM) of the hadrons. We use these PDFs in conjunction with the MCM to create a hybrid model and compare our results to other theoretical calculations, experimental data from NuTeV, HERMES, ATLAS, and global parton distribution analyses.
Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; et al
2012-01-01
A large set of cross sections for semi-inclusive electroproduction of charged pions ($\pi^\pm$) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared $W^2 > 4$ GeV$^2$ and range in four-momentum transfer squared $2 < Q^2 < 4$ (GeV/$c$)$^2$, and cover a range in the Bjorken scaling variable $0.2 < x < 0.6$. The fractional energy of the pions spans a range $0.3 < z < 1$, with small transverse momenta with respect to the virtual-photon direction, $P_t^2 < 0.2$ (GeV/$c$)$^2$. The invariant mass that goes undetected, $M_x$ or $W'$, is in the nucleon resonance region, $W' < 2$ GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The $x$, $z$ and $P_t^2$ dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for $\pi^+$ and $\pi^-$) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of $d$ quarks are found to be slightly smaller than for $u$ quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.
NASA Astrophysics Data System (ADS)
Bellm, Johannes; Plätzer, Simon; Richardson, Peter; Siódmok, Andrzej; Webster, Stephen
2016-08-01
We report on the possibility of reweighting parton-shower Monte Carlo predictions for scale variations in the parton-shower algorithm. The method is based on a generalization of the Sudakov veto algorithm. We demonstrate the feasibility of this approach using example physical distributions. Implementations are available for both of the parton-shower modules in the Herwig 7 event generator.
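The Sudakov veto algorithm that underlies such parton-shower reweighting can be illustrated with a toy sketch. This is not the Herwig 7 implementation: the densities f and g below are hypothetical choices, picked only so that the overestimate g has an invertible Sudakov factor.

```python
import math
import random

def f(t):
    """True emission density (toy choice, not a real shower kernel)."""
    return 0.5 / t * math.exp(-0.1 * t)

def g(t):
    """Overestimate g(t) >= f(t), with analytic primitive G(t) = 0.5 ln t."""
    return 0.5 / t

def next_scale(t_start, t_min, rng=random.random):
    """Return the next emission scale below t_start, or None if none occurs."""
    t = t_start
    while True:
        r = rng()
        t = t * r * r          # solves exp(G(t) - G(t_prev)) = r for G = 0.5 ln t
        if t <= t_min:
            return None        # evolution reached the cutoff without emitting
        if rng() < f(t) / g(t):
            return t           # veto step passed: accept this emission scale
```

Scale-variation reweighting keeps the same accept/reject history but multiplies the event weight by ratios of the varied and nominal kernels at each (accepted and vetoed) trial scale.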
Kovalenko, V. N.
2013-10-15
The soft part of proton-proton interaction is considered within a phenomenological model that involves the formation of color strings. Under the assumption that an elementary collision is associated with the interaction of two color dipoles, the total inelastic cross section and the multiplicity of charged particles are estimated in order to fix model parameters. Particular attention is given to modeling of exclusive parton distributions with allowance for the energy-conservation law and for fixing the center of mass, which are necessary for describing correlations. An algorithm that describes the fusion of strings in the transverse plane and which takes into account their finite rapidity width is developed. The influence of string-fusion effects on long-range correlations is found within this mechanism.
PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
NASA Astrophysics Data System (ADS)
Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu
2013-05-01
We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333.) to the new issue of PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are placed randomly on a circle of radius pT. They are now placed on the circumference of an ellipse with semi-major and semi-minor axes of pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0 Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai Program title: PACIAE version 2.1 Journal reference: Catalogue identifier: Licensing provisions: none Programming language: FORTRAN 77 or GFORTRAN Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler Operating system: Linux or Windows with FORTRAN 77 or GFORTRAN compiler RAM: ≈ 1 GB Number of processors used: Supplementary material: Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model Classification: 11.1, 17.8 External routines/libraries: Subprograms used: Catalogue identifier of previous version: aeki_v1_0* Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes* Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on a circle of radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: The px and py components of the hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with
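The circle-to-ellipse change described above amounts to a one-line modification of the azimuthal sampling. A minimal sketch, not PACIAE's FORTRAN interface; the function name and the δp value are illustrative only:

```python
import math
import random

def sample_pxpy(pT, delta_p=0.2, rng=random.random):
    """Place (px, py) on an ellipse with half axes pT(1+δp) and pT(1-δp).

    delta_p = 0 recovers the original PYTHIA circle of radius pT;
    the value 0.2 here is illustrative, not a fitted model parameter.
    """
    phi = 2.0 * math.pi * rng()
    px = pT * (1.0 + delta_p) * math.cos(phi)
    py = pT * (1.0 - delta_p) * math.sin(phi)
    return px, py
```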
Extractions of polarized and unpolarized parton distribution functions
Jimenez-Delgado, Pedro
2014-01-01
An overview of our ongoing extractions of parton distribution functions of the nucleon is given. First JAM results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections and higher-twist contributions to the structure functions.
Nuclear Parton Distribution Functions
I. Schienbein, J.Y. Yu, C. Keppel, J.G. Morfin, F. Olness, J.F. Owens
2009-06-01
We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a χ² analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q²-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.
Parton Distributions Working Group
de Barbaro, L.; Keller, S. A.; Kuhlmann, S.; Schellman, H.; Tung, W.-K.
2000-07-20
This report summarizes the activities of the Parton Distributions Working Group of the QCD and Weak Boson Physics workshop held in preparation for Run II at the Fermilab Tevatron. The main focus of this working group was to investigate the different issues associated with the development of quantitative tools to estimate parton distribution functions uncertainties. In the conclusion, the authors introduce a Manifesto that describes an optimal method for reporting data.
Unraveling hadron structure with generalized parton distributions
Andrei Belitsky; Anatoly Radyushkin
2004-10-01
The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.
From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions
Venugopalan, R.
2010-07-22
We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.
Measurement of parton shower observables with OPAL
NASA Astrophysics Data System (ADS)
Fischer, N.; Gieseke, S.; Kluth, S.; Plätzer, S.; Skands, P.
2016-07-01
A study of QCD coherence is presented based on a sample of about 397,000 e+e- hadronic annihilation events collected at √s = 91 GeV with the OPAL detector at LEP. The study is based on four recently proposed observables that are sensitive to coherence effects in the perturbative regime. The measurement of these observables is presented, along with a comparison with the predictions of different parton shower models. The models include both conventional parton shower models and dipole antenna models. Different ordering variables are used to investigate their influence on the predictions.
Generalized parton distributions of the pion
Broniowski, Wojciech; Arriola, Enrique Ruiz; Golec-Biernat, Krzysztof
2008-08-31
Generalized Parton Distributions of the pion are evaluated in chiral quark models with the help of double distributions. As a result the polynomiality conditions are automatically satisfied. In addition, positivity constraints, proper normalization and support, sum rules, and soft pion theorems are fulfilled. We obtain explicit expressions holding at the low-energy quark-model scale, which exhibit no factorization in the t-dependence. The crucial QCD evolution of the quark-model distributions is carried out up to experimental or lattice scales. The obtained results for the Parton Distribution Function and the Parton Distribution Amplitude describe the available experimental and lattice data, confirming that the quark-model scale is low, around 320 MeV.
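The statement that polynomiality is automatic in the double-distribution approach can be made explicit. Schematically, for the ξ-even quark part (with the D-term set aside):

```latex
H(x,\xi) \;=\; \int_{-1}^{1} d\beta \int_{-1+|\beta|}^{1-|\beta|} d\alpha \;
  \delta(x-\beta-\xi\alpha)\, f(\beta,\alpha),
\qquad
\int_{-1}^{1} dx\, x^{N} H(x,\xi)
  \;=\; \sum_{k=0}^{N} \binom{N}{k}\, \xi^{k}
  \int d\beta\, d\alpha\; \beta^{N-k}\alpha^{k}\, f(\beta,\alpha),
```

so each x^N moment is automatically a polynomial in ξ of degree at most N; the D-term supplies the highest, degree-(N+1), power.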
KOVCHEGOV,Y.V.
2000-04-25
The authors derive an equation determining the small-x evolution of the F_2 structure function of a large nucleus which resums a cascade of gluons in the leading logarithmic approximation using Mueller's color dipole model. In the traditional language it corresponds to resummation of the pomeron fan diagrams, originally conjectured in the GLR equation. The authors show that the solution of the equation describes the physics of structure functions at high partonic densities, thus allowing them to gain some understanding of the most interesting and challenging phenomenon in small-x physics: saturation.
Generalized parton distributions from deep virtual compton scattering at CLAS
Guidal, M.
2010-04-24
Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.
Partonic Transverse Momentum Distributions
Rossi, Patrizia
2010-08-04
In recent years parton distributions have been generalized to account also for transverse degrees of freedom, and new sets of more general distributions, Transverse Momentum Dependent (TMD) parton distributions and fragmentation functions, were introduced. Different experiments worldwide (HERMES, COMPASS, CLAS, JLab-Hall A) have measurements of TMDs in semi-inclusive DIS processes as one of their main research focuses. TMD studies are also an important part of the present and future Drell-Yan experiments at RHIC, J-PARC, and GSI. Studies of TMDs are also one of the main driving forces of the Jefferson Lab (JLab) 12 GeV upgrade project. Progress in phenomenology and theory is flourishing as well. In this talk an overview of the latest developments in studies of TMDs will be given, and newly released results, ongoing activities, as well as planned near-term and future measurements will be discussed.
Prospects For Measurements Of Generalized Parton Distributions At COMPASS
Neyret, Damien
2007-06-13
The concept of Generalized Parton Distributions extends classical parton distributions by giving a '3-dimensional' view of the nucleons, allowing one to study correlations between the parton longitudinal momentum and its transverse position in the nucleon. Measurements of such generalized distributions can be done with the COMPASS experiment, in particular using Deeply Virtual Compton Scattering events. They require modifying the set-up of COMPASS by introducing a recoil proton detector, an additional electromagnetic calorimeter and a new liquid hydrogen target. These upgrades are presently under study, and the first data taking could take place in 2010.
Parton Distributions in the Impact Parameter Space
Matthias Burkardt
2009-08-01
Parton distributions in impact parameter space, which are obtained by Fourier transforming GPDs, exhibit a significant deviation from axial symmetry when the target and/or quark is transversely polarized. In combination with the final state interactions, this transverse deformation provides a natural mechanism for naive-T odd transverse single-spin asymmetries in semi-inclusive DIS. The deformation can also be related to the transverse force acting on the active quark in polarized DIS at higher twist.
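The Fourier transform relating GPDs to impact-parameter densities can be sketched numerically for the axially symmetric, zero-skewness case, where it reduces to a Hankel transform. A toy sketch only: the Gaussian t-dependence below is an assumed ansatz, not a fitted GPD, and all names are invented for the example.

```python
import math
import numpy as np

def bessel_j0(z, m=400):
    """J0 via its integral representation (adequate for moderate |z|)."""
    th = np.linspace(0.0, math.pi, m)
    vals = np.cos(np.outer(np.atleast_1d(z), np.sin(th)))
    h = th[1] - th[0]
    return (h * (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1]))) / math.pi

def H(x, t, a=1.0):
    """Toy zero-skewness GPD: forward density times an exp(a t) factor (t <= 0)."""
    return x ** -0.5 * (1.0 - x) ** 3 * np.exp(a * t)

def q(x, b, dmax=10.0, n=3000):
    """Impact-parameter density q(x,b) = (1/2π) ∫ dΔ Δ J0(bΔ) H(x, -Δ²)."""
    d = np.linspace(0.0, dmax, n)
    y = d * bessel_j0(b * d) * H(x, -d * d)
    h = d[1] - d[0]
    return float(h * (y.sum() - 0.5 * (y[0] + y[-1]))) / (2.0 * math.pi)
```

For the Gaussian t-dependence the transform is known in closed form, q(x,b) = H(x,0) exp(-b²/4a)/(4πa), which provides a direct check of the numerics.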
Double parton scattering: Impact of nonperturbative parton correlations
NASA Astrophysics Data System (ADS)
Ostapchenko, Sergey; Bleicher, Marcus
2016-02-01
We apply the phenomenological Reggeon field theory framework to investigate the relative importance of perturbative and nonperturbative multiparton correlations for the treatment of double parton scattering in proton-proton collisions. We obtain a significant correction to the so-called effective cross section for double parton scattering due to nonperturbative parton splitting. When combined with the corresponding perturbative contribution, this results in a rather weak energy and transverse momentum dependence of the effective cross section, in agreement with experimental observations at the Tevatron and the Large Hadron Collider. In addition, we observe that color fluctuations have a sizable impact on the calculated rate of double parton scattering and on the relative importance of the perturbative parton splitting mechanism.
NASA Astrophysics Data System (ADS)
Yeung, Raymond Yiu-Man
An experiment triggering on single high-p_T π+, π-, K+, K-, p, and p̄ in proton-proton collisions has been performed by the Ames-Bologna-CERN-Dortmund-Heidelberg-Warsaw collaboration using the Split-Field-Magnet detector at the CERN ISR. The parton model based on Quantum Chromodynamics is compared to the observed events. The single-particle inclusive cross sections for centre-of-mass energies up to √S = 62 GeV at trigger polar angles θ ≈ 45° and 90° are accurately predicted by the parton model calculations. A programme using the Monte Carlo method to simulate complete events is subsequently developed. The importance sampling technique is applied to enhance the efficiency of the fixed-angle high-p_T trigger. The simulated events are compared to the data. Events triggered by π+, π-, K+, and K- mesons of p_T > 4 GeV/c and polar angle θ ≈ 45° at the highest ISR energy √S = 62 GeV are used. Detailed analyses of the transverse and longitudinal directions of the trigger jet, in addition to the correlations between the trigger and the spectator jets, show excellent agreement between the parton model predictions and the data. However, a similar study on the away side demonstrates the necessity of higher-order QCD corrections. Encouraging improvements on the away side are observed when the higher-order QCD corrections are implemented in the parton model using the LLA parton shower formalism. Thus, the effects of higher-order QCD corrections are shown to be important in deep inelastic hadronic processes.
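The importance-sampling idea behind a fixed-angle high-p_T trigger can be illustrated with a one-dimensional toy: events with a steeply falling spectrum are drawn from a flatter proposal and carry weights restoring the true distribution, so the rare trigger region is populated efficiently. All shapes and numbers below are illustrative assumptions, not the thesis's event generator.

```python
import math
import random

def estimate_tail(b=1.0, cut=4.0, n=100_000, rng=random.random):
    """Estimate P(pT > cut) for a spectrum ~ b e^{-b pT} by importance sampling.

    Events are drawn from a flatter exponential (slope b/2), so the rare
    high-pT region is well populated; per-event weights restore the truth.
    """
    total = 0.0
    for _ in range(n):
        pt = -2.0 / b * math.log(1.0 - rng())     # proposal: exponential, rate b/2
        weight = 2.0 * math.exp(-0.5 * b * pt)    # (b e^{-b pt}) / ((b/2) e^{-b pt/2})
        if pt > cut:
            total += weight
    return total / n
```

The exact answer for this toy is exp(-b·cut); plain sampling of the true spectrum would put only ~2% of events above the cut, while the proposal puts ~14% there, at the cost of carrying weights.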
Nonperturbative parton distributions and the proton spin problem
NASA Astrophysics Data System (ADS)
Simonov, Yu. A.
2016-05-01
The Lorentz-contracted form of the static wave functions is used to calculate the valence parton distributions for mesons and baryons, boosting the rest-frame solutions of the path-integral Hamiltonian. It is argued that nonperturbative parton densities are due to excited multigluon baryon states. A simple model is proposed for these states ensuring realistic behavior of valence and sea quark and gluon parton densities at Q² = 10 (GeV/c)². Applying the same model to the proton spin problem one obtains Σ_3 = 0.18 for the same Q².
Generalized parton distributions at CLAS
Silvia Pisano
2010-11-01
The understanding of the hadron structure in terms of QCD degrees of freedom is one of the main challenges of hadron physics. Indeed, despite the large amount of theoretical and experimental activity devoted to the subject in recent years, a full comprehension of mesons and baryons in terms of quark and gluon fields is still lacking. In order to carry out a more detailed investigation of the hadron structure, new quantities were introduced ten years ago, the Generalized Parton Distributions (GPDs), defined as bilocal, off-forward matrix elements of quark and gluon operators. From an experimental point of view, GPDs are accessible through two main processes: Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Electroproduction (DVMP). Depending on the polarization degrees of freedom acting in the process (like, for example, the simultaneous presence of a polarization in the beam and in the target, or the usage of a polarized beam with an unpolarized target), various combinations of GPDs can be accessed. In the case of DVCS, for example, the measurement of the Single Spin Asymmetry of such processes – realized by using a longitudinally polarized target – gives access to a combination of the GPDs H and H̃. The CEBAF Large Acceptance Spectrometer (CLAS), installed in Hall B at JLab, is particularly suited for the extraction of these quantities. Its large acceptance implies a good capability in the reconstruction of exclusive final states, allowing the investigation of the aforementioned processes in a wide range of kinematics. In this presentation, an overview of the main GPD measurements performed by CLAS will be given. In particular, the first DVCS measurements realized both with unpolarized and polarized targets, together with measurements of some exclusive meson electroproduction processes, will be described.
New parton distributions from large-x and low-Q^{2} data
Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.
2010-02-11
We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.
Generalized Parton Distributions and their Singularities
Anatoly Radyushkin
2011-04-01
A new approach to building models of generalized parton distributions (GPDs) is discussed that is based on the factorized DD (double distribution) Ansatz within the single-DD formalism. The latter was not used before, because reconstructing GPDs from the forward limit one should start in this case with a very singular function $f(\beta)/\beta$ rather than with the usual parton density $f(\beta)$. This results in a non-integrable singularity at $\beta=0$, exaggerated by the fact that the $f(\beta)$'s, on their own, have a singular $\beta^{-a}$ Regge behavior for small $\beta$. It is shown that the singularity is regulated within the GPD model of Szczepaniak et al., in which the Regge behavior is implanted through a subtracted dispersion relation for the hadron-parton scattering amplitude. It is demonstrated that using proper softening of the quark-hadron vertices in the regions of large parton virtualities results in model GPDs $H(x,\xi)$ that are finite and continuous at the "border point'' $x=\xi$. Using a simple input forward distribution, we illustrate the implementation of the new approach for explicit construction of model GPDs. As a further development, a more general method of regulating the $\beta=0$ singularities is proposed that is based on the separation of the initial single DD $f(\beta, \alpha)$ into the "plus'' part $[f(\beta,\alpha)]_{+}$ and the $D$-term. It is demonstrated that the "DD+D'' separation method allows one to (re)derive GPD sum rules that relate the difference between the forward distribution $f(x)=H(x,0)$ and the border function $H(x,x)$ with the $D$-term function $D(\alpha)$.
Strongly interacting parton matter equilibration
Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.
2012-07-15
We study the kinetic and chemical equilibration in 'infinite' parton matter within the Parton-Hadron-String Dynamics transport approach. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.
Off-forward parton distribution
Ji, X.
1998-12-01
Recent developments in studying off-forward parton distributions (OFPDs) are discussed. The author has written a topical review article on the subject, which will soon be published in Journal of Physics G; the interested audience can consult that article for details. This talk consists of three parts: definition of the new distributions, their physical significance, and experimental measurements.
Polarized 3 parton production in inclusive DIS at small x
NASA Astrophysics Data System (ADS)
Ayala, Alejandro; Hentschinski, Martin; Jalilian-Marian, Jamal; Tejeda-Yeomans, Maria Elena
2016-10-01
Azimuthal angular correlations between produced hadrons/jets in high energy collisions are a sensitive probe of the dynamics of QCD at small x. Here we derive the triple differential cross section for inclusive production of 3 polarized partons in DIS at small x. The target proton or nucleus is described using the Color Glass Condensate (CGC) formalism. The resulting expressions are used to study azimuthal angular correlations between produced partons in order to probe the gluon structure of the target hadron or nucleus. Our analytic expressions can also be used to calculate the real part of the Next to Leading Order (NLO) corrections to di-hadron production in DIS by integrating out one of the three final state partons.
Global parton distributions with nuclear and finite-Q^2 corrections
Owens, J. F.; Accardi, Alberto; Melnitchouk, Wally
2013-05-01
We present three new sets of parton distribution functions (PDFs) determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF, in particular, compared with previous fits. We describe the PDF error sets for each choice of the nuclear corrections, and provide a user interface for utilizing the distributions.
Medium Effects in Parton Distributions
William Detmold, Huey-Wen Lin
2011-12-01
A defining experiment of high-energy physics in the 1980s was that of the EMC collaboration, where it was first observed that parton distributions in nuclei are non-trivially related to those in the proton. This result implies that the presence of the nuclear medium plays an important role, and an understanding of this from QCD has been an important goal ever since. Here we investigate analogous, but technically simpler, effects in QCD and examine how the lowest moment of the pion parton distribution is modified by the presence of a Bose-condensed gas of pions or kaons.
Experimental studies of Generalized Parton Distributions
NASA Astrophysics Data System (ADS)
Niccolai, Silvia
2015-12-01
Generalized Parton Distributions (GPDs) are nowadays the object of an intense research effort, with the aim of understanding nucleon structure. They describe the correlations between the longitudinal momentum and the transverse spatial position of the partons inside the nucleon, and they can give access to the contribution of the orbital angular momentum of the quarks to the nucleon spin. Deeply Virtual Compton Scattering (DVCS), the electroproduction on the nucleon, at the quark level, of a real photon, is the process most directly interpretable in terms of GPDs of the nucleon. Depending on the target nucleon (proton or neutron) and on the DVCS observable extracted (cross-sections, target- or beam-spin asymmetries, etc.), different sensitivity to the various GPDs for each quark flavor can be exploited. This article focuses on recent promising results, obtained at Jefferson Lab, on cross-sections and asymmetries for DVCS, and their link to GPDs. These data open the way to a "tomographic" representation of the structure of the nucleon, allowing the extraction of transverse-space densities of the quarks at fixed longitudinal momentum. The extensive experimental program to measure GPDs at Jefferson Lab with the 12 GeV-upgraded electron accelerator and the complementary detectors that will be housed in three experimental Halls (A, B and C) will also be presented.
Summing threshold logs in a parton shower
NASA Astrophysics Data System (ADS)
Nagy, Zoltán; Soper, Davison E.
2016-10-01
When parton distributions are falling steeply as the momentum fractions of the partons increase, there are effects that occur at each order in α_s that combine to affect hard scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.
First moments of nucleon generalized parton distributions
Wang, P.; Thomas, A. W.
2010-06-01
We extrapolate the first moments of the generalized parton distributions using heavy-baryon chiral perturbation theory. The calculation is carried out at the one-loop level with finite-range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at the physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t = 0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD-inspired models.
Generalized parton distributions and Deeply Virtual Compton Scattering on proton at CLAS
R. De Masi
2007-12-01
Two measurements of target and beam spin asymmetries for the reaction ep → epγ were performed with CLAS at Jefferson Laboratory. Polarized 5.7 GeV electrons impinged on a longitudinally polarized ammonia target and on a liquid hydrogen target, respectively. These measurements are sensitive to Generalized Parton Distributions. Sizable sin φ azimuthal angular dependences were observed in both experiments, indicating the dominance of leading-twist terms and the possibility of extracting combinations of Generalized Parton Distributions on the nucleon.
Parton distributions with threshold resummation
NASA Astrophysics Data System (ADS)
Bonvini, Marco; Marzani, Simone; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.
2015-09-01
We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculations. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.
Generalized parton distributions in nuclei
Vadim Guzey
2009-12-01
Generalized parton distributions (GPDs) of nuclei describe the distribution of quarks and gluons in nuclei probed in hard exclusive reactions, such as deeply virtual Compton scattering (DVCS). Nuclear GPDs and nuclear DVCS allow us to study new aspects of many traditional nuclear effects (nuclear shadowing, the EMC effect, medium modifications of the bound nucleons) as well as to access novel nuclear effects. In my talk, I review recent theoretical progress in the area of nuclear GPDs.
Structure functions and parton distributions
Martin, A.D.; Stirling, W.J.; Roberts, R.G.
1995-07-01
The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues-the behavior of the distributions at small x and the flavor structure of the quark sea-are discussed in detail. A comparison with the new structure function data from HERA is made, and the outlook for the future is discussed.
Deeply Virtual Exclusive Processes and Generalized Parton Distributions
2011-06-01
The goal of the comprehensive program in Deeply Virtual Exclusive Scattering at Jefferson Laboratory is to create transverse spatial images of quarks and gluons as a function of their longitudinal momentum fraction in the proton, the neutron, and in nuclei. These functions are the Generalized Parton Distributions (GPDs) of the target nucleus. Cross section measurements of the Deeply Virtual Compton Scattering (DVCS) reaction ep → epγ in Hall A support the QCD factorization of the scattering amplitude for Q² ≥ 2 GeV². Quasi-free neutron-DVCS measurements on the deuteron indicate sensitivity to the quark angular momentum sum rule. Fully exclusive H(e, e′pγ) measurements have been made in a wide kinematic range in CLAS with polarized beam, and with both unpolarized and longitudinally polarized targets. Existing models are qualitatively consistent with the JLab data, but there is a clear need for less constrained models. Deeply virtual vector meson production is studied in CLAS. The 12 GeV upgrade will be essential for these channels. The ρ and ω channels offer the prospect of flavor sensitivity to the quark GPDs, while the φ-production channel is dominated by the gluon distribution.
Parton-parton elastic scattering and rapidity gaps at SSC and LHC energies
Duca, V.D.
1993-08-01
The theory of the perturbative pomeron, due to Lipatov and collaborators, is used to compute the probability of observing parton-parton elastic scattering and rapidity gaps between jets in hadron collisions at SSC and LHC energies.
Nuclear modification to parton distribution functions and parton saturation
QIU, J.-W.
2006-11-14
We introduce a generalized definition of parton distribution functions (PDFs) for a more consistent all-order treatment of power corrections. We present a new set of modified DGLAP evolution equations for nuclear PDFs, and show that the resummed α_s A^{1/3}/Q²-type leading nuclear-size-enhanced power corrections significantly slow down the growth of the gluon density at small x. We discuss the relation between the calculated power corrections and the saturation phenomena.
Applying target shadow models for SAR ATR
NASA Astrophysics Data System (ADS)
Papson, Scott; Narayanan, Ram M.
2007-04-01
Recent work has suggested that target shadows in synthetic aperture radar (SAR) images can be used effectively to aid in target classification. The method outlined in this paper has four steps: segmentation, representation, modeling, and selection. Segmentation is the process by which a smooth, background-free representation of the target's shadow is extracted from an image chip. A chain code technique is then used to represent the shadow boundary. Hidden Markov modeling is applied to sets of chain codes for multiple targets to create a suitable bank of target representations. Finally, an ensemble framework is proposed for classification. The proposed model selection process searches for an optimal ensemble of models based on various target model configurations. A five-target subset of the MSTAR database is used for testing. Since the shadow is a back-projection of the target profile, some aspect angles will contain more discriminatory information than others. Therefore, performance is investigated as a function of aspect angle. Additionally, the case of multiple target looks is considered. The capability of the shadow-only classifier to enhance more traditional classification techniques is examined.
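The chain-code representation of a shadow boundary mentioned above can be sketched directly. This is a minimal illustration of the standard 8-direction Freeman chain code on a toy boundary; the paper's HMM modeling and ensemble selection stages are not reproduced here.

```python
# Map of 8-neighbour pixel steps to Freeman chain-code symbols 0..7
# (0 = east, counting counter-clockwise).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Freeman 8-direction chain code for an ordered, 8-connected
    closed boundary given as a list of (x, y) pixel coordinates."""
    codes = []
    n = len(boundary)
    for i in range(n):
        (x0, y0), (x1, y1) = boundary[i], boundary[(i + 1) % n]
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# Toy example: a unit square traced counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))  # -> [0, 2, 4, 6]
```

Because the code is a sequence of discrete symbols, it feeds naturally into a discrete-observation hidden Markov model, which is presumably why the authors chose this boundary representation.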
Parton interpretation of the nucleon spin-dependent structure functions
Mankiewicz, L.; Ryzak, Z.
1991-02-01
We discuss the interpretation of the nucleon's polarized structure function g₂(x). If the target state is represented by its Fock decomposition on the light cone, the operator-product expansion allows us to demonstrate that moments of g₂(x) are related to overlap integrals between wave functions of opposite longitudinal polarizations. In the light-cone formalism such wave functions are related by the kinematical operator 𝒴, or light-cone parity. As a consequence, it can be shown that moments of g₂ give information about the same parton wave function, or probability amplitude to find a certain parton configuration in the target, which defines g₁(x) or F₂(x). Specific formulas are given, and possible applications to the phenomenology of the nucleon structure in QCD are discussed.
Nuclear modifications of Parton Distribution Functions
NASA Astrophysics Data System (ADS)
Adeluyi, Adeola Adeleke
the so-called shadowing region. We also investigate the effects of nuclear modifications on observed quantities in ultrarelativistic nucleus-nucleus collisions. Specifically, we consider deuteron-gold collisions and observables which are directly impacted by modifications, such as pseudorapidity asymmetry and nuclear modification factors. A good description of the shadowing region is afforded by Gribov theory. Gribov related the shadowing correction to the differential diffractive hadron-nucleon cross section. We generalize Gribov theory to include both the real part of the diffractive scattering amplitude and the higher-order multiple scattering necessary for heavy nuclei. The diffractive dissociation inputs are taken from experiments. We calculate observables in deuteron-gold collisions. Utilizing the factorization theorem, we use the existing parameterizations of nuclear PDFs and fragmentation functions in a pQCD-improved parton model to calculate nuclear modification factors and pseudorapidity asymmetries. The nuclear modification factor is essentially the ratio of the deuteron-gold cross section to that of the proton-proton cross section scaled by the number of binary collisions. The pseudorapidity asymmetry is the ratio of the cross section in the negative rapidity region relative to that in the equivalent positive rapidity region. Both quantities are sensitive to the effects of nuclear modifications on PDFs. Results are compared to experimental data from the BRAHMS and STAR collaborations.
Generalized parton distributions: Status and perspectives
Weiss, Christian
2009-01-01
We summarize recent developments in understanding the concept of generalized parton distributions (GPDs), its relation to nucleon structure, and its application to high-Q^2 electroproduction processes. Following a brief review of QCD factorization and transverse nucleon structure, we discuss (a) new theoretical methods for the analysis of deeply-virtual Compton scattering (t-channel-based GPD parametrizations, dispersion relations); (b) the phenomenology of hard exclusive meson production (experimental tests of dominance of small-size configurations, model-independent comparative studies); (c) the role of GPDs in small-x physics and pp scattering (QCD dipole model, central exclusive diffraction). We emphasize the usefulness of the transverse spatial (or impact parameter) representation for both understanding the reaction mechanism in hard exclusive processes and visualizing the physical content of the GPDs.
Polarization of partons in the proton
Kobayakawa, K.; Morii, T.; Tanaka, S.; Yamanishi, T.
1992-10-01
The spin-dependent distribution functions of quarks and gluons in a proton are studied so as to explain the European Muon Collaboration g₁^p(x) data by introducing a new model, in which characteristics of both the static quark model and the quark-parton model are taken into account. The x dependence of g₁^p(x) is reproduced well. It is shown that polarized gluons, through the anomaly, play a significant role and the resultant sum of quark spin is 0.375. Furthermore, g₁^n(x) as well as g₁^p(x) is predicted for future experiments.
A Review of Target Mass Corrections
I. Schienbein; V. Radescu; G. Zeller; M. E. Christy; C. E. Keppel; K. S. McFarland; W. Melnitchouk; F. I. Olness; M. H. Reno; F. Steffens; J.-Y. Yu
2007-09-06
With recent advances in the precision of inclusive lepton-nuclear scattering experiments, it has become apparent that comparable improvements are needed in the accuracy of the theoretical analysis tools. In particular, when extracting parton distribution functions in the large-x region, it is crucial to correct the data for effects associated with the nonzero mass of the target. We present here a comprehensive review of these target mass corrections (TMC) to structure functions data, summarizing the relevant formulas for TMCs in electromagnetic and weak processes. We include a full analysis of both hadronic and partonic masses, and trace how these effects appear in the operator product expansion and the factorized parton model formalism, as well as their limitations when applied to data in the x -> 1 limit. We evaluate the numerical effects of TMCs on various structure functions, and compare fits to data with and without these corrections.
The parton distribution function library
Plothow-Besch, H.
1995-07-01
This article describes an integrated package of Parton Density Functions called PDFLIB, which has been added to the CERN Program Library Pool W999 and is labelled as W5051. In this package all the different sets of parton density functions of the Nucleon, Pion and the Photon which are available today have been put together. All these sets have been combined in a consistent way such that they all have similar calling sequences and no external data files have to be read in anymore. A default set has been prepared, although those preferring their own set or wanting to test a new one may do so within the package. The package also offers a program to calculate the strong coupling constant α_s to first or second order. The correct Λ_QCD associated with the selected set of structure functions and the number of allowed flavours with respect to the given Q² are automatically used in the calculation. The selection of sets, the program parameters, as well as the possibilities to modify the defaults and to control errors occurring during execution are described.
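The α_s facility mentioned above can be illustrated with the underlying leading-order formula. This is only a sketch of a first-order running coupling, not PDFLIB's actual Fortran interface; the function name, the default Λ_QCD value and the flavour number are illustrative choices.

```python
import math

def alpha_s_one_loop(Q2, lambda_qcd=0.2, nf=5):
    """One-loop (first-order) running strong coupling:
        alpha_s(Q^2) = 4*pi / (beta0 * ln(Q^2 / Lambda^2)),
    with beta0 = 11 - 2*nf/3.  Q2 and lambda_qcd in GeV^2 and GeV;
    the defaults here are illustrative, not PDFLIB's stored values.
    """
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(Q2 / lambda_qcd ** 2))

# The coupling decreases logarithmically with the scale (asymptotic freedom)
print(alpha_s_one_loop(91.2 ** 2))  # at the Z-mass scale, roughly 0.13 at this order
```

In the library itself, Λ_QCD and the flavour number are tied to the selected PDF set; here they are free parameters purely for illustration.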
Jet correlations from unintegrated parton distributions
Hautmann, F.; Jung, H.
2008-10-13
Transverse-momentum dependent parton distributions can be introduced gauge-invariantly in QCD from high-energy factorization. We discuss Monte Carlo applications of these distributions to parton showers and jet physics, with a view to the implications for the Monte Carlo description of complex hadronic final states with multiple hard scales at the LHC.
Parton shower evolution in a 3D hydrodynamical medium
Renk, Thorsten
2008-09-15
We present a Monte Carlo simulation of the perturbative quantum chromodynamics shower developing after a hard process embedded in a heavy-ion collision. The main assumption is that the cascade of branching partons traverses a medium that (consistent with standard radiative energy loss pictures) is characterized by a local transport coefficient q̂ that measures the virtuality per unit length transferred to a parton propagating in this medium. This increase in parton virtuality alters the development of the shower and in essence leads to extra induced radiation and hence a softening of the momentum distribution in the shower. After hadronization, this leads to the concept of a medium-modified fragmentation function. On the level of observables, this is manifest as the suppression of high-transverse-momentum (p_T) hadron spectra. We simulate the soft medium created in heavy-ion collisions by a 3D hydrodynamical evolution and average the medium-modified fragmentation function over this evolution to compare with data on single inclusive hadron suppression and extract the q̂ that characterizes the medium. Finally, we discuss possible uncertainties of the model formulation and argue that the data at soft momenta show evidence of qualitatively different physics that presumably cannot be described by a medium-modified parton shower.
Illuminating the 1/x Moment of Parton Distribution Functions
Brodsky, Stanley J.; Llanes-Estrada, Felipe J.; Szczepaniak, Adam P.; /Indiana U.
2007-10-15
The Weisberger relation, an exact statement of the parton model, elegantly relates a high-energy physics observable, the 1/x moment of parton distribution functions, to a nonperturbative low-energy observable: the dependence of the nucleon mass on the value of the quark mass or its corresponding quark condensate. We show that contemporary fits to nucleon structure functions fail to determine this 1/x moment; however, deeply virtual Compton scattering can be described in terms of a novel F_{1/x}(t) form factor which illuminates this physics. An analysis of exclusive photon-induced processes in terms of the parton-nucleon scattering amplitude with Regge behavior reveals a failure of the high-Q² factorization of exclusive processes at low t in terms of the Generalized Parton Distribution Functions which has been widely believed to hold in the past. We emphasize the need for more data on the DVCS process at large t at future or upgraded facilities.
Parton-Hadron-String Dynamics at relativistic collider energies
NASA Astrophysics Data System (ADS)
Bratkovskaya, E. L.; Cassing, W.; Konchakovski, V. P.; Linnyk, O.
2011-04-01
The novel Parton-Hadron-String Dynamics (PHSD) transport approach is applied to nucleus-nucleus collisions at RHIC energies with respect to differential hadronic spectra in comparison to available data. The PHSD approach is based on a dynamical quasiparticle model for partons (DQPM) matched to reproduce recent lattice-QCD results from the Wuppertal-Budapest group in thermodynamic equilibrium. The transition from partonic to hadronic degrees of freedom is described by covariant transition rates for the fusion of quark-antiquark pairs or three quarks (antiquarks), respectively, obeying flavor current conservation, color neutrality as well as energy-momentum conservation. Our dynamical studies for heavy-ion collisions at relativistic collider energies are compared to earlier results from the Hadron-String Dynamics (HSD) approach - incorporating no explicit dynamical partonic phase - as well as to experimental data from the STAR, PHENIX, BRAHMS and PHOBOS Collaborations for Au + Au collisions at the top RHIC energy of √s = 200 GeV. We find a reasonable reproduction of hadron rapidity distributions and transverse mass spectra and also a fair description of the elliptic flow of charged hadrons as a function of the centrality of the reaction and the transverse momentum p_T. Furthermore, an approximate quark-number scaling of the elliptic flow v₂ of hadrons is observed in the PHSD results, too.
Parton and valon distributions in the nucleon
Hwa, R.C.; Sajjad Zahir, M.
1981-06-01
Structure functions of the nucleon are analyzed in the valon model, in which a nucleon is assumed to be a bound state of three valence quark clusters (valons). At high Q² the structure of the valons is described by leading-order results in perturbative quantum chromodynamics. From the experimental data on deep-inelastic scattering off protons and neutrons, the flavor-dependent valon distributions in the nucleon are determined. Predictions for the parton distributions are then made for high Q² without guesses concerning the quark and gluon distributions at low Q². The sea-quark and gluon distributions are found to have a sharp peak at very small x. A convenient parametrization is provided which interpolates between different numbers of flavors.
Parton transverse momentum and orbital angular momentum distributions
NASA Astrophysics Data System (ADS)
Rajan, Abha; Courtoy, Aurore; Engelhardt, Michael; Liuti, Simonetta
2016-08-01
The quark orbital angular momentum component of proton spin, L_q, can be defined in QCD as the integral of a Wigner phase space distribution weighting the cross product of the quark's transverse position and momentum. It can also be independently defined from the operator product expansion for the off-forward Compton amplitude in terms of a twist-three generalized parton distribution. We provide an explicit link between the two definitions, connecting them through their dependence on partonic intrinsic transverse momentum. Connecting the definitions provides the key for correlating direct experimental determinations of L_q and evaluations through lattice QCD calculations. The direct observation of quark orbital angular momentum does not require transverse spin polarization but can occur using longitudinally polarized targets.
Hard photon production and matrix-element parton-shower merging
Hoeche, Stefan; Schumann, Steffen; Siegert, Frank
2010-02-01
We present a Monte Carlo approach to prompt-photon production, where photons and QCD partons are treated democratically. The photon fragmentation function is modeled by an interleaved QCD+QED parton shower. This known technique is improved by including higher-order real-emission matrix elements. To this end, we extend a recently proposed algorithm for merging matrix elements and truncated parton showers. We exemplify the quality of the Monte Carlo predictions by comparing them to measurements of the photon fragmentation function at LEP and to measurements of prompt photon and diphoton production from the Tevatron experiments.
Evolution of minimum-bias parton fragmentation in nuclear collisions
Trainor, Thomas A.
2009-10-15
Minimum-bias fragment distributions (FDs) are calculated by folding a power-law parton energy spectrum with parametrized fragmentation functions (FFs) derived from e⁺e⁻ and p-p collisions. Substantial differences between measured e⁺e⁻ and p-p FFs suggest that FF 'universality' may not be a valid assumption. The common parton spectrum is constrained by comparison with a p-p p_t spectrum hard component. Changes in FFs due to parton 'energy loss' or 'medium modification' are modeled by altering FF parametrizations consistent with rescaling QCD splitting functions. In-vacuum and in-medium FDs are compared with spectrum hard components from 200-GeV Au-Au collisions for several centralities. The reference for all nuclear collisions is the FD derived from in-vacuum e⁺e⁻ FFs. The hard component for p-p and peripheral Au-Au collisions is found to be strongly suppressed for smaller fragment momenta, consistent with the FD derived from in-vacuum p-p FFs. At a particular centrality the Au-Au hard component transitions to enhancement at smaller momenta and suppression at larger momenta, consistent with FDs derived from in-medium e⁺e⁻ FFs. Fragmentation systematics suggest that QCD color connections change dramatically in more-central A-A collisions. Observed parton and hadron spectrum systematics are inconsistent with saturation-scale arguments used to support assumptions of parton thermalization.
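The folding of a power-law parton spectrum with a fragmentation function described above can be sketched numerically. All spectral shapes, cutoffs and normalizations below are illustrative stand-ins, not the paper's fitted parametrizations.

```python
import numpy as np

def fragment_distribution(p_hadron, n=6.0, E_min=5.0, E_max=100.0):
    """Toy minimum-bias fragment distribution obtained by folding a
    power-law parton spectrum f(E) ~ E^-n with a schematic
    fragmentation function D(z) ~ (1 - z)^2 / z, where z = p_hadron/E:
        FD(p) = int dE f(E) D(p/E) / E.
    Exponent, cutoffs and FF shape are illustrative choices only.
    """
    lo = max(E_min, p_hadron)          # parton must carry at least p_hadron
    E = np.linspace(lo, E_max, 2000)
    dE = E[1] - E[0]
    z = p_hadron / E
    D = np.where(z < 1.0, (1.0 - z) ** 2 / np.maximum(z, 1e-8), 0.0)
    return float(np.sum(E ** (-n) * D / E) * dE)

# The folded hadron spectrum falls steeply with fragment momentum
fd = [fragment_distribution(p) for p in (2.0, 5.0, 10.0)]
```

Medium modification could be mimicked in this sketch by rescaling the FF toward smaller z, which is qualitatively the effect the paper models by altering the FF parametrizations.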
Disentangling correlations in multiple parton interactions
Calucci, G.; Treleani, D.
2011-01-01
Multiple Parton Interactions are the tool to obtain information on the correlations between partons in the hadron structure. Partons may be correlated in all degrees of freedom, and all the different correlation terms contribute to the cross section. The contributions due to the different parton flavors can be isolated, at least to some extent, by properly selecting the final state. In the case of high-energy proton-proton collisions, the effects of correlations in the transverse coordinates and in fractional momenta are, on the contrary, unavoidably mixed in the final observables. The standard way to quantify the strength of double parton interactions is by the value of the effective cross section, and a small value of the effective cross section may originate both from a relatively short transverse distance between the pairs of partons undergoing the double interaction and from a large dispersion of the multiplicity distribution of the multiparton distributions. The aim of the present paper is to show how the effects of longitudinal and transverse correlations may be disentangled by taking into account the additional information provided by double parton interactions in high-energy proton-deuteron collisions.
Working Group I: Parton distributions: Summary report for the HERA LHC Workshop Proceedings
Dittmar, M.; Forte, S.; Glazov, A.; Moch, S.; Alekhin, S.; Altarelli, G.; Andersen, Jeppe R.; Ball, R.D.; Blumlein, J.; Bottcher, H.; Carli, T.; Ciafaloni, M.; Colferai, D.; Cooper-Sarkar, A.; Corcella, G.; Del Debbio, L.; Dissertori, G.; Feltesse, J.; Guffanti, A.; Gwenlan, C.; Huston, J.; /Zurich, ETH /DESY, Zeuthen /Serpukhov, IHEP /CERN /Rome III U. /INFN, Rome3 /Cambridge U. /Edinburgh U. /Florence U. /INFN, Florence /Oxford U. /DSM, DAPNIA, Saclay /Michigan State U. /Uppsala U. /Barcelona U., ECM /Podgorica U. /Turin U. /INFN, Turin /Harish-Chandra Res. Inst. /Fermilab /Hamburg U., Inst. Theor. Phys. II
2005-11-01
We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of F_L), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher-order (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large- and small-x resummation, by providing estimates of the impact of large-x resummation on parton fits, and a comparison of different approaches to small-x resummation, for which we also discuss numerical techniques.
Experimental overview of Generalized Parton Distribution results from HERMES
Zihlmann, B.
2009-08-04
Over the course of more than a decade the HERMES experiment has accumulated a wealth of data with electron and positron beams on various gaseous targets from Hydrogen up to Xenon. In addition, the beams and targets can be polarized. This data set is viewed in the context of Generalized Parton Distributions, a theoretical formalism with an explicit three dimensional view of the structure of the nucleon. It provides a link between experimental observables and the total angular momentum of the quarks in the nucleon.
First JAM results on the determination of polarized parton distributions
Accardi, Alberto; Jimenez-Delgado, Pedro; Melnitchouk, Wally
2014-01-01
The Jefferson Lab Angular Momentum (JAM) Collaboration is a new initiative to study the angular momentum dependent structure of the nucleon. First results on the determination of spin-dependent parton distribution functions at intermediate and large x from world data on polarized deep-inelastic scattering are presented. Different aspects of global QCD analysis are discussed, including the effects of the nuclear structure of deuterium and ³He targets, target mass corrections, and higher-twist contributions to the g₁ and g₂ structure functions.
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes, enable estimation of the relevant parameters, and allow reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
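Droplet cell encapsulation of this kind is commonly modeled as Poisson loading. The following sketch does not reproduce the paper's specific statistical model; it shows the textbook estimate that the single-cell yield λe^(-λ) peaks at a mean loading of one cell per droplet.

```python
import math

def poisson_pmf(k, lam):
    """Probability of finding exactly k cells in one droplet,
    assuming cell loading follows Poisson statistics with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def single_cell_fraction(lam):
    """Expected fraction of droplets encapsulating exactly one cell:
    lam * exp(-lam), which is maximized at lam = 1."""
    return poisson_pmf(1, lam)

print(single_cell_fraction(1.0))  # ~0.368: at best ~37% of droplets hold one cell
```

This ceiling on the single-cell fraction under purely random loading is what motivates statistical modeling and parameter control in such encapsulation systems.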
Modeling target erosion during reactive sputtering
NASA Astrophysics Data System (ADS)
Strijckmans, K.; Depla, D.
2015-03-01
The influence of the reactive sputter conditions on the racetrack and the sputter profile of an Al/O2 DC reactive sputter system is studied by modeling. The role of redeposition, i.e. the deposition of sputtered material back onto the target, is taken into account. The model used, RSD2013, is capable of simulating the effect of redeposition on the target condition in a spatially resolved way. Comparison between including and excluding redeposition in the RSD2013 model shows that the in-depth oxidation profile of the target differs. Modeling shows that it is important to distinguish between the formed racetrack, i.e. the erosion depth profile, and the sputter profile. The latter defines the distribution of the sputtered atoms in the vacuum chamber. As the target condition determines the sputter yield, it determines both the racetrack and the sputter profile of the planar circular target. Both the shape of the racetrack and the sputter profile change as a function of the redeposition fraction as well as of the oxygen flow. Clear asymmetries and narrowing are observed for the racetrack shape. Similar effects are noticed for the sputter profile, but to a different extent. Based on this study, the often-heard misconception that the racetrack shape defines the distribution of the sputtered atoms during reactive sputtering is proven wrong.
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
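The Bessel-weighting idea can be illustrated on a toy Gaussian TMD, for which the Fourier-Bessel (b-space) transform is known in closed form, exp(-b²⟨p_T²⟩/4). The widths and integration grids below are illustrative assumptions; this checks the transform itself, not the paper's Monte Carlo procedure.

```python
import numpy as np

def j0(x):
    """Bessel function J0 via its integral representation,
    J0(x) = (1/pi) * int_0^pi cos(x sin t) dt, by the midpoint rule
    (adequate accuracy for the moderate arguments used here)."""
    t = (np.arange(2000) + 0.5) * np.pi / 2000
    return float(np.mean(np.cos(x * np.sin(t))))

def bessel_weighted(b, w2=0.25):
    """b-space transform of a unit-normalized Gaussian TMD
    f(pT) = exp(-pT^2 / w2) / (pi * w2):
        f(b) = 2*pi * int_0^inf dpT pT J0(b*pT) f(pT) = exp(-b^2 * w2 / 4).
    The Gaussian width w2 = <pT^2> (in GeV^2) is an illustrative model choice.
    """
    n, pT_max = 1500, 4.0
    pT = (np.arange(n) + 0.5) * (pT_max / n)   # midpoint grid
    dpT = pT_max / n
    fpT = np.exp(-pT ** 2 / w2) / (np.pi * w2)
    jvals = np.array([j0(b * p) for p in pT])
    return float(np.sum(2.0 * np.pi * pT * jvals * fpT) * dpT)
```

At b = 0 the weight J0 is unity and the transform recovers the normalization; at finite b the numerical result matches the analytic Gaussian in b, which is the kind of model-independent deconvolution the Bessel weighting exploits.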
Pre-equilibrium parton dynamics: Proceedings
Wang, Xin-Nian
1993-12-31
This report contains papers on the following topics: parton production and evolution; QCD transport theory; interference in the medium; QCD and phase transition; and future heavy ion experiments. These papers have been indexed separately elsewhere in the database.
Delineating parton distributions and the strong coupling
Jimenez-Delgado, P.; Reya, E.
2014-04-29
In this study, global fits for precision determinations of parton distributions, together with the highly correlated strong coupling α_{s}, are presented up to next-to-next-to-leading order (NNLO) of QCD, utilizing most world data (charm and jet production data are used where theoretically possible), except Tevatron gauge boson production data and LHC data, which are left for genuine predictions. This is done within the 'dynamical' (valence-like input at Q_{0}^{2} = 0.8 GeV^{2}) and 'standard' (input at Q_{0}^{2} = 2 GeV^{2}) approaches. The stability and reliability of the results are ensured by including nonperturbative higher-twist terms, nuclear corrections as well as target mass corrections, and by applying various (Q^{2}, W^{2}) cuts on the available data. In addition, the Q_{0}^{2} dependence of the results is studied in detail. Predictions are given, in particular for the LHC, for gauge and Higgs boson as well as top-quark pair production. At NNLO the dynamical approach results in α_{s}(M_{Z}^{2}) = 0.1136 ± 0.0004, whereas the somewhat less constrained standard fit gives α_{s}(M_{Z}^{2}) = 0.1162 ± 0.0006.
The midpoint between dipole and parton showers
Höche, Stefan; Prestel, Stefan
2015-09-28
We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, we judiciously choose the evolution variable as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. We provide two independent, publicly available implementations, one for each of the event generators PYTHIA and SHERPA.
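For background on how shower algorithms of this kind generate emission scales, the following sketch implements the generic Sudakov veto algorithm (a textbook baseline, not the specific weighted algorithm of this paper); the kernel a/t, overestimate b/t, and cutoff values are illustrative assumptions:

```python
import numpy as np

def next_emission(t_start, t_cut, a=0.5, b=1.0, rng=None):
    """Sudakov veto algorithm: sample the next emission scale for a
    splitting kernel f(t) = a/t, using the overestimate g(t) = b/t (b >= a).
    Returns the emission scale, or None if evolution falls below t_cut."""
    rng = rng or np.random.default_rng()
    t = t_start
    while True:
        # Solve exp(-b * ln(t_prev / t)) = r for the trial scale t
        t = t * rng.uniform() ** (1.0 / b)
        if t < t_cut:
            return None                # no emission above the cutoff
        if rng.uniform() < a / b:      # accept/veto with probability f/g
            return t

# The no-emission probability between t_start and t_cut is the Sudakov
# factor (t_cut/t_start)**a; here 0.01**0.5 = 0.1, checked by Monte Carlo.
rng = np.random.default_rng(0)
events = [next_emission(1.0, 0.01, rng=rng) for _ in range(20000)]
print(sum(e is None for e in events) / len(events))  # ≈ 0.1
```

The weighting scheme described in the abstract generalizes this accept/reject step so that trial emissions can carry (possibly negative) weights instead of being simply vetoed.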
Transverse-momentum-dependent parton distributions (TMDs)
NASA Astrophysics Data System (ADS)
Bacchetta, Alessandro
2011-10-01
Transverse-momentum-dependent parton distributions (TMDs) provide three-dimensional images of the partonic structure of the nucleon in momentum space. Impressive progress has been made in understanding TMDs, from both the theoretical and the experimental point of view. This brief overview of TMDs is divided into two parts: in the first, an essential list of achievements is presented; in the second, a selection of open questions is discussed.
APACIC++ 2.0. A PArton Cascade In C++
NASA Astrophysics Data System (ADS)
Krauss, F.; Schälicke, A.; Soff, G.
2006-06-01
The program simulates e+e- annihilation experiments as well as hadron-hadron collisions. The generated events are suitable for direct comparison with experiment. This is achieved by dividing the simulation into well-separated steps. First, the signal process is selected by employing multi-particle matrix elements at tree level. Then the strongly interacting particles experience additional radiation of soft or collinear partons, described by means of the parton shower. Finally, the partons are translated into observable hadrons using phenomenological models. The module APACIC++ concentrates on the parton-shower evolution of jets, both in the initial and in the final state of the signal process. Suitable interfaces to other modules of the event generator SHERPA are provided. Reasons for the new version: this new version is able to perform not only final-state but also initial-state shower evolutions. Thus the program now also gives a realistic description of proton-proton and proton-antiproton collisions. It is particularly designed to simulate events at the Tevatron or the LHC. Summary of revisions: the package has been extended by a number of classes for the description of the initial-state shower. In order to give optimal support to these new routines, all existing classes of the final-state shower have been revised, but the basic structure and concept of the program have been maintained. In addition, a new dicing strategy has been introduced in the time-like evolution routine, which substantially improves the performance of the final-state shower. Additional comments: the package APACIC++ is used as the parton-shower module of the general-purpose event generator SHERPA, where it takes full advantage of its capabilities to merge multi-jet matrix elements and parton-shower evolution. Running time: the example programs take a matter of seconds to run.
Double Parton Fragmentation Function and its Evolution in Quarkonium Production
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo
2014-01-01
We summarize the results of a recent study on a new perturbative QCD factorization formalism for the production of heavy quarkonia of large transverse momentum p_T at collider energies. This new factorization formalism includes both the leading power (LP) and next-to-leading power (NLP) contributions to the cross section in the m_Q^2/p_T^2 expansion for heavy-quark mass m_Q. The NLP contribution involves the so-called double parton fragmentation functions, whose evolution equations have been derived. We estimate these fragmentation functions in the non-relativistic QCD formalism and find that their contribution reproduces the bulk of the large enhancement found in explicit NLO calculations in the color-singlet model. Heavy quarkonia produced from NLP channels prefer longitudinal polarization, in contrast to production via the single parton fragmentation function. This might shed some light on the heavy-quarkonium polarization puzzle.
HEAVY QUARKS AT RHIC FROM PARTON TRANSPORT THEORY.
MOLNAR, D.
2006-05-15
There are several indications that an opaque partonic medium is created in energetic Au+Au collisions (√s_NN up to 200 GeV) at the Relativistic Heavy Ion Collider (RHIC). At the extreme densities reached, ~10-100 times normal nuclear density, even heavy-flavor hadrons are affected significantly. Heavy-quark observables are presented from the parton transport model MPC, focusing on the nuclear suppression pattern, azimuthal anisotropy ("elliptic flow"), and azimuthal correlations. Comparison with Au+Au data at the top RHIC energy √s_NN = 200 GeV indicates significant heavy-quark rescattering, corresponding to opacities roughly five times higher than estimates based on leading-order perturbative QCD. We propose measurements of charm-anticharm, e.g., D-meson, azimuthal correlations as a sensitive, independent probe to corroborate these findings.
Uncertainties in determining parton distributions at large x
Alberto Accardi, Wolodymyr Melnitchouk, Jeff Owens, Michael Christy, Cynthia Keppel, Lingyan Zhu, Jorge Morfin
2011-07-01
We critically examine uncertainties in parton distribution functions (PDFs) at large x arising from nuclear effects in deuterium F_2 structure function data. Within a global PDF analysis, we assess the impact on the PDFs of uncertainties in the deuteron wave function at short distances and nucleon off-shell effects, the use of relativistic kinematics, as well as the use of a less restrictive parametrization of the d/u ratio. We find that in particular the d-quark and gluon PDFs vary significantly with the choice of nuclear model. We highlight the impact of these uncertainties on the determination of the neutron structure function, and on W boson production and parton luminosity at the Tevatron and the LHC. Finally, we discuss prospects for new measurements sensitive to the d-quark and gluon distributions but insensitive to nuclear corrections.
Target & Propagation Models for the FINDER Radar
NASA Technical Reports Server (NTRS)
Cable, Vaughn; Lux, James; Haque, Salmon
2013-01-01
Finding persons still alive in piles of rubble following an earthquake, a severe storm, or other disaster is a difficult problem. JPL is currently developing a victim-detection radar called FINDER (Finding Individuals for Disaster and Emergency Response). This paper is directed toward the development of the propagation and target models needed for simulation and testing of such a system. These models are both physical (real rubble piles) and numerical. Early results from the numerical modeling phase show spatial and temporal spreading characteristics when signals are passed through a randomly mixed rubble pile.
Transverse nucleon structure and diagnostics of hard parton-parton processes at LHC
L. Frankfurt, M. Strikman, C. Weiss
2011-03-01
We propose a new method to determine at what transverse momenta particle production in high-energy pp collisions is governed by hard parton-parton processes. Using information on the transverse spatial distribution of partons obtained from hard exclusive processes in ep/γp scattering, we evaluate the impact parameter distribution of pp collisions with a hard parton-parton process as a function of the p_T of the produced parton (jet). We find that the average pp impact parameters in such events depend very weakly on p_T from 2 GeV up to a few hundred GeV, while they are much smaller than those in minimum-bias inelastic collisions. The impact parameters in turn govern the observable transverse multiplicity in such events (in the direction perpendicular to the trigger particle or jet). Measuring the transverse multiplicity as a function of p_T thus provides an effective tool for determining the minimum p_T for which a given trigger particle originates from a hard parton-parton process.
First JAM results on the determination of polarized parton distributions
Jimenez-Delgado, Pedro
2013-04-01
The Jefferson Lab Angular Momentum (JAM) collaboration is a new initiative aimed at the study of the angular-momentum-dependent structure of the nucleon. First results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented and compared with previous determinations by other groups. Different aspects of global QCD analysis are discussed, including effects due to nuclear structure, higher twist, and target-mass corrections, as well as the impact of different data selections.
Termites as targets and models for biotechnology.
Scharf, Michael E
2015-01-01
Termites have many unique evolutionary adaptations associated with their eusocial lifestyles. Recent omics research has created a wealth of new information in numerous areas of termite biology (e.g., caste polyphenism, lignocellulose digestion, and microbial symbiosis) with wide-ranging applications in diverse biotechnological niches. Termite biotechnology falls into two categories: (a) termite-targeted biotechnology for pest management purposes, and (b) termite-modeled biotechnology for use in various industrial applications. The first category includes several candidate termiticidal modes of action such as RNA interference, digestive inhibition, pathogen enhancement, antimicrobials, endocrine disruption, and primer pheromone mimicry. In the second category, termite digestomes are deep resources for host and symbiont lignocellulases and other enzymes with applications in a variety of biomass, industrial, and processing applications. Moving forward, one of the most important approaches for accelerating advances in both termite-targeted and termite-modeled biotechnology will be to consider host and symbiont together as a single functional unit. PMID:25341102
Chiral dynamics and partonic structure at large transverse distances
Strikman, M.; Weiss, C.
2009-12-30
In this paper, we study large-distance contributions to the nucleon's parton densities in the transverse coordinate (impact parameter) representation based on generalized parton distributions (GPDs). Chiral dynamics generates a distinct component of the partonic structure, located at momentum fractions x ≲ M_{π}/M_{N} and transverse distances b ~ 1/M_{π}. We calculate this component using phenomenological pion exchange with a physical lower limit in b (the transverse "core" radius estimated from the nucleon's axial form factor, R_{core} = 0.55 fm) and demonstrate its universal character. This formulation preserves the basic picture of the "pion cloud" model of the nucleon's sea quark distributions, while restricting its application to the region actually governed by chiral dynamics. It is found that (a) the large-distance component accounts for only ~1/3 of the measured antiquark flavor asymmetry d̄ − ū at x ~ 0.1; (b) the strange sea quarks s and s̄ are significantly more localized than the light antiquark sea; (c) the nucleon's singlet quark size for x < 0.1 is larger than its gluonic size, ⟨b²⟩_{q+q̄} > ⟨b²⟩_{g}, as suggested by the t-slopes of deeply virtual Compton scattering and exclusive J/ψ production measured at HERA and FNAL. We show that our approach reproduces the general N_{c} scaling of parton densities in QCD, thanks to the degeneracy of N and Δ intermediate states in the large-N_{c} limit. Finally, we also comment on the role of pionic configurations at large longitudinal distances and the limits of their applicability at small x.
Target space supersymmetric sigma model techniques
de Boer, Jan; Skenderis, Kostas
1996-07-01
We briefly review the covariant formulation of the Green-Schwarz superstring by Berkovits, and describe how a detailed tree-level and one-loop analysis of this model leads, for the first time, to a derivation of the low-energy effective action of the heterotic superstring while keeping target-space supersymmetry manifest. The resulting low-energy theory is old-minimal supergravity coupled to a tensor multiplet. The dilaton is part of the compensator multiplet.
New limits on intrinsic charm in the nucleon from global analysis of parton distributions.
Jimenez-Delgado, P; Hobbs, T J; Londergan, J T; Melnitchouk, W
2015-02-27
We present a new global QCD analysis of parton distribution functions, allowing for possible intrinsic charm (IC) contributions in the nucleon inspired by light-front models. The analysis makes use of the full range of available high-energy scattering data for Q^{2}≳1 GeV^{2} and W^{2}≳3.5 GeV^{2}, including fixed-target proton and deuteron cross sections at lower energies that were excluded in previous global analyses. The expanded data set places more stringent constraints on the momentum carried by IC, with ⟨x⟩_{IC} at most 0.5% (corresponding to an IC normalization of ∼1%) at the 4σ level for Δχ^{2}=1. We also critically assess the impact of older EMC measurements of F_{2}^{c} at large x, which favor a nonzero IC, but with very large χ^{2} values. PMID:25768757
Parton distributions from SMC and SLAC data
Ramsey, G. P.; Goshtasbpour, M.
1996-01-04
We have extracted spin-weighted parton distributions in a proton from recent data at CERN and SLAC. The valence, sea-quark, and antiquark spin-weighted distributions are determined separately. The data are all consistent with a small to moderate polarized gluon distribution, so that the anomaly term is not significant in the determination of the constituent contributions to the spin of the proton. We have analyzed the consistency of the results obtained from various sets of data and the Bjorken sum rule. Although all data are consistent with the sum rule, the polarized distributions from different experiments vary, even with higher-order QCD corrections taken into account. The results split into two models: one set implying a large polarized strange sea, which violates the positivity bound, and the other set yielding a smaller polarized strange sea. Only further experiments which extract information about the polarized sea will reconcile these differences. We suggest specific experiments which can be performed to determine the size of the polarized sea and gluons.
Recent progress on nuclear parton distribution functions
NASA Astrophysics Data System (ADS)
Hirai, M.; Kumano, S.; Saito, K.
2011-09-01
We report the current status of global analyses of nuclear parton distribution functions (NPDFs). The optimum NPDFs are determined by analyzing high-energy nuclear reaction data. Due to limited experimental measurements, antiquark modifications have large uncertainties at x > 0.2, and gluon modifications cannot be determined. A nuclear modification difference between u- and d-quark distributions could be an origin of the long-standing NuTeV sin^2θ_W anomaly. There is also an issue of nuclear modification differences between the structure functions of charged-lepton and neutrino reactions. Next, nuclear clustering effects are discussed in the structure functions F_2^A as a possible explanation for an anomalous result in the 9Be nucleus at the Thomas Jefferson National Accelerator Facility (JLab). Last, tensor-polarized quark and antiquark distribution functions are extracted from HERMES data on the polarized structure function b_1 of the deuteron; they could be used for testing theoretical models and for proposing future experiments, for example at JLab. Such measurements could open a new field of spin physics in spin-one hadrons.
Targeted mutagenesis tools for modelling psychiatric disorders.
Deussing, Jan M
2013-10-01
In the 1980s, the basic principles of gene targeting were discovered and forged into sharp tools for efficient and precise engineering of the mouse genome. Since then, genetic mouse models have substantially contributed to our understanding of major neurobiological concepts and are of utmost importance for our comprehension of neuropsychiatric disorders. The "domestication" of site-specific recombinases and the continuous creative technological developments involving the implementation of previously identified biological principles such as transcriptional and posttranslational control now enable conditional mutagenesis with high spatial and temporal resolution. The initiation and successful accomplishment of large-scale efforts to functionally annotate the entire mouse genome and to build strategic resources for the research community have significantly accelerated the rapid proliferation and broad propagation of mouse genetic tools. Addressing neurobiological processes with the assistance of genetic mouse models is a routine procedure in psychiatric research and will be further extended in order to improve our understanding of disease mechanisms. In light of the highly complex nature of psychiatric disorders and the current lack of strong causal genetic variants, a major future challenge is to model psychiatric disorders more appropriately. Humanized mice, and the recently developed toolbox of site-specific nucleases for more efficient and simplified tailoring of the genome, offer the perspective of significantly improved models. Ultimately, these tools will push the limits of gene targeting beyond the mouse to allow genome engineering in any model organism of interest.
Constraints on parton distribution from CDF
Bodek, A.; CDF Collaboration
1995-10-01
The asymmetry in W^{-} - W^{+} production in pp̄ collisions and Drell-Yan data place tight constraints on parton distribution functions. The W asymmetry data constrain the slope of the quark distribution ratio d(x)/u(x) in the x range 0.007-0.27. The published W asymmetry results from the CDF 1992-93 data (~20 pb^{-1}) greatly reduce the systematic error originating from the choice of PDFs in the W mass measurement at CDF. These published results have also been included in the CTEQ3, MRSA, and GRV94 parton distribution fits. These modern parton distribution functions are still in good agreement with the new 1993-94 CDF data (~108 pb^{-1} combined). Preliminary results from CDF for the Drell-Yan cross section in the mass range 11-350 GeV/c^{2} are discussed.
Studies of Transverse Momentum Distributions of Partons
NASA Astrophysics Data System (ADS)
Avagyan, Harut
2014-03-01
The detailed understanding of the orbital structure of partonic distributions, encoded in Transverse Momentum Dependent (TMD) parton distributions, has been widely recognized as a key objective of the JLab 12 GeV upgrade and the polarized pp program at RHIC, and as a driving force behind the construction of the Electron-Ion Collider. Several proposals to study TMDs using different spin-azimuthal asymmetries at JLab12 have already been approved by the JLab PAC and were awarded the highest physics rating. Although the interest in TMDs has grown enormously, we are still in need of fresh theoretical and phenomenological ideas. One of the main challenges still remaining is the extraction of the actual 3D parton distribution functions from hard scattering processes in nucleons and nuclei. In this talk, we present an overview of the latest developments and future studies of TMDs.
Parton Propagation and Fragmentation in QCD Matter
Alberto Accardi, Francois Arleo, William Brooks, David D'Enterria, Valeria Muccifora
2009-12-01
We review recent progress in the study of parton propagation, interaction, and fragmentation in both cold and hot strongly interacting matter. Experimental highlights on high-energy hadron production in deep-inelastic lepton-nucleus scattering, proton-nucleus and heavy-ion collisions, as well as Drell-Yan processes in hadron-nucleus collisions are presented. The existing theoretical frameworks for describing the in-medium interaction of energetic partons and the space-time evolution of their fragmentation into hadrons are discussed and confronted with experimental data. We conclude with a list of theoretical and experimental open issues, and a brief description of future relevant experiments and facilities.
Triple parton scattering in collinear approximation of perturbative QCD
NASA Astrophysics Data System (ADS)
Snigirev, A. M.
2016-08-01
Revised formulas for the inclusive cross section of triple parton scattering in a hadron collision are suggested, based on modified collinear three-parton distributions. Possible phenomenological issues are discussed.
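For orientation, multiple-parton-scattering estimates are often quoted relative to the geometric "pocket formula", which combines single-scattering cross sections through an effective cross section. The sketch below implements that baseline formula (illustrative numbers and function names; it is not the revised formulas proposed in this paper):

```python
from math import factorial

def mps_cross_section(singles_mb, sigma_eff_mb, symmetry=None):
    """Geometric "pocket formula" for N-parton scattering (mb units):
    sigma_N = (1/S) * prod(sigma_i) / sigma_eff**(N-1),
    with symmetry factor S = N! for identical subprocesses."""
    n = len(singles_mb)
    s = symmetry if symmetry is not None else factorial(n)
    prod = 1.0
    for sig in singles_mb:
        prod *= sig
    return prod / (s * sigma_eff_mb ** (n - 1))

# Three identical 1 mb subprocesses with sigma_eff = 15 mb:
print(mps_cross_section([1.0, 1.0, 1.0], 15.0))  # 1/(6*225) ≈ 7.4e-4 mb
```

Corrections from parton correlations, such as those encoded in modified collinear multi-parton distributions, shift the result away from this naive product.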
Modeling unmanned system collaborative target engagement
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.
2007-04-01
This paper describes a novel algorithm for collaborative target engagement by unmanned systems (UMS) resulting in emergent behavior. We demonstrate UMS collaborative engagement using a simulation testbed model of a road, convoy vehicles traveling along the road, a squadron of unmanned aerial vehicles (UAVs), and multiple unmanned ground vehicles (UGVs) which are set to detonate when in close proximity to a convoy vehicle. No explicit artificial intelligence or swarming algorithms were used. Collision avoidance was an intrinsic phenomenon. All entities acted independently throughout the simulation, but were given similar local instructions for possible courses of action (COAs) depending on the current situation. Our algorithm and results are summarized in this paper.
A simple model for gene targeting.
Ratilainen, T; Lincoln, P; Nordén, B
2001-01-01
Sequence-specific binding to genomic-size DNA sequences by artificial agents is of major interest for the development of gene-targeting strategies, gene-diagnostic applications, and biotechnical tools. The binding of one such agent, peptide nucleic acid (PNA), to a randomized human genome has been modeled with statistical mass action calculations. With the length of the PNA probe, the average per-base binding constant k(0), and the binding affinity loss of a mismatched base pair as main parameters, the specificity was gauged as a "therapeutic ratio" G = maximum safe [PNA](tot)/minimal efficient [PNA](tot). This general, though simple, model suggests that, above a certain threshold length of the PNA, the microscopic binding constant k(0) is the primary determinant for optimal discrimination, and that only a narrow range of rather low k(0) values gives a high therapeutic ratio G. For diagnostic purposes, the value of k(0) could readily be modulated by changing the temperature, due to the substantial ΔH° associated with the binding equilibrium. Applied to gene therapy, our results stress the need for appropriate control of the binding constant and added amount of the gene-targeting agent, to meet the varying conditions (ionic strength, presence of competing DNA-binding molecules) found in the cell. PMID:11606298
Fragmentation of parton jets at small x
Kirschner, R.
1985-08-01
The parton fragmentation function is calculated in the region of small x in the doubly logarithmic approximation of QCD. For this, the method of separating the softest particle, which has hitherto been applied only in the Regge kinematic region, is developed. Simple arguments based on unitarity and gauge invariance are used to derive the well known condition of ordering of the emission angles.
Progress in the dynamical parton distributions
Jimenez-Delgado, Pedro
2012-06-01
The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.
Parton Distributions and Spin-Orbital Correlations
Yuan, Feng
2007-09-24
In this talk, I summarize a recent study showing that the large-x parton distributions contain important information on the quark orbital angular momentum of the nucleon. This contribution could explain the conflict between the experimental data and the theory predictions for the polarized quark distributions. Future experiments at JLab shall provide a further test of our predictions.
Systematic Improvement of QCD Parton Showers
Winter, Jan; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Schonherr, Marek; Zapp, Korinna; Schumann, Steffen; Siegert, Frank
2012-05-17
In this contribution, we will give a brief overview of the progress that has been achieved in the field of combining matrix elements and parton showers. We exemplify this by focusing on the case of electron-positron collisions and by reporting on recent developments as accomplished within the SHERPA event generation framework.
Global analysis of nuclear parton distributions
NASA Astrophysics Data System (ADS)
de Florian, Daniel; Sassot, Rodolfo; Zurita, Pia; Stratmann, Marco
2012-04-01
We present a new global QCD analysis of nuclear parton distribution functions and their uncertainties. In addition to the most commonly analyzed data sets for the deep-inelastic scattering of charged leptons off nuclei and Drell-Yan dilepton production, we include also measurements for neutrino-nucleus scattering and inclusive pion production in deuteron-gold collisions. The analysis is performed at next-to-leading order accuracy in perturbative QCD in a general mass variable flavor number scheme, adopting a current set of free nucleon parton distribution functions, defined accordingly, as reference. The emerging picture is one of consistency, where universal nuclear modification factors for each parton flavor reproduce the main features of all data without any significant tension among the different sets. We use the Hessian method to estimate the uncertainties of the obtained nuclear modification factors and examine critically their range of validity in view of the sparse kinematic coverage of the present data. We briefly present several applications of our nuclear parton densities in hard nuclear reactions at BNL-RHIC, CERN-LHC, and a future electron-ion collider.
Intrinsic transverse momentum and parton correlations from dynamical chiral symmetry breaking
Peter Schweitzer, Mark Strikman, Christian Weiss
2013-01-01
The dynamical breaking of chiral symmetry in QCD is caused by nonperturbative interactions on a distance scale rho ~ 0.3 fm, much smaller than the typical hadronic size R ~ 1 fm. These short-distance interactions influence the intrinsic transverse momentum distributions of partons and their correlations at a low normalization point. We study this phenomenon in an effective description of the low-energy dynamics in terms of chiral constituent quark degrees of freedom, which refers to the large-N_c limit of QCD. The nucleon is obtained as a system of constituent quarks and antiquarks moving in a self-consistent classical chiral field (relativistic mean-field approximation, or chiral quark-soliton model). The calculated transverse momentum distributions of constituent quarks and antiquarks are matched with QCD quarks, antiquarks and gluons at the chiral symmetry-breaking scale rho^{-2}. We find that the transverse momentum distribution of valence quarks is localized at p_T^2 ~ R^{-2} and roughly of Gaussian shape. The distribution of unpolarized sea quarks exhibits a would-be power-like tail ~1/p_T^2 extending up to the chiral symmetry-breaking scale. Similar behavior is observed in the flavor-nonsinglet polarized sea. The high-momentum tails are the result of short-range correlations between sea quarks in the nucleon's light-cone wave function, which are analogous to short-range NN correlations in nuclei. We show that the nucleon's light-cone wave function contains correlated pairs of transverse size rho << R with scalar-isoscalar (Sigma) and pseudoscalar-isovector (Pi) quantum numbers, whose internal wave functions have a distinctive spin structure and become identical at p_T^2 ~ rho^{-2} (restoration of chiral symmetry). These features are model-independent and represent an effect of dynamical chiral symmetry breaking on the nucleon's partonic structure. Our results have numerous implications for the transverse momentum distributions of particles produced in hard
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
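The mixture described above (N times a geometric count of failed full passes, plus a uniform position within the final pass) is easy to check numerically. The sketch below is one reading of that model, with illustrative parameters:

```python
import random

def simulate_searches(N, P, trials, rng):
    """Draw the number of inspections T = N*F + U, where F is the number of
    complete failed passes (geometric, success probability P per pass) and
    U is the uniform position of the target within the final pass."""
    samples = []
    for _ in range(trials):
        failed_passes = 0
        while rng.random() > P:      # each full pass detects with probability P
            failed_passes += 1
        u = rng.randint(1, N)        # position of the target in the final pass
        samples.append(N * failed_passes + u)
    return samples

N, P = 10, 0.5
rng = random.Random(7)
ts = simulate_searches(N, P, 200_000, rng)
mc_mean = sum(ts) / len(ts)
an_mean = N * (1 - P) / P + (N + 1) / 2   # analytic mean of T
print(mc_mean, an_mean)
```

With N = 10 and P = 0.5 the analytic mean is 15.5, and the Monte Carlo estimate agrees to well within its statistical error.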
Examining the Crossover from the Hadronic to Partonic Phase in QCD
Xu Mingmei; Yu Meiling; Liu Lianshou
2008-03-07
A mechanism, consistent with color confinement, for the transition between the perturbative and physical vacua during the gradual crossover from the hadronic to the partonic phase is proposed. The essence of this mechanism is the appearance and growth of grape-shaped regions of perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by using this model.
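Site percolation, the generic mechanism invoked here, can be sketched in a few lines. This is a plain 2D square-lattice illustration of a spanning cluster, not the authors' dynamical parton-delocalization model:

```python
import random
from collections import deque

def spans(grid):
    """True if occupied sites connect the top row to the bottom row (BFS)."""
    n = len(grid)
    seen = set((0, j) for j in range(n) if grid[0][j])
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def random_grid(n, p, rng):
    """n x n lattice with each site occupied independently with probability p."""
    return [[rng.random() < p for _ in range(n)] for _ in range(n)]

rng = random.Random(1)
# Spanning is certain for a fully occupied lattice and impossible for an
# empty one; near the 2D site-percolation threshold (p ~ 0.593) the outcome
# fluctuates from sample to sample.
print(spans(random_grid(16, 1.0, rng)), spans(random_grid(16, 0.0, rng)))
```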
Modelling nutrient reduction targets - model structure complexity vs. data availability
NASA Astrophysics Data System (ADS)
Capell, Rene; Lausten Hansen, Anne; Donnelly, Chantal; Refsgaard, Jens Christian; Arheimer, Berit
2015-04-01
In most parts of Europe, macronutrient concentrations and loads in surface water are currently affected by human land use and land management choices. Moreover, current macronutrient concentration and load levels often violate European Water Framework Directive (WFD) targets, and effective measures to reduce these levels are sought after by water managers. Identifying such effective measures in specific target catchments should consider the four key processes of release, transport, retention, and removal, and thus physical catchment characteristics such as soils and geomorphology, but also management data such as crop distribution and fertilizer application regimes. The BONUS-funded research project Soils2Sea evaluates new, differentiated regulation strategies to cost-efficiently reduce nutrient loads to the Baltic Sea, based on new knowledge of nutrient transport and retention processes between soils and the coast. Within the Soils2Sea framework, we examine the capability of two integrated hydrological and nutrient transfer models, HYPE and Mike SHE, to model runoff and nitrate flux responses in the 100 km2 Norsminde catchment, Denmark, comparing different model structures and data bases. We focus on comparing modelled nitrate reductions within and below the root zone, and evaluate model performance as a function of available model structures (process representation within the model) and available data bases (temporal forcing data and spatial information). This model evaluation is performed to aid in the development of model tools which will be used to estimate the effect of new nutrient reduction measures on the catchment to regional scale, where available data - both climate forcing and land management - typically become increasingly limited with the targeted spatial scale and may act as a bottleneck for process conceptualizations and thus the value of a model as a tool to provide decision support for differentiated regulation strategies.
Studies of transverse momentum dependent parton distributions and Bessel weighting
Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.
2015-03-01
In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
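The core of Bessel weighting is the Fourier-Bessel (Hankel) transform that takes a p_T-space distribution to b_T space. Below is a minimal, stdlib-only numerical check using a Gaussian transverse-momentum profile, whose transform is known in closed form; the widths and grid sizes are illustrative, not taken from the paper:

```python
import math

def j0(x, n=2000):
    """Bessel function J0 via its integral representation (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def hankel_of_gaussian(b, sigma, pmax=None, steps=800):
    """2*pi * integral_0^inf p J0(b p) exp(-p^2/sigma^2) dp via trapezoid rule."""
    pmax = pmax if pmax is not None else 6.0 * sigma
    h = pmax / steps
    total = 0.0
    for k in range(steps + 1):
        p = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * p * j0(b * p) * math.exp(-(p / sigma) ** 2)
    return 2.0 * math.pi * total * h

sigma, b = 1.0, 2.0
numeric = hankel_of_gaussian(b, sigma)
analytic = math.pi * sigma**2 * math.exp(-(b * sigma) ** 2 / 4.0)
print(numeric, analytic)   # the two should agree closely
```

The closed-form result pi*sigma^2*exp(-b^2*sigma^2/4) follows from the standard Gaussian Hankel integral; the numerical transform reproduces it to better than a percent on this grid.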
Representing the observer in electro-optical target acquisition models.
Vollmerhausen, Richard H
2009-09-28
Electro-optical target acquisition models predict the probability that a human observer recognizes or identifies a target. To accurately model targeting performance, the impact of imager blur and noise on human vision must be quantified. In the most widely used target acquisition models, human vision is treated as a "black box" that is characterized by its signal transfer response and detection thresholds. This paper describes an engineering model of observer vision. Characteristics of the observer model are compared to psychophysical data. This paper also describes how to integrate the observer model into both reflected light and thermal sensor models. PMID:19907512
Ma, Guo -Liang; Bzdak, Adam
2014-11-04
In this study, we show that the incoherent elastic scattering of partons, as present in a multi-phase transport model (AMPT), with a modest parton–parton cross-section of σ = 1.5 – 3 mb, naturally explains the long-range two-particle azimuthal correlation as observed in proton–proton and proton–nucleus collisions at the Large Hadron Collider.
The sound generated by a fast parton in the quark-gluon plasma is a crescendo
NASA Astrophysics Data System (ADS)
Neufeld, R. B.; Müller, B.
2009-11-01
The total energy deposited into the medium per unit length by a fast parton traversing a quark-gluon plasma is calculated. We take the medium excitation due to collisions to be given by the well known expression for the collisional drag force. The parton's radiative energy loss contributes to the energy deposition because each radiated gluon acts as an additional source of collisional energy loss in the medium. In our model, this leads to a length dependence of the differential energy loss due to the interactions of radiated gluons with the medium. The final result, which is a sum of the primary and the secondary contributions, is then treated as the coefficient of a local hydrodynamic source term. Results are presented for the energy density wave induced by two fast, back-to-back partons created in an initial hard interaction.
Generalized parton distributions and exclusive processes
Guzey, Vadim
2013-10-01
In the last fifteen years, GPDs have emerged as a powerful tool to reveal such aspects of the QCD structure of the nucleon as 3D parton correlations and distributions, and the spin content of the nucleon. Further advances in the field of GPDs and hard exclusive processes rely on developments in theory and new methods in phenomenology, such as new flexible parameterizations, neural networks, and global QCD fits, and on new high-precision data covering unexplored kinematics: JLab at 6 and 12 GeV, Hermes with recoil detector, Compass, EIC. This slide-show presents: nucleon structure in QCD, particularly hard processes, factorization and parton distributions; and a brief overview of GPD phenomenology, including basic properties of GPDs, GPDs and QCD structure of the nucleon, and constraining GPDs from experiments.
Target Recognition Using Neural Networks for Model Deformation Measurements
NASA Technical Reports Server (NTRS)
Ross, Richard W.; Hibler, David L.
1999-01-01
Optical measurements provide a non-invasive method for measuring deformation of wind tunnel models. Model deformation systems use targets mounted or painted on the surface of the model to identify known positions, and photogrammetric methods are used to calculate 3-D positions of the targets on the model from digital 2-D images. Under ideal conditions, the reflective targets are placed against a dark background and provide high-contrast images, aiding in target recognition. However, glints of light reflecting from the model surface, or reduced contrast caused by light source or model smoothness constraints, can compromise accurate target determination using current algorithmic methods. This paper describes a technique using a neural network and image processing technologies which increases the reliability of target recognition systems. Unlike algorithmic methods, the neural network can be trained to identify the characteristic patterns that distinguish targets from other objects of similar size and appearance and can adapt to changes in lighting and environmental conditions.
Automating ground-fixed target modeling with the smart target model generator
NASA Astrophysics Data System (ADS)
Verner, D.; Dukes, R.
2007-04-01
The Smart Target Model Generator (STMG) is an AFRL/MNAL sponsored tool for generating 3D building models for use in various weapon effectiveness tools. These tools include tri-service approved tools such as Modular Effectiveness/Vulnerability Assessment (MEVA), Building Analysis Module in Joint Weaponeering System (JWS), PENCRV3D, and WinBlast. It also supports internal dispersion modeling of chemical contaminants. STMG also has capabilities to generate infrared or other sensor images. Unlike most CAD-models, STMG provides physics-based component properties such as strength, density, reinforcement, and material type. Interior components such as electrical and mechanical equipment, rooms, and ducts are also modeled. Buildings can be manually created with a graphical editor or automatically generated using rule-bases which size and place the structural components using rules based on structural engineering principles. In addition to its primary purposes of supporting conventional kinetic munitions, it can also be used to support sensor modeling and automatic target recognition.
Tradeoffs among watershed model calibration targets for parameter estimation
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
Generalized Parton Distributions And Deeply Virtual Compton Scattering At Clas
De Masi, Rita
2007-09-01
Deeply virtual Compton scattering is the simplest process giving access to the generalized parton distributions of the nucleon. A dedicated large-statistics experiment for the measurement of deeply virtual Compton scattering with a 6 GeV polarized electron beam on a proton target has been performed at Hall B of Jefferson Laboratory with the CLAS spectrometer. The experiment covered a wide kinematic range, allowing the study of the beam spin asymmetry as a function of the Bjorken variable xB, the Mandelstam variable t, the virtual photon four-momentum squared Q2, and the angle phi between the leptonic and hadronic planes. The preliminary results are in agreement with previous measurements and with the predicted twist-2 dominance.
Nuclear and partonic dynamics in high energy elastic nucleus-nucleus scattering
NASA Astrophysics Data System (ADS)
Małecki, Andrzej
1991-10-01
A hybrid description of diffraction which combines a geometrical modeling of multiple scattering with many-channel effects resulting from intrinsic dynamics on the nuclear and subnuclear level is presented. The application to 4He-4He elastic scattering is satisfactory. Our analysis suggests that, at large momentum transfers, the parton constituents of nucleons immersed in nuclei are deconfined.
J. J. Sakurai Prize for Theoretical Particle Physics Talk: Partons, QCD, and Factorization
NASA Astrophysics Data System (ADS)
Soper, Davison
2009-05-01
Many important cross sections in high-energy collisions are analyzed using factorization properties. I review the nature of factorization, how it arose from the parton model, and current issues in its development. This talk will be coordinated with the one by Collins.
Self-Organizing Maps and Parton Distribution Functions
K. Holcomb, Simonetta Liuti, D. Z. Perry
2011-05-01
We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.
Hierarchical target model analysis of tactical thermal imagery
NASA Astrophysics Data System (ADS)
Lee, Harry C.; Olson, Teresa L. P.; Sefcik, Jason A.
2002-07-01
Hierarchical Target Model Analysis (HTMA) is an automatic pattern matching process for categorizing tactical targets. Stored target model information is re-projected into the image space using the sensor camera model state vector. The analysis is carried out in image gradient angle space for greater flexibility and reduced processing. Re-sampling the gradient angle space allows the classification process to work at a wider variety of target ranges. The target model database is built from an assortment of both target operating and background environmental conditions. Incremental classification is possible by applying the matching strategy at increasing target resolution levels that are either self or range closure induced. The first application of this process has been on thermal imagery. It can easily be extended to other image domains.
Spatial frequency dependence of target signature for infrared performance modeling
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Olson, Jeffrey
2011-05-01
The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally, the role of task difficulty and its relationship to the target set is discussed.
Transorbital target localization in the porcine model
NASA Astrophysics Data System (ADS)
DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.
2013-03-01
Current pharmacological therapies for the treatment of chronic optic neuropathies such as glaucoma are often inadequate due to their inability to directly affect the optic nerve and prevent neuron death. While drugs that target the neurons have been developed, existing methods of administration are not capable of delivering an effective dose of medication along the entire length of the nerve. We have developed an image-guided system that utilizes a magnetically tracked flexible endoscope to navigate to the back of the eye and administer therapy directly to the optic nerve. We demonstrate the capabilities of this system with a series of targeted surgical interventions in the orbits of live pigs. Target objects consisted of NMR microspherical bulbs with a volume of 18 μL filled with either water or diluted gadolinium-based contrast, and prepared with either the presence or absence of a visible coloring agent. A total of 6 pigs were placed under general anesthesia and two microspheres of differing color and contrast content were blindly implanted in the fat tissue of each orbit. The pigs were scanned with T1-weighted MRI, image volumes were registered, and the microsphere containing gadolinium contrast was designated as the target. The surgeon was required to navigate the flexible endoscope to the target and identify it by color. For the last three pigs, a 2D/3D registration was performed such that the target's coordinates in the image volume were noted and its location on the video stream was displayed with a crosshair to aid in navigation. The surgeon was able to correctly identify the target by color, with an average intervention time of 20 minutes for the first three pigs and 3 minutes for the last three.
Understanding target delineation using simple probabilistic modelling
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2015-10-01
Performance assessment is carried out for a simple target delineation process based on thresholding and shape fitting. The method uses the information contained in Receiver Operating Characteristic curves together with basic observations regarding target sizes and shapes. Performance is gauged by considering the delineations that might result from having particular arrangements of detected pixels within the vicinity of a hypothesized target. In particular, the method considers the qualities of delineations generated when having various combinations of detected pixels at a number of locations around the inner and outer boundaries of the underlying object. Three distinct types of arrangement for pixels on the inner target boundary are considered. Each has the potential to lead to a good quality delineation in a thresholding and shape fitting scheme. The deleterious effect of false alarms within the surrounding local region is also taken into account. The resulting ensembles of detected pixels are treated using familiar rules for combination to form probabilities for the delineations as a whole. Example results are produced for simple target prototypes in cluttered SAR imagery.
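The combination rule sketched in this abstract (several alternative arrangements of detected inner-boundary pixels, degraded by false alarms in the surrounding region) amounts to a line of probability algebra. The formula and operating points below are an illustrative reading under an independence assumption, not the paper's exact ensembles:

```python
def p_good_delineation(pd, pfa, m, arrangements, n_local):
    """Probability that at least one of `arrangements` alternative patterns of
    m inner-boundary pixels is fully detected (each pixel independently with
    probability pd), while none of the n_local surrounding background pixels
    produces a false alarm (each independently with probability pfa)."""
    p_one_pattern = pd ** m                                   # one full pattern
    p_some_pattern = 1.0 - (1.0 - p_one_pattern) ** arrangements
    p_clean_locale = (1.0 - pfa) ** n_local                   # no disruptive FAs
    return p_some_pattern * p_clean_locale

# Illustrative (pd, pfa) operating point read off a notional ROC curve.
print(p_good_delineation(pd=0.9, pfa=0.01, m=4, arrangements=3, n_local=50))
```

Sweeping (pd, pfa) along a measured ROC curve turns this into a delineation-quality performance curve of the kind the paper assesses.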
J.W. Negele; R.C. Brower; P. Dreher; R. Edwards; G. Fleming; Ph. Hagler; U.M. Heller; Th. Lippert; A.V.Pochinsky; D.B. Renner; D. Richards; K. Schilling; W. Schroers
2004-04-01
This talk presents recent calculations in full QCD of the lowest three moments of generalized parton distributions and the insight they provide into the behavior of nucleon electromagnetic form factors, the origin of the nucleon spin, and the transverse structure of the nucleon. In addition, new exploratory calculations in the chiral regime of full QCD are discussed.
Nondiagonal parton distributions at small x
NASA Astrophysics Data System (ADS)
Frankfurt, L. L.; Freund, A.; Guzey, V.; Strikman, M.
1997-04-01
In this paper we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal GLAP evolution equations with a modified version of the CTEQ-package and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution x2G(x1,x2,Q2) can be well approximated at small x by the conventional gluon density xG(x,Q2) and explain that the cross sections of hard diffractive processes are determined by x2G(x1,x2).
Global QCD Analysis of Polarized Parton Densities
Stratmann, Marco
2009-08-04
We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.
Studies of partonic transverse momentum and spin structure of the nucleon
NASA Astrophysics Data System (ADS)
Contalbrigo, M.
2014-06-01
The investigation of the partonic degrees of freedom beyond the collinear approximation (3D description) has gained increasing interest over the last decade. The Thomas Jefferson National Laboratory, after the CEBAF upgrade to 12 GeV, will become the most complete facility for the investigation of the hadron structure in the valence region by scattering polarized electrons off various polarized nucleon targets. A compendium of the planned experiments is presented here.
From Bethe-Salpeter Wave functions to Generalised Parton Distributions
NASA Astrophysics Data System (ADS)
Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2016-09-01
We review recent works on the modelling of generalised parton distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is brought to chiral properties and especially the so-called soft pion theorem, and its link with the Axial-Vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Computations beyond the impulse approximation are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.
Excited nucleon as a van der Waals system of partons
Jenkovszky, L. L.; Muskeyev, A. O.; Yezhov, S. N.
2012-06-15
Saturation in deep inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) is associated with a phase transition between the partonic gas, typical of moderate x and Q{sup 2}, and the partonic fluid appearing at increasing Q{sup 2} and decreasing Bjorken x. We suggest the van der Waals equation of state to properly describe this phase transition.
Nucleon Generalized Parton Distributions from Full Lattice QCD
Robert Edwards; Philipp Haegler; David Richards; John Negele; Konstantinos Orginos; Wolfram Schroers; Jonathan Bratt; Andrew Pochinsky; Michael Engelhardt; George Fleming; Bernhard Musch; Dru Renner
2007-07-03
We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f=2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3.
The parton orbital angular momentum: Status and prospects
NASA Astrophysics Data System (ADS)
Liu, Keh-Fei; Lorcé, Cédric
2016-06-01
Theoretical progress on the formulation and classification of the quark and gluon orbital angular momenta (OAM) is reviewed. Their relation to parton distributions and open questions and puzzles are discussed. We give a status report on the lattice calculation of the parton kinetic and canonical OAM and point out several strategies to calculate the quark and gluon canonical OAM on the lattice.
Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng
2016-05-01
Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiling of drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. It is therefore highly important to predict the DTI profiling of drugs on a genome-scale level reliably and rapidly. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct prediction models. Ensemble learning from these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server will predict the activity of the user's molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profiling that can be used as a feature vector of chemicals for wide applications. The 623 SAR models related to 623 human proteins were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiling to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug modes of action, sufficiently demonstrating the wide application value of DTI profiling. The TargetNet webserver is designed based on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com . PMID:27167132
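The TargetNet record above pairs Naïve Bayes models with binary molecular fingerprints. As an illustration only (the server's own code is not part of this record), here is a minimal Bernoulli Naive Bayes over toy fingerprint bit-vectors in pure Python; the bit layout, data, and labels are invented for the sketch.

```python
import math

def train_bernoulli_nb(X, y):
    """Fit Bernoulli Naive Bayes with Laplace smoothing.
    X: binary fingerprint vectors, y: 0/1 activity labels."""
    n_bits = len(X[0])
    counts = {0: [0] * n_bits, 1: [0] * n_bits}
    class_n = {0: 0, 1: 0}
    for bits, label in zip(X, y):
        class_n[label] += 1
        for i, b in enumerate(bits):
            counts[label][i] += b
    # P(bit_i = 1 | class), Laplace-smoothed
    theta = {c: [(counts[c][i] + 1) / (class_n[c] + 2) for i in range(n_bits)]
             for c in (0, 1)}
    prior = {c: class_n[c] / len(y) for c in (0, 1)}
    return theta, prior

def predict_proba(x, theta, prior):
    """Return P(active | fingerprint x) via normalized log-posteriors."""
    logp = {}
    for c in (0, 1):
        lp = math.log(prior[c])
        for b, t in zip(x, theta[c]):
            lp += math.log(t if b else 1.0 - t)
        logp[c] = lp
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return math.exp(logp[1] - m) / z

# Toy data: bit 0 correlates with activity.
X = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 0]
theta, prior = train_bernoulli_nb(X, y)
p = predict_proba([1, 0, 0], theta, prior)
```

An ensemble in the spirit of the abstract would average such probabilities across models trained on different fingerprint types.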
Metrics for image-based modeling of target acquisition
NASA Astrophysics Data System (ADS)
Fanning, Jonathan D.
2012-06-01
This paper presents an image-based system performance model. The image-based system model uses an image metric to compare a given degraded image of a target, as seen through the modeled system, to the set of possible targets in the target set. This is repeated for all possible targets to generate a confusion matrix. The confusion matrix is used to determine the probability of identifying a target from the target set when using a particular system in a particular set of conditions. The image metric used in the image-based model should correspond closely to human performance. The image-based model performance is compared to human perception data on Contrast Threshold Function (CTF) tests, naked eye Triangle Orientation Discrimination (TOD), and TOD including an infrared camera system. Image-based system performance modeling is useful because it allows modeling of arbitrary image processing. Modern camera systems include more complex image processing, much of which is nonlinear. Existing linear system models, such as the TTP metric model implemented in NVESD models such as NV-IPM, assume that the entire system is linear and shift invariant (LSI). The LSI assumption makes modeling nonlinear processes difficult, such as local area processing/contrast enhancement (LAP/LACE), turbulence reduction, and image fusion.
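The confusion-matrix procedure described in the abstract can be sketched in a few lines. The MSE metric, the four-pixel "images", and the blur function below are stand-ins for the paper's image metric and modeled system degradation; only the structure of the computation follows the text.

```python
# Toy "images" are flat grayscale lists; the metric is mean squared error.
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def confusion_matrix(target_set, degrade):
    """Row i: true target i shown (after degradation through the system);
    column j gets a 1 when the metric picks target j as the best match."""
    n = len(target_set)
    cm = [[0] * n for _ in range(n)]
    for i, truth in enumerate(target_set):
        seen = degrade(truth)                      # image as seen through system
        scores = [mse(seen, t) for t in target_set]
        cm[i][scores.index(min(scores))] += 1
    return cm

def prob_id(cm):
    """Probability of correct identification: mean of the diagonal."""
    n = len(cm)
    return sum(cm[i][i] for i in range(n)) / n

targets = [[0, 0, 9, 9], [9, 9, 0, 0], [9, 0, 9, 0]]
blur = lambda img: [0.8 * v + 1.0 for v in img]    # stand-in degradation
cm = confusion_matrix(targets, blur)
```

For this easy toy set the degradation does not cause confusions, so the diagonal stays full; a stronger degradation would populate off-diagonal entries and lower `prob_id`.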
Parton Charge Symmetry Violation: Electromagnetic Effects and W Production Asymmetries
J.T. Londergan; D.P. Murdock; A.W. Thomas
2006-04-14
Recent phenomenological work has examined two different ways of including charge symmetry violation in parton distribution functions. First, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the magnitude of parton charge symmetry breaking. In a second approach, two groups have included the coupling of partons to photons in the QCD evolution equations. One possible experiment that could search for isospin violation in parton distributions is a measurement of the asymmetry in W production at a collider. In this work we include both of the postulated sources of parton charge symmetry violation. We show that, given charge symmetry violation of a magnitude consistent with existing high energy data, the expected W production asymmetries would be quite small, generally less than one percent.
NASA Astrophysics Data System (ADS)
Geiger, Klaus
1997-08-01
VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi) hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, which is followed by a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including a simple example), annotates input and control parameters, and discusses output data provided by it.
Li, Hanshan
2016-05-01
This paper researches the calculation method of space target optical characteristics to improve performance and sensitivity of the photoelectric detection target. In accordance with the detection principle of the photoelectric detection target and the detection screen thickness geometrical relationship, this paper sets up the space target spectral characteristic model using the surface element mesh analysis method and the bidirectional reflection distribution function. It provides the incident radiation energy calculation function in the optical lens entrance pupil of the photoelectric detection target detection area in order to determine the total spectral radiance intensity calculation function of the space target. The paper also reports on the minimum flux calculation function detected by the photoelectric detection target based on the definition of detection sensitivity and the change curve of the target's radiation energy when entering the detection area at different incident angles. Lastly, it demonstrates the spectral illuminations of an optical detection system under different radiation wavelengths and reflection radiation angles, as well as change curves of the target's spectral radiation intensity passing through the detection screen area and at different incident angles from the same distance.
Impact of hadronic and nuclear corrections on global analysis of spin-dependent parton distributions
Jimenez-Delgado, Pedro; Accardi, Alberto; Melnitchouk, Wally
2014-02-01
We present the first results of a new global next-to-leading order analysis of spin-dependent parton distribution functions from the most recent world data on inclusive polarized deep-inelastic scattering, focusing in particular on the large-x and low-Q^2 regions. By directly fitting polarization asymmetries we eliminate biases introduced by using polarized structure function data extracted under nonuniform assumptions for the unpolarized structure functions. For analysis of the large-x data we implement nuclear smearing corrections for deuterium and 3He nuclei, and systematically include target mass and higher twist corrections to the g_1 and g_2 structure functions at low Q^2. We also explore the effects of Q^2 and W^2 cuts in the data sets, and the potential impact of future data on the behavior of the spin-dependent parton distributions at large x.
Novel phenomenology of parton distributions from the Drell-Yan process
NASA Astrophysics Data System (ADS)
Peng, Jen-Chieh; Qiu, Jian-Wei
2014-05-01
The Drell-Yan massive lepton-pair production in hadronic collisions provides a unique tool complementary to the Deep-Inelastic Scattering for probing the partonic substructures in hadrons. We review key concepts, approximations, and progress for QCD factorization of the Drell-Yan process in terms of collinear or transverse momentum dependent (TMD) parton distribution functions. We present experimental results from recent fixed-target Drell-Yan as well as W and Z boson production at colliders, focusing on the topics of flavor structure of the nucleon sea as well as the extraction of novel Sivers and Boer-Mulders functions via single transverse spin asymmetries and azimuthal lepton angular distribution of the Drell-Yan process. Prospects for future Drell-Yan experiments are also presented.
Monte Carlo modeling of spallation targets containing uranium and americium
NASA Astrophysics Data System (ADS)
Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter
2014-09-01
Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on 241Am and 243Am nuclei allows this model to be used for simulations with extended Am targets. It was demonstrated that the MCADS model can be used for calculating the values of critical mass for 233,235U, 237Np, 239Pu and 241Am. Several geometry options and material compositions (U, U + Am, Am, Am2O3) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deep subcritical targets with a neutron multiplication factor of k ∼ 0.5. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
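The quoted multiplication factor k ∼ 0.5 has a simple interpretation: in a subcritical assembly each source neutron starts a fission chain whose total yield is the geometric sum 1 + k + k² + …, which converges for k < 1. A minimal sketch (not part of MCADS) summing that chain:

```python
def multiplication(k_eff, generations=100):
    """Total neutrons per source neutron in a subcritical assembly:
    the fission-chain sum 1 + k + k^2 + ... (converges for k < 1)."""
    total, gen = 1.0, 1.0
    for _ in range(generations):
        gen *= k_eff
        total += gen
    return total

# Deep subcritical target, k ~ 0.5: each source neutron yields ~2 in total.
M = multiplication(0.5)
```

The closed form is M = 1/(1 − k), so k = 0.5 gives M = 2, while k approaching 1 makes the multiplication diverge toward criticality.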
Lappi, T.; Venugopalan, R.; Mantysaari, H.
2015-02-25
We argue that the proton multiplicities measured in Roman pot detectors at an electron ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events relative to minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality triggered parton fluctuations and insensitive to details of the model.
Pion valence-quark parton distribution function
NASA Astrophysics Data System (ADS)
Chang, Lei; Thomas, Anthony W.
2015-10-01
Within the Dyson-Schwinger equation formulation of QCD, a rainbow-ladder truncation is used to calculate the pion valence-quark distribution function (PDF). The gap equation is renormalized at a typical hadronic scale, of order 0.5 GeV, which is also set as the default initial scale for the pion PDF. We implement a corrected leading-order expression for the PDF which ensures that the valence quarks carry all of the pion's light-front momentum at the initial scale. The scaling behavior of the pion PDF at a typical partonic scale of order 5.2 GeV is found to be (1-x)^ν, with ν ≃ 1.6, as x approaches one.
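The quoted (1-x)^ν behavior is usually diagnosed through the effective exponent ν_eff(x) = d ln q(x) / d ln(1-x), which tends to ν as x → 1. A small numerical check on a toy PDF shape (the x^a prefactor and parameter values are invented; only the large-x exponent ν = 1.6 echoes the abstract):

```python
import math

def pdf(x, a=-0.5, nu=1.6):
    """Toy valence-quark PDF shape x^a (1-x)^nu (normalization omitted)."""
    return x ** a * (1.0 - x) ** nu

def effective_exponent(x, h=1.0e-6):
    """nu_eff(x) = d ln q / d ln(1-x) by central finite difference;
    tends to nu as x -> 1."""
    dlnq = math.log(pdf(x + h)) - math.log(pdf(x - h))
    dlog = math.log(1.0 - (x + h)) - math.log(1.0 - (x - h))
    return dlnq / dlog

nu_eff = effective_exponent(0.99)
```

Analytically ν_eff(x) = ν − a(1−x)/x for this shape, so the numerical value approaches 1.6 from above as x → 1.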
New limits on intrinsic charm in the nucleon from global analysis of parton distributions
Jimenez-Delgado, P.; Hobbs, T. J.; Londergan, J. T.; Melnitchouk, W.
2015-02-27
We present a new global QCD analysis of parton distribution functions, allowing for possible intrinsic charm (IC) contributions in the nucleon inspired by light-front models. The analysis makes use of the full range of available high-energy scattering data for Q^2 ≥ 1 GeV^2 and W^2 ≥ 3.5 GeV^2, including fixed-target proton and deuteron deep-inelastic scattering cross sections at lower energies that were excluded in previous global analyses. The expanded data set places more stringent constraints on the momentum carried by IC, with ⟨x⟩_IC at most 0.5% (corresponding to an IC normalization of ~1%) at the 4σ level for Δχ^2 = 1. We also assess the impact of older EMC measurements of F_2^c at large x, which favor a nonzero IC, but with very large χ^2 values.
Evolution effects on parton energy loss with detailed balance
Cheng Luan; Wang Enke
2010-07-15
The initial conditions in a chemically nonequilibrated medium and in a Bjorken expanding medium at the Relativistic Heavy Ion Collider (RHIC) are determined. With a set of rate equations describing the chemical equilibration of quarks and gluons based on perturbative QCD, we investigate the consequences for parton evolution at RHIC. When parton evolution is taken into account, the Debye screening mass and the inverse mean free path of gluons are shown to decrease with increasing proper time in the QGP medium. Parton evolution affects the parton energy loss with detailed balance: in both the chemically nonequilibrated and the Bjorken expanding medium, the energy loss from stimulated emission depends linearly on the propagating distance, rather than quadratically as in a static medium. Energy absorption cannot be neglected at intermediate jet energies and small propagating distances of the energetic parton, in contrast to the static medium, where it is important only at intermediate jet energies. This will increase the energy and distance dependence of the parton energy loss and will affect the shape of the suppression of moderately high p_T hadron spectra.
Target signature modeling and bistatic scattering measurement studies
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Lee, T. H.; Rojas, R.; Marhefka, R. J.; Bensman, D.
1989-01-01
Four areas of study are summarized: bistatic scattering measurements studies for a compact range; target signature modeling for test and evaluation hardware in the loop situation; aircraft code modification study; and SATCOM antenna studies on aircraft.
Georeferenced model simulations efficiently support targeted monitoring
NASA Astrophysics Data System (ADS)
Berlekamp, Jürgen; Klasmeier, Jörg
2010-05-01
The European Water Framework Directive (WFD) demands the good ecological and chemical status of surface waters. To meet the WFD's definition of good chemical status, surface water concentrations of priority pollutants must not exceed established environmental quality standards (EQS). Surveillance of the concentrations of numerous chemical pollutants in whole river basins by monitoring is laborious and time-consuming. Moreover, measured data often do not allow for immediate source apportionment, which is a prerequisite for defining promising reduction strategies to be implemented within the programme of measures. In this context, spatially explicit model approaches are highly advantageous because they provide a direct link between local point emissions (e.g. treated wastewater) or diffuse non-point emissions (e.g. agricultural runoff) and the resulting surface water concentrations. Scenario analyses with such models allow for a priori investigation of the potential positive effects of reduction measures such as optimization of wastewater treatment. The geo-referenced model GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers) has been designed to calculate spatially resolved average concentrations for different flow conditions (e.g. mean or low flow) based on emission estimates for local point sources such as treated effluents from wastewater treatment plants. The methodology was applied to selected pharmaceuticals (diclofenac, sotalol, metoprolol, carbamazepine) in the Main river basin in Germany (approx. 27,290 km²). Average concentrations of the compounds were calculated for each river reach in the whole catchment. Simulation results were evaluated by comparison with available data from orienting monitoring and used to develop an optimal monitoring strategy for the assessment of water quality regarding micropollutants at the catchment scale.
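GREAT-ER itself resolves whole river networks with flow statistics and in-stream fate; the core idea of linking a point emission to downstream reach concentrations by dilution can nonetheless be sketched in a few lines. The reach names, flows, and emission load below are invented for the sketch.

```python
# Each reach is (name, incremental flow gain in m3/s, emission load in g/s);
# reaches are listed from source to mouth along a single river stem.
def simulate(reaches):
    conc = {}
    load = flow = 0.0
    for name, gain, emission in reaches:
        flow += gain                      # tributaries/runoff add dilution water
        load += emission                  # point sources add chemical load
        conc[name] = load / flow          # g/m3, i.e. mg/L, in this reach
    return conc

river = [("headwater", 5.0, 0.0),         # clean upstream reach, 5 m3/s
         ("below_wwtp", 1.0, 0.012),      # treated effluent enters here
         ("mouth", 4.0, 0.0)]             # tributary dilution further down
conc = simulate(river)
```

Running a scenario (e.g. halving the effluent load after a treatment upgrade) is then just a change to the emission entry, which is precisely the kind of a priori analysis the abstract describes.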
Wideband radar signal modeling of ground moving targets in clutter
NASA Astrophysics Data System (ADS)
Malas, John A.; Pasala, Krishna M.; Westerkamp, John J.
2002-08-01
Research in the area of air-to-ground target detection, track and identification (ID) requires the development of target signal models for known geometric shapes moving in ground clutter. Space-time adaptive filtering techniques in particular make good use of temporal-spatial synthetic radar signal return data. A radar signal model is developed to generate synthetic wideband radar signal data for use in multi-channel adaptive signal processing.
A Tutorial on Target-Mediated Drug Disposition (TMDD) Models
Dua, P; Hawkins, E; van der Graaf, PH
2015-01-01
Target-mediated drug disposition (TMDD) is the phenomenon in which a drug binds with high affinity to its pharmacological target site (such as a receptor) to such an extent that this affects its pharmacokinetic characteristics. The aim of this Tutorial is to provide an introductory guide to the mathematical aspects of TMDD models for pharmaceutical researchers. Examples of Berkeley Madonna code for some models discussed in this Tutorial are provided in the Supplementary Materials. PMID:26225261
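The basic TMDD system is a small set of ODEs coupling free drug C, free target R (with turnover baseline ksyn/kdeg), and complex RC. A minimal forward-Euler sketch is below; the parameter values and dose are arbitrary illustrations, not taken from the Tutorial.

```python
def tmdd_euler(dose, t_end, dt=0.01,
               kel=0.1, kon=1.0, koff=0.05, kint=0.2, ksyn=1.0, kdeg=0.1):
    """Forward-Euler integration of the basic TMDD system:
       dC/dt  = -kel*C - kon*C*R + koff*RC
       dR/dt  =  ksyn - kdeg*R - kon*C*R + koff*RC
       dRC/dt =  kon*C*R - koff*RC - kint*RC
    C: free drug, R: free target (starts at baseline ksyn/kdeg), RC: complex."""
    C, R, RC = dose, ksyn / kdeg, 0.0
    t = 0.0
    while t < t_end:
        bind = kon * C * R - koff * RC        # net binding flux
        dC = -kel * C - bind
        dR = ksyn - kdeg * R - bind
        dRC = bind - kint * RC
        C, R, RC = C + dt * dC, R + dt * dR, RC + dt * dRC
        t += dt
    return C, R, RC

c_end, r_end, rc_end = tmdd_euler(10.0, 20.0)
```

In practice a stiff ODE solver is preferable to fixed-step Euler; the point here is only the structure of the binding, turnover, and internalization terms.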
Automated target recognition using passive radar and coordinated flight models
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2003-09-01
Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
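The final Rician comparison step can be illustrated as follows. The amplitude values in `library`, the diffuse parameter `sigma`, and the measured amplitude are all invented for this sketch, and the modified Bessel function I0 is computed from its power series rather than taken from the paper's toolchain.

```python
import math

def log_i0(z):
    """log of the modified Bessel function I0 via its power series
    sum_k (z/2)^(2k) / (k!)^2 (adequate for modest z)."""
    term, total = 1.0, 1.0
    for k in range(1, 80):
        term *= (z / 2.0) ** 2 / k ** 2
        total += term
    return math.log(total)

def rician_loglik(x, nu, sigma):
    """Log pdf of a Rician amplitude: specular nu, diffuse sigma."""
    s2 = sigma ** 2
    return (math.log(x / s2) - (x ** 2 + nu ** 2) / (2.0 * s2)
            + log_i0(x * nu / s2))

def classify(measured_amp, library, sigma=1.0):
    """Pick the library entry whose predicted RCS amplitude maximizes
    the Rician likelihood of the measured amplitude."""
    return max(library, key=lambda name: rician_loglik(measured_amp,
                                                       library[name], sigma))

library = {"fighter": 2.0, "airliner": 8.0, "bizjet": 4.5}
best = classify(7.6, library)
```

A real system would accumulate the log-likelihood over many pulses and aspect angles along the estimated flight path before declaring an identity.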
Modelling of dynamic targeting in the Air Operations Centre
NASA Astrophysics Data System (ADS)
Lo, Edward H. S.; Au, T. Andrew
2007-12-01
Air Operations Centres (AOCs) are high-stress multitask environments for the planning and execution of theatre-wide airpower. Operators have multiple responsibilities to ensure that the orchestration of air assets is coordinated to maximum effect. AOCs utilise a dynamic targeting process to immediately prosecute time-sensitive targets. For this process to work effectively, a timely decision must be made regarding the appropriate course of action before the action is enabled. A targeting solution is typically developed using a number of inter-related processes in the kill chain - the Find, Fix, Track, Target, Engage, and Assess (F2T2EA) model. The success of making the right decision about dynamic targeting is ultimately limited by the cognitive and cooperative skills of the team prosecuting the mission and their associated workload. This paper presents a model of human interaction and tasks within the dynamic targeting sequence. The complex network of tasks executed by the team can be analysed by simulating the model to identify possible information-processing bottlenecks and overloads. The model was subjected to various tests to generate typical outcomes, operator utilisation, durations, and rates of output in the dynamic targeting process. This capability will allow for future "what-if" evaluations of numerous concepts for team formation or task reallocation, complementing live exercises and experiments.
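The paper's task-network model is far richer, but the way such a simulation exposes utilisation and bottlenecks can be sketched with a greedy list scheduler: each target passes through the F2T2EA stages in order, and each stage occupies one free operator. The stage durations, arrival times, and team size below are invented.

```python
def simulate_kill_chain(arrivals, stage_times, n_operators):
    """Greedy list-scheduling sketch of F2T2EA: targets are processed
    stage by stage in order; each stage needs one free operator."""
    ops = [0.0] * n_operators          # time at which each operator frees up
    finish, busy = [], 0.0
    for t0 in arrivals:                # targets sorted by detection time
        ready = t0
        for dur in stage_times:
            i = min(range(n_operators), key=lambda j: ops[j])
            start = max(ops[i], ready)
            ops[i] = ready = start + dur
            busy += dur
        finish.append(ready)
    makespan = max(finish)
    return makespan, busy / (n_operators * makespan)

# Find, Fix, Track, Target, Engage, Assess durations (minutes, invented).
stages = [3, 2, 2, 1, 1, 2]
span, util = simulate_kill_chain([0, 1, 2, 3], stages, n_operators=2)
```

"What-if" questions then reduce to rerunning with a different `n_operators` or reallocated `stages`, and watching how the makespan and utilisation respond.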
Prioritizing therapeutic targets using patient-derived xenograft models.
Lodhia, K A; Hadley, A M; Haluska, P; Scott, C L
2015-04-01
Effective systemic treatment of cancer relies on the delivery of agents with optimal therapeutic potential. The molecular age of medicine has provided genomic tools that can identify a large number of potential therapeutic targets in individual patients, heralding the promise of personalized treatment. However, determining which potential targets actually drive tumor growth and should be prioritized for therapy is challenging. Indeed, reliable molecular matches of target and therapeutic agent have been stringently validated in the clinic for only a small number of targets. Patient-derived xenografts (PDXs) are tumor models developed in immunocompromised mice using tumor procured directly from the patient. As patient surrogates, PDX models represent a powerful tool for addressing individualized therapy. Challenges include humanizing the immune system of PDX models and ensuring high quality molecular annotation, in order to maximize insights for the clinic. Importantly, PDX can be sampled repeatedly and in parallel, to reveal clonal evolution, which may predict mechanisms of drug resistance and inform therapeutic strategy design.
Modeling the effects of contrast enhancement on target acquisition performance
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2008-04-01
Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.
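For context, NVESD TTP-metric models such as NVThermIP convert resolvable cycles on target V into a probability of identification via the target transfer probability function, commonly quoted as P = (V/V50)^E / (1 + (V/V50)^E) with E = 1.51 + 0.24·(V/V50), where V50 encodes the target task difficulty. A minimal sketch:

```python
def p_id(V, V50):
    """Target transfer probability function used in NVESD-style models:
    probability of identification given resolvable cycles V and the
    task-difficulty 50% point V50."""
    E = 1.51 + 0.24 * (V / V50)
    r = (V / V50) ** E
    return r / (1.0 + r)
```

By construction P = 0.5 at V = V50 and the curve rises monotonically with V; contrast enhancement enters such models by changing the effective contrast (and hence V), while the associated blur is folded in as an extra MTF, exactly as the abstract describes.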
Zebrafish: predictive model for targeted cancer therapeutics from nature.
Zulkhernain, Nursafwana Syazwani; Teo, Soo Hwang; Patel, Vyomesh; Tan, Pei Jean
2014-01-01
Targeted therapy, the treatment of cancer based on an underlying genetic alteration, is rapidly gaining favor as the preferred therapeutic approach. To date, although natural products represent a rich resource of bio-diverse drug candidates, only a few have been identified to be effective as targeted cancer therapies largely due to the incompatibilities to current high-throughput screening methods. In this article, we review the utility of a zebrafish developmental screen for bioactive natural product-based compounds that target signaling pathways that are intimately shared with those in humans. Any bioactive compound perturbing signaling pathways identified from phenotypic developmental defects in zebrafish embryos provide an opportunity for developing targeted therapies for human cancers. This model provides a promising tool in the search for targeted cancer therapeutics from natural products. PMID:25348017
Probing transverse momentum dependent parton distributions in charmonium and bottomonium production
NASA Astrophysics Data System (ADS)
Mukherjee, Asmita; Rajesh, Sangem
2016-03-01
We propose the study of unpolarized transverse momentum dependent gluon parton distributions as well as the effect of linearly polarized gluons on transverse momentum and rapidity distributions of J/ψ and ϒ production within the framework of transverse momentum dependent factorization employing a color evaporation model (CEM) in an unpolarized proton-proton collision. We estimate the transverse momentum and rapidity distributions of J/ψ and ϒ at LHCb, RHIC and AFTER energies using the TMD evolution formalism.
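In the color evaporation model, the charmonium cross section is a fixed fraction F of the open charm-pair cross section integrated over pair masses below the open-charm threshold, σ(J/ψ) = F ∫ from 2m_c to 2m_D of dM (dσ_cc̄/dM). A numerical sketch of that integral (the spectrum `toy` and the value of F are invented stand-ins):

```python
# CEM: a fraction F of ccbar pairs produced below the D-Dbar threshold
# hadronize into charmonium.
def cem_cross_section(dsigma_dM, m_c=1.27, m_D=1.87, F=0.02, n=1000):
    """Trapezoidal integration of the pair mass spectrum from 2m_c to 2m_D,
    scaled by the nonperturbative fraction F."""
    lo, hi = 2.0 * m_c, 2.0 * m_D
    h = (hi - lo) / n
    s = 0.5 * (dsigma_dM(lo) + dsigma_dM(hi))
    s += sum(dsigma_dM(lo + i * h) for i in range(1, n))
    return F * h * s

toy = lambda M: 1.0 / M ** 4        # stand-in for the invariant-mass spectrum
sigma = cem_cross_section(toy)
```

In the paper's setting the spectrum itself is built from TMD gluon distributions, which is where the unpolarized and linearly polarized gluon inputs enter.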
Designing Multi-target Compound Libraries with Gaussian Process Models.
Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert
2016-05-01
We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c, MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design. PMID:27492085
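The core of a Gaussian process prioritization step is the posterior mean k*ᵀ(K + σ²I)⁻¹y over a compound descriptor. A self-contained one-dimensional sketch is below; the RBF kernel choice, the 1-D "descriptor", and the activity values are invented, and a real library-design workflow would use molecular descriptors and a tuned kernel.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel on scalar descriptors."""
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(train_x, train_y, query_x, noise=1e-6):
    """GP posterior mean: k_*^T (K + noise*I)^{-1} y."""
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)
    return sum(rbf(query_x, train_x[i]) * alpha[i] for i in range(n))

# Toy "activity" data over a 1-D descriptor; the GP interpolates it.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.1, 0.8, 0.9, 0.2]
pred = gp_mean(xs, ys, 1.5)
```

Ranking a virtual library by such predicted activities (or by an acquisition function that also uses the GP variance) is what "prioritize compounds" amounts to in the abstract.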
Generating target system specifications from a domain model using CLIPS
NASA Technical Reports Server (NTRS)
Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry
1991-01-01
The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).
Target model and simulation for laser imaging fuze
NASA Astrophysics Data System (ADS)
Li, Weiheng; Song, Chengtian
2013-09-01
Image detection is an important direction of fuze development, and laser imaging fuzing is one of the main technologies. This paper studies the simulation of the detection, scanning and imaging process used in a laser imaging fuze against a tank target, and obtains simulated image information for different intersection conditions, including tank spot, distance and power information. The target coordinate system is established from the movement characteristics, physical characteristics and existing coordinate system of the tank target. By transforming missile coordinates into the target coordinate system and tracking the relative motion between successive time intervals, a missile-target model in time and space is built up. The model accounts for the tank target and the diffusion properties of different backgrounds, including desert, soil, vegetation and buildings. From the relation between scattering power and the bidirectional reflectance distribution function, a laser echo power formula is derived that gives the echo incident on each surface from the laser. The laser imaging fuze simulation system, which implements the detection, scanning and imaging process, produces a tank spot picture, a distance gradation picture and a power gradation picture; the latter two contain two-dimensional information, the scanning distance and the echo power values, meeting the expected design goals.
Huston, Joey [Co-Spokesperson]; Owens, Joseph [Co-Spokesperson]
The Coordinated Theoretical-Experimental Project on QCD is a multi-institutional collaboration devoted to a broad program of research projects and cooperative enterprises in high-energy physics centered on Quantum Chromodynamics (QCD) and its implications in all areas of the Standard Model and beyond. The Collaboration consists of theorists and experimentalists at 18 universities and 5 national laboratories. More than 65 sets of Parton Distribution Functions are available for public access. Links to many online software tools, information about Parton Distribution Functions, papers, and other resources are also available.
Rapid SAR target modeling through genetic inheritance mechanism
NASA Astrophysics Data System (ADS)
Bala, Jerzy; Pachowicz, Peter W.; Vafaie, Halleh
1997-07-01
The paper presents a methodology and the GETP experimental system for rapid SAR target signature generation from limited initial sensory data. The methodology exploits and integrates the following four processes: (1) analysis of initial SAR image signatures and their transformation into a higher-level blob representation, (2) blob modeling, (3) genetic inheritance modeling to generate new instances of a target model in blob representation, and (4) synthesis of new SAR signatures from genetically evolved blob data. The GETP system takes several SAR signatures of the target and transforms each signature into a more general scattered blob graph, where each blob represents a local energy cluster. A single graph node is described by the blob's relative position, confidence, and iconic data. Graph data is forwarded to the genetic modeling process while the blob image is stored in a catalog. Genetic inheritance is applied to the initial population of graph data. New graph models of the target are generated and evaluated. Selected graph variations are forwarded to the synthesis process, which restores the target signature from a given graph and the catalog of blobs. The background is synthesized to complement the signature. Initial experimental results are illustrated with 64 × 32 image sections of a tank.
Multiple parton interaction studies at DØ
Lincoln, D.
2016-04-01
Here, we present the results of studies of multiparton interactions done by the DØ collaboration using the Fermilab Tevatron at a center-of-mass energy of 1.96 TeV. We present three analyses, involving three distinct final-state signatures: (a) a photon with at least 3 jets (γ + 3 jets), (b) a photon with a bottom- or charm-quark-tagged jet and at least 2 other jets (γ + b/c + 2 jets), and (c) two J/ψ mesons. The fraction of photon + jet events initiated by double parton scattering is about 20%, while the fraction for events in which two J/ψ mesons were produced is (30 ± 10)%. While the two measurements are statistically compatible, the difference might indicate differences in the quark and gluon distributions within a nucleon. This speculation originates from the fact that photon + jet events are created by collisions with quarks in the initial state, while J/ψ events are produced preferentially by a gluonic initial state.
Performance of binoculars: Berek's model of target detection.
Merlitz, Holger
2015-01-01
A model of target detection thresholds, first presented by Max Berek of Leitz, is fitted into a simple set of closed equations. These are combined with a recently published universal formula for the human eye's pupil size to yield a versatile formalism that is capable of predicting binocular performance gains. The model encompasses target size, contrast, environmental luminance, binocular's objective diameter, magnification, angle of view, transmission, stray light, and the observer's age. We analyze performance parameters of various common binocular models and compare the results with popular approximations to binocular performance, like the well-known twilight index. The formalisms presented here are of interest in military target detection as well as in civil applications such as hunting, surveillance, object security, law enforcement, and astronomy. PMID:26366494
The influence of the target strength model on computed perforation
NASA Astrophysics Data System (ADS)
Reaugh, J. E.
1993-06-01
The authors used an axisymmetric, two-dimensional Eulerian computer simulation program to simulate the penetration of a tungsten rod, with length to diameter ratio L/D = 10, into a thick steel target and the same rod into finite steel plates of thicknesses between 0.9 and 1.3 L. They compare the perforation limit with the semi-infinite penetration depth at the same velocity (the excess thickness) when the model for target strength is constant yield stress and when the model incorporates work hardening and thermal softening. The authors also compare their computed results with available experimental results, which show an excess thickness of about 1 rod diameter.
The influence of the target strength model on computed perforation
NASA Astrophysics Data System (ADS)
Reaugh, John E.
1994-07-01
We used an axisymmetric, two-dimensional Eulerian computer simulation program to simulate the penetration of a tungsten rod with length to diameter ratio L/D = 10 into a thick steel target, and the same rod into finite steel plates of thicknesses between 0.9 and 1.3 L. We compare the perforation limit with the semi-infinite penetration depth at the same velocity (the excess thickness) when the model for target strength is constant yield stress, and when the model incorporates work hardening and thermal softening. We also compare our computed results with available experimental results, which show an excess thickness of about 1 rod diameter.
Precision Modeling Of Targets Using The VALUE Computer Program
NASA Astrophysics Data System (ADS)
Hoffman, George A.; Patton, Ronald; Akerman, Alexander
1989-08-01
The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include: shadows cast onto the ground; shadows cast onto parts of the target; see-through transparencies (e.g., canopies); apparent images due both to atmospheric scattering and turbulence; and surfaces characterized by multiple bidirectional reflectance functions. VALUE provides not only realistic target modeling through its precise and comprehensive representation of all target attributes; it is also very user friendly. Specifically, setup of runs is accomplished by screen-prompting menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.
Modelling hot electron generation in short pulse target heating experiments
NASA Astrophysics Data System (ADS)
Sircombe, N. J.; Hughes, S. J.
2013-11-01
Target heating experiments planned for the Orion laser facility, and electron beam driven fast ignition schemes, rely on the interaction of a short pulse high intensity laser with dense material to generate a flux of energetic electrons. It is essential that the characteristics of this electron source are well known in order to inform transport models in radiation hydrodynamics codes and allow effective evaluation of experimental results and forward modelling of future campaigns. We present results obtained with the particle in cell (PIC) code EPOCH for realistic target and laser parameters, including first and second harmonic light. The hot electron distributions are characterised and their implications for onward transport and target heating are considered with the aid of the Monte-Carlo transport code THOR.
Coins as intermediate targets: reconstructive analysis with synthetic body models.
Thali, Michael J; Kneubuehl, Beat P; Rodriguez, William R; Smirniotopoulos, James G; Richardson, A Charles; Fowler, David; Godwin, Michael; Jurrus, Aaron; Fletcher, Douglas; Mallak, Craig
2009-06-01
The phenomenon of intermediate targets is well known in wound ballistics. In forensic science, models are used to reconstruct injury patterns to answer questions regarding the dynamic formation of these unusual injuries. Soft-tissue substitutes or glycerin soap and ordnance gelatin have been well established. Recently, based on previous experiences with artificial bone, a skull-brain model was developed. The goal of this study was to create and analyze a model-supported reconstruction of a real forensic case with a coin as an intermediate target. It was possible not only to demonstrate the "bullet-coin interaction," but also to recreate the wound pattern found in the victim. This case demonstrates that by using ballistic models, gunshot cases can be reproduced simply and economically, without coming into conflict with ethical guidelines. PMID:19465807
Modeling target acquisition tasks associated with security and surveillance
NASA Astrophysics Data System (ADS)
Vollmerhausen, Richard; Robinson, Aaron L.
2007-07-01
Military sensor applications include tasks such as the surveillance of activity and searching for roadside explosives. These tasks involve identifying and tracking specific objects in a cluttered scene. Unfortunately, the probability of accomplishing these tasks is not predicted by the traditional detect, recognize, and identify (DRI) target acquisition models. The reason why many security and surveillance tasks are functionally different from the traditional DRI tasks is described. Experiments using characters and simple shapes illustrate the problem with using the DRI model to predict the probability of identifying individual objects. The current DRI model is extended to predict specific object identification by including the frequency spectrum content of target contrast. The predictions of the new model match experimental data.
Jimenez-Delgado, P.; Reya, E.
2009-12-01
Based on our recent next-to-next-to-leading order (NNLO) dynamical parton distributions as obtained in the 'fixed flavor number scheme', we radiatively generate parton distributions in the 'variable flavor number scheme', where the heavy-quark flavors (c, b, t) also become massless partons within the nucleon. Only within this latter factorization scheme are NNLO calculations feasible at present, since the required partonic subprocesses are only available in the approximation of massless initial-state partons. The NNLO predictions for gauge boson production are typically larger (by more than 1σ) than the next-to-leading-order (NLO) ones, and rates at LHC energies can be predicted with an accuracy of about 5%, whereas at the Tevatron they are more than 2σ above the NLO ones. The NNLO predictions for standard model Higgs-boson production via the dominant gluon fusion process have a total (parton distribution function and scale) uncertainty of about 10% at the LHC, which almost doubles at the lower Tevatron energies; they are typically about 20% larger than the ones at NLO, but the total uncertainty bands overlap.
Using habitat suitability models to target invasive plant species surveys
Crall, Alycia W.; Jarnevich, Catherine S.; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey
2013-01-01
Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m² resolution for Wisconsin and 1-km² resolution for the western United States to produce our first-iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted-sampling data always performed better than models generated from nontargeted data. For the Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19; χ²(2) = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be
Killing Sections and Sigma Models with Lie Algebroid Targets
NASA Astrophysics Data System (ADS)
Bruce, Andrew James
2016-08-01
We define and examine the notion of a Killing section of a Riemannian Lie algebroid as a natural generalisation of a Killing vector field. We show that the various expressions for a vector field to be Killing naturally generalise to the setting of Lie algebroids. As an application, we examine the internal symmetries of a class of sigma models for which the target space is a Riemannian Lie algebroid. Critical points of these sigma models are interpreted as generalised harmonic maps.
Studies of Multi-Parton Interactions in Photon+Jets Events at D0
Bandurin, Dmitry; /Florida State U.
2011-09-01
We consider a sample of inclusive γ + 3 jet events collected by the D0 experiment. The double parton fraction (f_DP) and the effective cross section σ_eff, a process-independent scale parameter related to the parton density inside the nucleon, are measured in three intervals of the second (p_T-ordered) jet transverse momentum p_T^jet2 within the range 15 ≤ p_T^jet2 ≤ 30 GeV. We also measured cross sections as a function of the angle, in the plane transverse to the beam direction, between the transverse momentum (p_T) of the γ + leading-jet system and the p_T of the other jet for γ + 2 jet events, or the p_T sum of the two other jets for γ + 3 jet events. The results are compared to different models of multiple parton interactions (MPI) in the PYTHIA and SHERPA Monte Carlo (MC) generators.
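The effective cross section σ_eff mentioned above is commonly extracted through the simple "pocket formula" for double parton scattering. The following is a minimal sketch of that arithmetic; all numerical values are illustrative placeholders, not D0 measurements:

```python
# Pocket formula for double parton scattering (distinguishable processes A, B):
#   sigma_DP = sigma_A * sigma_B / sigma_eff
# so sigma_eff can be estimated once sigma_DP is inferred from a measured
# double-parton fraction f_DP. All numbers below are purely illustrative.

def sigma_eff(sigma_a_mb, sigma_b_mb, sigma_dp_mb):
    """Effective cross section (mb) from the pocket formula."""
    return sigma_a_mb * sigma_b_mb / sigma_dp_mb

sigma_gamma_jet = 0.1   # mb, photon+jet cross section (illustrative)
sigma_dijet = 10.0      # mb, dijet cross section (illustrative)
f_dp = 0.2              # double-parton fraction of gamma+3jet events (illustrative)
sigma_total = 0.07      # mb, total gamma+3jet cross section (illustrative)

sigma_dp = f_dp * sigma_total
print(sigma_eff(sigma_gamma_jet, sigma_dijet, sigma_dp))
```

The formula assumes the two hard scatters are uncorrelated, which is exactly the assumption that measurements of σ_eff in different channels test.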
Pathophysiology of gene-targeted mouse models for cystic fibrosis.
Grubb, B R; Boucher, R C
1999-01-01
Pathophysiology of Gene-Targeted Mouse Models for Cystic Fibrosis. Physiol. Rev. 79, Suppl.: S193-S214, 1999. - Mutations in the gene causing the fatal disease cystic fibrosis (CF) result in abnormal transport of several ions across a number of epithelial tissues. In just 3 years after this gene was cloned, the first CF mouse models were generated. The CF mouse models generated to date have provided a wealth of information on the pathophysiology of the disease in a variety of organs. Heterogeneity of disease in the mouse models is due to the variety of gene-targeting strategies used in the generation of the CF mouse models as well as the diversity of the murine genetic background. This paper reviews the pathophysiology in the tissues and organs (gastrointestinal, airway, hepatobiliary, pancreas, reproductive, and salivary tissue) involved in the disease in the various CF mouse models. Marked similarities to and differences from the human disease have been observed in the various murine models. Some of the CF mouse models accurately reflect the ion-transport abnormalities and disease phenotype seen in human CF patients, especially in gastrointestinal tissue. However, alterations in airway ion transport, which lead to the devastating lung disease in CF patients, appear to be largely absent in the CF mouse models. Reasons for these unexpected findings are discussed. This paper also reviews pharmacotherapeutic and gene therapeutic studies in the various mouse models. PMID:9922382
Method calibration of the model 13145 infrared target projectors
NASA Astrophysics Data System (ADS)
Huang, Jianxia; Gao, Yuan; Han, Ying
2014-11-01
The SBIR Model 13145 Infrared Target Projector (hereafter the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector comprises two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IRWindows 2001) evaluates the performance characteristics of the infrared imaging system. To calibrate the unit as a whole, the distributed components are first calibrated individually: the area blackbodies are calibrated according to the calibration specification for area blackbodies, the all-reflective collimator is calibrated by applying error-correction factors, radiance calibration of the infrared target projector is performed using the SR5000 spectral radiometer, and systematic errors are analyzed. Evaluating the parameters of an infrared imaging system requires an integrated evaluation method. Following GJB2340-1995, General specification for military thermal imaging sets, the testing parameters of the infrared imaging system are measured and the results are compared with those from the Optical Calibration Testing Laboratory, with the goal of a true calibration of the Evaluation Unit's performance.
CASP9 assessment of free modeling target predictions.
Kinch, Lisa; Yong Shi, Shuo; Cong, Qian; Cheng, Hua; Liao, Yuxing; Grishin, Nick V
2011-01-01
We present an overview of the ninth round of Critical Assessment of Protein Structure Prediction (CASP9) "Template free modeling" category (FM). Prediction models were evaluated using a combination of established structural and sequence comparison measures and a novel automated method designed to mimic manual inspection by capturing both global and local structural features. These scores were compared to those assigned manually over a diverse subset of target domains. Scores were combined to compare overall performance of participating groups and to estimate rank significance. Moreover, we discuss a few examples of free modeling targets to highlight the progress and bottlenecks of current prediction methods. Notably, a server prediction model for a single target (T0581) improved significantly over the closest structure template (44% GDT increase). This accomplishment represents the "winner" of the CASP9 FM category. A number of human expert groups submitted slight variations of this model, highlighting a trend for human experts to act as "meta predictors" by correctly selecting among models produced by the top-performing automated servers. The details of evaluation are available at http://prodata.swmed.edu/CASP9/ . PMID:21997521
Nonlinear sigma models with compact hyperbolic target spaces
Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; Stoica, Bogdan; Stokes, James
2016-06-23
We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [1, 2]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. In conclusion, the diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.
Nonlinear sigma models with compact hyperbolic target spaces
NASA Astrophysics Data System (ADS)
Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; Stoica, Bogdan; Stokes, James
2016-06-01
We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [1, 2]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.
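The lattice Monte Carlo approach described above can be illustrated on the O(2) (XY) model that the abstract uses as its point of comparison. The sketch below is an ordinary Metropolis simulation of the XY model only, not the hyperbolic-target theory; the lattice size, temperature, proposal width, and sweep count are arbitrary choices:

```python
import numpy as np

# Metropolis sketch for the O(2) (XY) lattice spin model; the hyperbolic-target
# sigma model replaces the circle-valued spin with a point on a genus-2 surface,
# which is beyond this illustration.
rng = np.random.default_rng(0)
L, beta, sweeps = 16, 1.0, 200
theta = rng.uniform(0, 2 * np.pi, size=(L, L))  # spin angles, hot start

def local_energy(th, i, j):
    # Nearest-neighbour XY coupling -cos(theta_i - theta_j), periodic lattice.
    s = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        s -= np.cos(th[i, j] - th[(i + di) % L, (j + dj) % L])
    return s

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old = theta[i, j]
        e_old = local_energy(theta, i, j)
        theta[i, j] = old + rng.normal(scale=0.5)  # propose a small rotation
        if rng.random() >= np.exp(-beta * (local_energy(theta, i, j) - e_old)):
            theta[i, j] = old  # reject

# Mean energy per site (each bond counted twice in the sum, hence the /2).
E = sum(local_energy(theta, i, j) for i in range(L) for j in range(L)) / (2 * L * L)
print(E)
```

Vortex counting and the hyperbolic target geometry would be added on top of this basic update loop.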
Modeling astatine production in liquid lead-bismuth spallation targets
NASA Astrophysics Data System (ADS)
David, J. C.; Boudard, A.; Cugnon, J.; Ghali, S.; Leray, S.; Mancusi, D.; Zanini, L.
2013-03-01
Astatine isotopes can be produced in liquid lead-bismuth eutectic targets through proton-induced double charge exchange reactions on bismuth or in secondary helium-induced interactions. Models implemented in the most common high-energy transport codes generally have difficulty correctly estimating their production yields, as was shown recently by the ISOLDE Collaboration, which measured release rates from a lead-bismuth target irradiated by 1.4 and 1 GeV protons. In this paper, we first study the capability of the new version of the Liège intranuclear cascade model, INCL4.6, coupled to the deexcitation code ABLA07, to predict the different elementary reactions involved in the production of such isotopes through a detailed comparison of the model with the available experimental data from the literature. Although a few remaining deficiencies are identified, very satisfactory results are found, thanks in particular to improvements brought recently to the treatment of low-energy helium-induced reactions. The implementation of the models into MCNPX allows identifying the respective contributions of the different possible reaction channels in the ISOLDE case. Finally, the full simulation of the ISOLDE experiment is performed, taking into account the likely rather long diffusion time from the target, and compared with the measured diffusion rates for the different astatine isotopes, at the two studied energies, 1.4 and 1 GeV. The shape of the isotopic distribution is perfectly reproduced, as well as the absolute release rates, assuming in the calculation a diffusion time between 5 and 10 hours. This work finally shows that our model, thanks to the attention paid to the emission of high-energy clusters and to low-energy cluster-induced reactions, can be safely used within MCNPX to predict isotopes with a charge larger than that of the target by two units in spallation targets, and, probably, more generally to isotopes created in secondary reactions induced by composite
Medium Modifications of Hadron Properties and Partonic Processes
Brooks, W. K.; Strauch, S.; Tsushima, K.
2011-06-01
Chiral symmetry is one of the most fundamental symmetries in QCD. It is closely connected to hadron properties in the nuclear medium via the reduction of the quark condensate
From C to Parton Sea: How Supercomputing Reveals Nucleon Structure
NASA Astrophysics Data System (ADS)
Lin, Huey-Wen
2016-03-01
Studying the structure of nucleons is not only important to understanding the strong interactions of quarks and gluons, but also to improving the precision of new-physics searches. Since a broad class of experiments, including the LHC and dark-matter detection, require interactions with nucleons, the mission to probe femtoscale physics is also essential for disentangling Standard-Model contributions from potential new physics. These SM backgrounds require parton distribution functions (PDFs) as inputs. However, after decades of experiments and theoretical efforts, there still remain many unknowns, especially in the sea flavor structure and transversely polarized structure. In a discrete spacetime, we can make a direct numerical calculation of the implications of QCD using sufficiently large supercomputing resources. A nonperturbative approach from first principles, lattice QCD, provides hope to expand our understanding of nucleon structure, especially in regions that are difficult to observe in experiments. In this work, we present a first direct calculation of the Bjorken-x dependence of the PDFs using Large-Momentum Effective Theory (LaMET) and comment on the surprising result revealed for the nucleon sea-flavor asymmetry. The work of HWL is supported in part by the M. Hildred Blewett Fellowship of the American Physical Society, www.aps.org.
NUCLEAR REACTION MODELING FOR RIA ISOL TARGET DESIGN
S. MASHNIK; ET AL
2001-03-01
Los Alamos scientists are collaborating with researchers at Argonne and Oak Ridge on the development of improved nuclear reaction physics for modeling radionuclide production in ISOL targets. This is being done in the context of the MCNPX simulation code, which is a merger of MCNP and the LAHET intranuclear cascade code, and simulates both nuclear reaction cross sections and radiation transport in the target. The CINDER code is also used to calculate the time-dependent nuclear decays for estimating induced radioactivities. They give an overview of the reaction physics improvements they are addressing, including intranuclear cascade (INC) physics, where recent high-quality inverse-kinematics residue data from GSI have led to INC spallation and fission model improvements; and preequilibrium reactions important in modeling (p,xn) and (p,xnyp) cross sections for the production of nuclides far from stability.
Comparison of measured and modeled BRDF of natural targets
NASA Astrophysics Data System (ADS)
Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier
1999-07-01
The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating or simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has recently been developed by ONERA/DOTA. It was designed to allow both laboratory and outdoor measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain ranges from 0 to 60 degrees for the zenith angle of the source and the sensor, and from 0 to 180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial non-uniformity of the target irradiance has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. Some BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models, among the most popular found in the literature, have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. A comparative evaluation of model performance is carried out against different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise, and interpolation.
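The parameter-fitting and RMSE-scoring step described above can be sketched as follows. The three-parameter Lambertian-plus-Phong-lobe model and the synthetic "measurements" are illustrative assumptions, not one of the eight literature models the paper actually tests:

```python
import numpy as np

# Hedged sketch of BRDF model fitting: adjust the parameters of a simple
# analytic BRDF (Lambertian term plus a Phong-like specular lobe, chosen for
# illustration only) to measured data, then score the fit by RMSE.
rng = np.random.default_rng(1)

def brdf_model(cos_spec, kd, ks, n):
    # kd: diffuse albedo, ks: specular strength, n: lobe sharpness.
    return kd / np.pi + ks * cos_spec ** n

# Synthetic "measurements": known parameters plus noise (purely illustrative).
cos_spec = rng.uniform(0.0, 1.0, 200)          # cosine of the specular angle
meas = brdf_model(cos_spec, kd=0.3, ks=0.2, n=8.0)
meas = meas + rng.normal(scale=0.005, size=meas.shape)

# Brute-force grid fit; a real code would use nonlinear least squares.
best = None
for kd in np.linspace(0.1, 0.5, 21):
    for ks in np.linspace(0.05, 0.4, 21):
        for n in (4.0, 8.0, 16.0):
            rmse = np.sqrt(np.mean((brdf_model(cos_spec, kd, ks, n) - meas) ** 2))
            if best is None or rmse < best[0]:
                best = (rmse, kd, ks, n)
print(best)  # (rmse, kd, ks, n) of the best-fitting parameter set
```

The same RMSE (and its relative-error variant) is the kind of criterion used to rank models; robustness can be probed by refitting with fewer samples or more noise.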
Moving target detection algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Wang, Zhihua; Kai, Du; Zhang, Xiandong
2013-07-01
In a real-time video surveillance system, background noise and disturbances have a significant impact on the detection of moving objects. The traditional Gaussian mixture model (GMM) adapts well to various complex backgrounds, but it converges slowly and is vulnerable to illumination changes. This paper proposes an improved moving target detection algorithm based on the Gaussian mixture model that increases the convergence rate of the foreground-to-background model transformation and introduces the concept of changing factors; a three-frame differencing method solves the problem of abrupt illumination changes. The results show that this algorithm improves the accuracy of moving object detection, with good stability and real-time performance.
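As a simplified stand-in for the full mixture model, the per-pixel background-modeling idea can be sketched with a single running Gaussian per pixel (a GMM keeps several modes per pixel and adds matching/replacement logic). The frame sizes, learning rate, threshold, and synthetic "target" below are illustrative assumptions:

```python
import numpy as np

# Single running Gaussian per pixel: flag pixels far from the background mean,
# adapt the background only where no foreground is detected. Synthetic frames
# stand in for a video stream.
rng = np.random.default_rng(2)
H, W, alpha, k = 32, 32, 0.05, 3.0   # frame size, learning rate, threshold (sigmas)

mean = np.full((H, W), 100.0)        # per-pixel background mean
var = np.full((H, W), 25.0)          # per-pixel background variance

def update_and_detect(frame, mean, var):
    d2 = (frame - mean) ** 2
    fg = d2 > (k ** 2) * var                    # foreground mask
    bg = ~fg
    mean[bg] += alpha * (frame[bg] - mean[bg])  # adapt background pixels only
    var[bg] += alpha * (d2[bg] - var[bg])
    return fg

# Feed background-only frames, then one frame with a bright moving "target".
for _ in range(20):
    update_and_detect(100.0 + rng.normal(scale=3.0, size=(H, W)), mean, var)
frame = 100.0 + rng.normal(scale=3.0, size=(H, W))
frame[10:16, 10:16] += 60.0                     # inserted 6x6 target
mask = update_and_detect(frame, mean, var)
print(mask[10:16, 10:16].mean(), mask.mean())
```

The paper's improvements (faster foreground-to-background conversion, changing factors, three-frame differencing) would sit on top of this basic update/detect loop.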
Brain Arteriovenous Malformation Modeling, Pathogenesis and Novel Therapeutic Targets
Chen, Wanqiu; Choi, Eun-Jung; McDougall, Cameron M.; Su, Hua
2014-01-01
Patients harboring brain arteriovenous malformation (bAVM) are at life-threatening risk of rupture and intracranial hemorrhage (ICH). The pathogenesis of bAVM has not been completely understood. Current treatment options are invasive and ≈ 20% of patients are not offered interventional therapy because of excessive treatment risk. There are no specific medical therapies to treat bAVMs. The lack of validated animal models has been an obstacle for testing hypotheses of bAVM pathogenesis and testing new therapies. In this review, we summarize bAVM model development; and bAVM pathogenesis and potential therapeutic targets that have been identified during model development. PMID:24723256
NASA Astrophysics Data System (ADS)
Karyan, Gevorg
2015-01-01
The HERMES experiment at DESY in Hamburg collected a wealth of semi-inclusive deep-inelastic scattering data using the 27.6 GeV lepton beam and pure gaseous, unpolarised hydrogen and deuterium targets. These data can be used to study transverse-momentum-dependent effects and can provide a check of existing models for transverse-momentum-dependent parton distribution and fragmentation functions.
Multiscale Modeling of Functionalized Nanocarriers in Targeted Drug Delivery
Liu, Jin; Bradley, Ryan; Eckmann, David M.; Ayyaswamy, Portonovo S.; Radhakrishnan, Ravi
2011-01-01
Targeted drug delivery using functionalized nanocarriers (NCs) is a strategy in therapeutic and diagnostic applications. In this paper we review the recent development of models at multiple length and time scales and their applications to targeting of antibody-functionalized nanocarriers to antigens (receptors) on the endothelial cell (EC) surface. Our mesoscale (100 nm-1 μm) model is based on phenomenological interaction potentials for receptor-ligand interactions, receptor flexure, and the resistance offered by the glycocalyx. All free parameters are either directly determined from independent biophysical and cell biology experiments or estimated using molecular dynamics simulations. We employ a Metropolis Monte Carlo (MC) strategy in conjunction with the weighted histogram analysis method (WHAM) to compute the free energy landscape (potential of mean force, or PMF) associated with the multivalent antigen-antibody interactions mediating the NC binding to the EC. The binding affinities (association constants) are then derived from the PMF by computing the absolute binding free energy of the NC to the EC, taking into account the relevant translational and rotational entropy losses of the NC and the receptors. We validate our model predictions by comparing the computed binding affinities and PMF to a wide range of experimental measurements, including in vitro cell culture, in vivo endothelial targeting, atomic force microscopy (AFM), and flow chamber experiments. The model predictions agree closely and quantitatively with all types of experimental measurements. On this basis, we conclude that our computational protocol represents a quantitative and predictive approach for model-driven design and optimization of functionalized NCs in targeted vascular drug delivery. PMID:22116782
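The Metropolis-sampling and PMF-estimation idea used above can be illustrated on a toy one-dimensional potential: sample a coordinate from exp(-U/kT), histogram it, and recover the PMF as -kT ln p. This sketch omits WHAM (which combines biased sampling windows) and all receptor-ligand physics; the harmonic potential is an arbitrary stand-in:

```python
import numpy as np

# Toy PMF estimator: Metropolis sampling of a 1D coordinate, then
# PMF(x) = -kT * ln(histogram density), shifted so the minimum is zero.
rng = np.random.default_rng(3)
kT = 1.0
U = lambda x: 0.5 * x ** 2          # harmonic "binding" potential (illustrative)

x, samples = 0.0, []
for step in range(200000):
    xp = x + rng.normal(scale=0.5)  # propose a small displacement
    if rng.random() < np.exp(-(U(xp) - U(x)) / kT):
        x = xp                      # Metropolis accept
    if step > 1000:                 # discard a short equilibration period
        samples.append(x)

hist, edges = np.histogram(samples, bins=41, range=(-3, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pmf = -kT * np.log(hist)
pmf -= pmf.min()                    # set the PMF minimum to zero
# Near the minimum the estimated PMF should track U(x) = x^2 / 2.
print(pmf[20], U(centers[20]))
```

In the actual protocol the histogram comes from many biased windows recombined by WHAM, and the binding constant follows from integrating exp(-PMF/kT) over the bound region.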
Delineating the polarized and unpolarized partonic structure of the nucleon
Jimenez-Delgado, Pedro
2015-03-01
Our latest results on the extraction of parton distribution functions of the nucleon are reported. First an overview of the recent JR14 upgrade of our unpolarized PDFs, including NNLO determinations of the strong coupling constant and a discussion of the role of the input scale in parton distribution analysis. In the second part of the talk recent results on the determination of spin-dependent PDFs from the JAM collaboration are given, including a careful treatment of hadronic and nuclear corrections, as well as results on the impact of present and future data in our understanding of the spin of the nucleon.
The role of the input scale in parton distribution analyses
Pedro Jimenez-Delgado
2012-08-01
A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on this choice, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.
The influence of the target strength model on computed perforation
Reaugh, J.E.
1993-06-01
The authors used an axi-symmetric, two-dimensional Eulerian computer simulation program to simulate the penetration of a tungsten rod with length to diameter ratio L/D = 10 into a thick steel target, and the same rod into finite steel plates of thicknesses between 0.9 and 1.3 L. They compare the perforation limit with the semi-infinite penetration depth at the same velocity (the excess thickness) when the model for target strength is constant yield stress, and when the model incorporates work hardening and thermal softening. The authors also compare their computed results with available experimental results, which show an excess thickness of about 1 rod diameter.
NASA Astrophysics Data System (ADS)
Bergoen, Halkan
2002-09-01
The subject of this thesis is to model and verify the correctness of the architecture of the Digital Image Synthesizer (DIS). The DIS, a system-on-a-chip, is especially useful as a counter-targeting repeater: it synthesizes the characteristic echo signature of a pre-selected target. The VHDL (VHSIC (Very High Speed Integrated Circuit) Hardware Description Language) description of the DIS architecture was exported from Tanner S-Edit, modified, and simulated. Different software-oriented verification approaches were researched, and a white-box approach to functional verification was adopted. An algorithm based on the hardware functionality was developed to compare expected and simulated results. Initially, the architecture of one Range Bin Modulator was exported. Modifications to the VHDL source code included modeling the behavior of the N-FET and P-FET (Negative and Positive Channel Field Effect Transistor) transistors as well as Ground and Vdd (the voltages connected to the drains of the FETs). They also included renaming entities to comply with VHDL naming conventions. Simulation results were compared to manual calculations and Matlab programs to verify the architecture. The procedure was repeated for the architecture of an Eight-Range Bin Modulator with equally successful results. VHDL was then used to create a super class of a 32-Range Bin Modulator. Test vectors developed in Matlab were used to once again verify correct functionality.
Recent progress in the statistical approach of parton distributions
Soffer, Jacques
2011-07-15
We recall the physical features of the parton distributions in the quantum statistical approach to the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider the extension of these distributions to include their transverse momentum dependence.
Global parton distributions for the LHC Run II
NASA Astrophysics Data System (ADS)
Ball, R. D.
2016-07-01
We review the next generation global PDF sets: NNPDF3.0, MMHT14 and CT14. We describe the global datasets, particularly the new data from LHC Run I, the developments in QCD theory and PDF methodology, recent improvements in their combination and delivery, and future prospects for parton determination at Run II.
PARTON SATURATION, PRODUCTION, AND EQUILIBRATION IN HIGH ENERGY NUCLEAR COLLISIONS
VENUGOPALAN,R.
1999-03-20
Deeply inelastic scattering of electrons off nuclei can determine whether parton distributions saturate at HERA energies. If so, this phenomenon will also tell us a great deal about how particles are produced, and whether they equilibrate, in high energy nuclear collisions.
Optical model analyses of heavy ion fragmentation in hydrogen targets
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.
1994-01-01
Quantum-mechanical optical-model methods for calculating cross sections for the fragmentation of high-energy heavy ions by hydrogen targets are presented. The cross sections are calculated with a knockout-ablation collision formalism which has no arbitrary fitting parameters. Predictions of elemental production cross sections from the fragmentation of 1.2A GeV La-139 nuclei and of isotope production cross sections from the fragmentation of 400A MeV S-32 nuclei are in good agreement with recently reported experimental measurements.
Matching next-to-leading order predictions to parton showers in supersymmetric QCD
Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; Proudom, Josselin; Shao, Hua-Sheng
2016-02-03
We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.
NASA Astrophysics Data System (ADS)
Maltoni, Fabio; Mawatari, Kentarou; Zaro, Marco
2014-01-01
Vector-boson fusion and associated production at the LHC can provide key information on the strength and structure of the Higgs couplings to the Standard Model particles. Using an effective field theory approach, we study the effects of next-to-leading order (NLO) QCD corrections matched to a parton shower on selected observables for various spin-0 hypotheses. We find that inclusion of NLO corrections is needed to reduce the theoretical uncertainties on the total rates as well as to reliably predict the shapes of the distributions. Our results are obtained in a fully automatic way via FeynRules and MadGraph5_aMC@NLO.
VBFNLO: A parton level Monte Carlo for processes with electroweak bosons
NASA Astrophysics Data System (ADS)
Arnold, K.; Bähr, M.; Bozzi, G.; Campanario, F.; Englert, C.; Figy, T.; Greiner, N.; Hackstein, C.; Hankele, V.; Jäger, B.; Klämke, G.; Kubocz, M.; Oleari, C.; Plätzer, S.; Prestel, S.; Worek, M.; Zeppenfeld, D.
2009-09-01
VBFNLO is a fully flexible parton level Monte Carlo program for the simulation of vector boson fusion, double and triple vector boson production in hadronic collisions at next-to-leading order in the strong coupling constant. VBFNLO includes Higgs and vector boson decays with full spin correlations and all off-shell effects. In addition, VBFNLO implements CP-even and CP-odd Higgs boson production via gluon fusion, in association with two jets, at the leading-order one-loop level with the full top- and bottom-quark mass dependence in a generic two-Higgs-doublet model. A variety of effects arising from beyond the Standard Model physics are implemented for selected processes. These include anomalous couplings of Higgs and vector bosons and a warped Higgsless extra dimension model. The program offers the possibility to generate Les Houches Accord event files for all processes available at leading order.
Program summary:
Program title: VBFNLO
Catalogue identifier: AEDO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL version 2
No. of lines in distributed program, including test data, etc.: 339 218
No. of bytes in distributed program, including test data, etc.: 2 620 847
Distribution format: tar.gz
Programming language: Fortran, parts in C++
Computer: All
Operating system: Linux, should also work on other systems
Classification: 11.1, 11.2
External routines: Optionally Les Houches Accord PDF Interface library and the GNU Scientific Library
Nature of problem: To resolve the large scale dependence inherent in leading order calculations and to quantify the cross section error induced by uncertainties in the determination of parton distribution functions, it is necessary to include NLO corrections. Moreover, whenever stringent cuts are required on decay products and/or identified jets, the question arises whether the scale dependence and a k-factor, defined
DISSECTING OCD CIRCUITS: FROM ANIMAL MODELS TO TARGETED TREATMENTS.
Ahmari, Susanne E; Dougherty, Darin D
2015-08-01
Obsessive-compulsive disorder (OCD) is a chronic, severe mental illness with up to 2-3% prevalence worldwide. In fact, OCD has been classified as one of the world's 10 leading causes of illness-related disability by the World Health Organization, largely because of the chronic nature of disabling symptoms [1]. Despite the severity and high prevalence of this chronic and disabling disorder, there is still relatively limited understanding of its pathophysiology. However, this is now rapidly changing due to the development of powerful technologies that can be used to dissect the neural circuits underlying pathologic behaviors. In this article, we describe recent technical advances that have allowed neuroscientists to start identifying the circuits underlying complex repetitive behaviors using animal model systems. In addition, we review current surgical and stimulation-based treatments for OCD that target circuit dysfunction. Finally, we discuss how findings from animal models may be applied in the clinical arena to help inform and refine targeted brain stimulation-based treatment approaches. PMID:25952989
Guzey, V.; Teckentrup, T.
2006-09-01
We develop the minimal model of a new leading order parametrization of generalized parton distributions (GPDs) introduced by Polyakov and Shuvaev. The model for GPDs H and E is formulated in terms of the forward quark distributions, the Gegenbauer moments of the D-term, and the forward limit of the GPD E. The model is designed primarily for small and medium-size values of x_B, x_B ≤ 0.2. We examine two different models of the t dependence of the GPDs: the factorized exponential model and the nonfactorized Regge-motivated model. Using our model, we successfully described the deeply virtual Compton scattering (DVCS) cross section measured by H1 and ZEUS, the moments of the beam-spin A_LU^{sin φ}, the beam-charge A_C^{cos φ}, and the transversely polarized target A_UT^{sin φ cos φ} DVCS asymmetries measured by HERMES, and A_LU^{sin φ} measured by CLAS. The data on A_C^{cos φ} prefer the Regge-motivated model of the t dependence of the GPDs. The data on A_UT^{sin φ cos φ} indicate that the u and d quarks carry only a small fraction of the proton total angular momentum.
Target space pseudoduality in supersymmetric sigma models on symmetric spaces
NASA Astrophysics Data System (ADS)
Sarisaman, Mustafa
We discuss target space pseudoduality in supersymmetric sigma models on symmetric spaces. We first consider the case of sigma models based on real compact connected Lie groups of the same dimensionality and give examples using three-dimensional models on target spaces. We show the explicit construction of nonlocal conserved currents on the pseudodual manifold. We then switch from the Lie-group-valued pseudoduality equations to Lie-algebra-valued ones, which leads to an infinite number of pseudoduality equations. We obtain an infinite number of conserved currents on the tangent bundle of the pseudodual manifold. Since pseudoduality imposes the condition that sigma models pseudodual to each other are based on symmetric spaces with opposite curvatures (i.e. dual symmetric spaces), we investigate the pseudoduality transformation on symmetric space sigma models in the third chapter. We see that the decomposed spaces can mix with each other, which leads to mixed forms of the expressions that follow. We obtain the pseudodual conserved currents, which are viewed as the orthonormal frame on the pullback bundle of the tangent space of G̃, the Lie group on which the pseudodual model is based. Hence we obtain the mixed forms of the curvature relations and of the one-loop renormalization group beta function by means of these currents. In chapter four, we generalize the classical construction of the pseudoduality transformation to the supersymmetric case. We perform this both by the component expansion method on the manifold M and by the orthonormal coframe method on the manifold SO(M). The component method produces the result that the pseudoduality transformation is not invertible at all points and maps all points of one manifold to only the single point where Riemann normal coordinates are valid on the second manifold. The torsion of the sigma model on M must vanish while it is nonvanishing on M̃, and the curvatures of the manifolds must be constant and the same because of the anticommuting Grassmann numbers. We obtain
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses. PMID:21138291
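The probabilistic-attainment idea in the abstract above can be sketched with a minimal Monte Carlo example. This is not the paper's reduced-form photochemical model: the lognormal ozone-to-NOx sensitivity, the residual error term, the target value, and every other number are invented placeholders used only to show how uncertain inputs translate into a likelihood of meeting a target.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reduced-form response of an ozone design value to a NOx cut,
# linearized around the base case (all numbers are illustrative).
base_ozone_ppb = 87.0   # modeled future design value without extra controls
target_ppb = 84.0       # hypothetical attainment target
emission_cut = 0.30     # 30% NOx reduction under the proposed strategy

n = 100_000
# Uncertain sensitivity of ozone to the NOx cut (ppb per unit fractional cut),
# standing in for uncertain emission rates, reaction rates, boundary conditions.
sensitivity = rng.lognormal(mean=np.log(12.0), sigma=0.35, size=n)
# Residual model/meteorology error on the projected design value
residual = rng.normal(0.0, 1.5, size=n)

projected = base_ozone_ppb - sensitivity * emission_cut + residual
# Likelihood of attainment = fraction of parameter draws meeting the target
p_attain = float(np.mean(projected <= target_ppb))
```

A deterministic bright-line test would report a single yes/no from the central estimate; the Monte Carlo version instead yields a probability, which is what allows rankings of control strategies to differ between the two analyses.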
Generalized parton distributions and rapidity gap survival in exclusive diffractive pp scattering
Frankfurt, L.; Hyde, C. E.; Strikman, M.; Weiss, C.
2007-03-01
We study rapidity gap survival (RGS) in the production of high-mass systems (H=dijet, heavy quarkonium, Higgs boson) in double-gap exclusive diffractive pp scattering, pp{yields}p+(gap)+H+(gap)+p. Our approach is based on the idea that hard and soft interactions are approximately independent because they proceed over widely different time and distance scales. We implement this idea in a partonic description of proton structure, which allows for a model-independent treatment of the interplay of hard and soft interactions. The high-mass system is produced in a hard scattering process with exchange of two gluons between the protons, whose amplitude is calculable in terms of the gluon generalized parton distribution (GPD), measured in exclusive ep scattering. The hard scattering process is modified by soft spectator interactions, which we calculate neglecting correlations between hard and soft interactions (independent interaction approximation). We obtain an analytic expression for the RGS probability in terms of the phenomenological pp elastic scattering amplitude, without reference to the eikonal approximation. Contributions from inelastic intermediate states are suppressed. The onset of the black-disk limit in pp scattering at TeV energies strongly suppresses diffraction at small impact parameters and is the main factor in determining the RGS probability. Correlations between hard and soft interactions (e.g. due to scattering from the long-range pion field of the proton or due to possible short-range transverse correlations between partons) further decrease the RGS probability. We also investigate the dependence of the diffractive cross section on the transverse momenta of the final-state protons ('diffraction pattern'). By measuring this dependence one can perform detailed tests of the interplay of hard and soft interactions and even extract information about the gluon GPD in the proton. Such studies appear to be feasible with the planned forward detectors at the
Glueck, M.; Reya, E.; Pisano, C.
2008-04-01
Recent measurements for F_2(x,Q^2) have been analyzed in terms of the 'dynamical' and 'standard' parton model approaches at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) of perturbative QCD. Having fixed the relevant NLO and NNLO parton distributions, we present the implications and predictions for the longitudinal structure function F_L(x,Q^2). It is shown that the previously noted extreme perturbative NNLO/NLO instability of F_L(x,Q^2) is an artifact of the commonly utilized 'standard' gluon distributions. In particular it is demonstrated that, using the appropriate dynamically generated parton distributions at NLO and NNLO, F_L(x,Q^2) turns out to be perturbatively rather stable already for Q^2 ≥ O(2-3 GeV^2).
Nuclear Parton Distributions with the LHeC
NASA Astrophysics Data System (ADS)
Klein, Max
2016-03-01
Nuclear parton distributions are far from being known today because of an infant experimental base. Based on design studies of the LHeC and using new simulations of the inclusive neutral- and charged-current cross section measurements and of the strange, charm and beauty densities in nuclei, it is demonstrated how this energy-frontier electron-ion collider would unfold the complete set of nuclear PDFs in a hugely extended kinematic range of deep inelastic scattering, extending in Bjorken x down to values near 10^-6 in the perturbative domain. Together with a very precise and complete set of proton PDFs, the LHeC nPDFs will thoroughly change the theoretical understanding of parton dynamics and structure inside hadrons.
Small-x parton content of the photon
NASA Astrophysics Data System (ADS)
Forshaw, J. R.; Harriman, P. N.
1992-11-01
The parton content of the photon is examined systematically. In the low-Q2 region (Q2<1 GeV2) we assume the photon behaves largely according to the vector-meson-dominance (VMD) hypothesis, while for higher Q2 we supplement the VMD with a purely perturbative pointlike component and QCD corrections. Special care is taken in the region of small x: we work in the leading-logarithm approximation (LLA) in 1/x at low Q2 and in the LLA in Q2 for Q2>4 GeV2. We include small-x shadowing effects in the VMD sector and show that they need not be considered in the pointlike sector where the parton content is much less.
Transverse momentum dependent (TMD) parton distribution functions: Status and prospects*
Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; Boer, D.; Boglione, M.; Boussarie, R.; Ceccopieri, F. A.; Cherednikov, I. O.; Connor, P.; Echevarria, M. G.; Ferrera, G.; Grados Luyando, J.; Hautmann, F.; Jung, H.; Kasemets, T.; Kutak, K.; Lansberg, J. P.; Lykasov, G.; Madrigal Martinez, J. D.; Mulders, P. J.; Nocera, E. R.; Petreska, E.; Pisano, C.; Placakyte, R.; Radescu, V.; Radici, M.; Schnell, G.; Signori, A.; Szymanowski, L.; Taheri Monfared, S.; Van der Veken, F. F.; van Haevermaet, H. J.; Van Mechelen, P.; Vladimirov, A. A.; Wallon, S.
2015-01-01
In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum qT spectra of Higgs and vector bosons for low qT, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDLIB, to parton density fits and parameterizations.
Moments of nucleon spin-dependent generalized parton distributions
Wolfram Schroers; Richard Brower; Patrick Dreher; Robert Edwards; George Fleming; P. Hagler; Urs Heller; Thomas Lippert; John Negele; Andrew Pochinsky; Dru Renner; David Richards; Klaus Schilling
2004-03-01
We present a lattice measurement of the first two moments of the spin-dependent GPD H-tilde(x,xi,t). From these we obtain the axial coupling constant and the second moment of the spin-dependent forward parton distribution. The measurements are done in full QCD using Wilson fermions. In addition, we also present results from a first exploratory study of full QCD using Asqtad sea and domain-wall valence fermions.
Leading Twist Parton Distribution Amplitudes in Heavy Vector Mesons
NASA Astrophysics Data System (ADS)
Gao, Fei; Ding, Minghui; Chang, Lei; Liu, Yu-Xin; Roberts, Craig D.
2016-03-01
We employed QCD's Dyson-Schwinger equations (DSEs) for heavy quarks and obtained the leading-twist parton distribution amplitudes (PDAs) of the heavy vector mesons J/ψ and ϒ. We found that all of the amplitudes are narrower than the asymptotic form, while they deviate from a δ function. This indicates that the interaction between the two constituent quarks remains important in mesons composed of charm and bottom quarks.
Deeply Pseudoscalar Meson Electroproduction with CLAS and Generalized Parton Distributions
Guidal, Michel; Kubarovsky, Valery P.
2015-06-01
We discuss the recent data on exclusive $\pi^0$ (and $\pi^+$) electroproduction on the proton obtained by the CLAS collaboration at Jefferson Lab. It is observed that the cross sections, which have been decomposed into the $\sigma_T+\epsilon\sigma_L$, $\sigma_{TT}$ and $\sigma_{LT}$ structure functions, are dominated by transverse amplitude contributions. The data can be interpreted in the Generalized Parton Distribution formalism provided that one includes helicity-flip transversity GPDs.
Informing Pedagogy Through the Brain-Targeted Teaching Model
Hardiman, Mariale
2012-01-01
Improving teaching to foster creative thinking and problem-solving for students of all ages will require two essential changes in current educational practice. First, to allow more time for deeper engagement with material, it is critical to reduce the vast number of topics often required in many courses. Second, and perhaps more challenging, is the alignment of pedagogy with recent research on cognition and learning. With a growing focus on the use of research to inform teaching practices, educators need a pedagogical framework that helps them interpret and apply research findings. This article describes the Brain-Targeted Teaching Model, a scheme that relates six distinct aspects of instruction to research from the neuro- and cognitive sciences. PMID:23653775
Target detection in hyperspectral imagery using forward modeling and in-scene information
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Friman, Ola; Haavardsholm, Trym Vegard; Renhorn, Ingmar
2016-09-01
This work addresses the problem of detecting and classifying materials and targets in hyperspectral images based on their reflectance spectrum. Accurate target detection in hyperspectral imagery requires a radiative transfer model that maps between the spectral reflectance domain and the measured radiance domain. Such a model can be employed in two ways for detection - using atmospheric compensation, where the measured hyperspectral radiance image is converted to a reflectance image, and using forward modeling, where the target reflectance spectrum is converted to an at-sensor target radiance spectrum. This work presents a forward modeling detection method that utilizes in-scene information to estimate the parameters in the radiative transfer model. Uncertainty in the radiative transfer model and variability of the target spectra are captured using a constrained subspace model for the target. Target detection using library spectra and target rediscovery are evaluated in hyperspectral images of a complex urban scene.
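The forward-modeling detection route described above can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's method: a two-parameter gain/offset radiative transfer model is estimated from two hypothetical in-scene calibration panels (empirical-line style), a library target reflectance is forward-modeled to at-sensor radiance, and a plain matched filter replaces the paper's constrained subspace detector; all spectra and parameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, npix = 30, 5000

# Synthetic scene: per-band atmospheric gain and path-radiance offset,
# smooth background and target reflectance spectra (all invented).
gain_true = rng.uniform(0.8, 1.2, bands)
offset_true = rng.uniform(0.05, 0.10, bands)
wl = np.linspace(0.0, 3.0, bands)
r_bg = 0.3 + 0.1 * np.sin(wl)
r_tgt = 0.3 + 0.1 * np.cos(wl)

refl = r_bg[:, None] + 0.02 * rng.standard_normal((bands, npix))
refl[:, :10] = r_tgt[:, None] + 0.02 * rng.standard_normal((bands, 10))  # 10 target pixels
L = gain_true[:, None] * refl + offset_true[:, None] \
    + 0.005 * rng.standard_normal((bands, npix))  # measured radiance cube (flattened)

# In-scene information: two calibration panels of known reflectance give the
# per-band gain/offset of the radiative transfer model (empirical line method).
r1, r2 = 0.05, 0.50
L1 = gain_true * r1 + offset_true
L2 = gain_true * r2 + offset_true
gain_est = (L2 - L1) / (r2 - r1)
offset_est = L1 - gain_est * r1

# Forward-model the library target reflectance to an at-sensor radiance spectrum
t = gain_est * r_tgt + offset_est

# Matched filter in the radiance domain (unit response to an exact target pixel)
mu = L.mean(axis=1)
C = np.cov(L)
Cinv = np.linalg.inv(C + 1e-8 * np.eye(bands))
d = t - mu
w = Cinv @ d / (d @ Cinv @ d)
scores = w @ (L - mu[:, None])
```

The alternative route mentioned in the abstract, atmospheric compensation, would invert the same gain/offset model to convert the radiance cube to reflectance and match there instead.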
High-resolution ground target infrared signature modeling for combat target identification training
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.
2003-09-01
Recent world events have accelerated the evolution of the US military from monolithic formations arrayed against a known enemy to a force that must respond to rapidly changing world events. New technologies are part of the Army's evolution, and thermal imaging sensors are becoming more and more prevalent on the modern battlefield. These sensors are integrated into advanced weapon systems or commonly used for battlefield surveillance. Thermal imaging systems give the soldier the ability to deliver deadly force onto an enemy at long range at any time of day or night. The ability to differentiate friendly and threat forces in this situation is critical for the avoidance of friendly-fire incidents and for the proper use of battlefield resources. The location of the Army's next battlefield is becoming more difficult to foresee from year to year. Infrared target recognition training tools need to be flexible and adaptable, based not only on the latest intelligence data but also offering geographically specific training to the soldier. To address this training issue, personnel of the Measurement and Signatures Division at the National Ground Intelligence Center have created the Simulated Infrared Earth Environment Lab (SIREEL) web site. The SIREEL web site contains extensive infrared signature data on numerous threat and friendly vehicles, and the site is designed to provide country-specific vehicle identification training in support of US military deployments. The bulk of the content currently on the site consists of infrared signature data collected over a decade of intelligence gathering. The site also employs state-of-the-art infrared signature modeling capabilities to provide the soldier in training the most flexible training possible. If measured data on a vehicle is not available, the website developers have the capability to calculate the infrared signature of ground vehicles in any location
Cellular communication and “non-targeted effects”: Modelling approaches
NASA Astrophysics Data System (ADS)
Ballarini, Francesca; Facoetti, Angelica; Mariotti, Luca; Nano, Rosanna; Ottolenghi, Andrea
2009-10-01
During the last decade, a large number of experimental studies on the so-called "non-targeted effects", in particular bystander effects, outlined that cellular communication plays a significant role in the pathways leading to radiobiological damage. Although it is known that two main types of cellular communication (i.e. via gap junctions and/or molecular messengers diffusing in the extra-cellular environment, such as cytokines, NO etc.) play a major role, it is of utmost importance to better understand the underlying mechanisms, and how such mechanisms can be modulated by ionizing radiation. Though the "final" goal is of course to elucidate the in vivo scenario, in the meantime in vitro studies can also provide useful insights. In the present paper we will discuss key issues on the mechanisms underlying non-targeted effects and cell communication, for which theoretical models and simulation codes can be of great help. In this framework, we will present in detail three literature models, as well as an approach under development at the University of Pavia. More specifically, we will first focus on a version of the "State-Vector Model" including bystander-induced apoptosis of initiated cells, which was successfully fitted to in vitro data on neoplastic transformation, supporting the hypothesis of a protective bystander effect mediated by apoptosis. The second analyzed model, focusing on the kinetics of bystander effects in 3D tissues, was successfully fitted to data on bystander damage in an artificial 3D skin system, indicating a signal range of the order of 0.7-1 mm. A third model of the bystander effect, taking into account spatial location, cell killing and repopulation, showed dose-response curves increasing approximately linearly at low dose rates but quickly flattening out for higher dose rates, also predicting an effect augmentation following dose fractionation. Concerning the Pavia approach, which can model the release, diffusion and depletion/degradation of
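The millimetre-scale signal range quoted above invites a back-of-envelope check: for a messenger that diffuses with coefficient D and is degraded at a first-order rate k, the steady-state concentration falls off over a characteristic length λ = √(D/k). A minimal sketch (the values of D and k below are illustrative assumptions, not parameters of any of the cited models):

```python
import math

def signal_range(D, k):
    """Characteristic range (metres) of a diffusing signal degraded at
    first-order rate k: lambda = sqrt(D / k)."""
    return math.sqrt(D / k)

def steady_profile(x, D, k, c0=1.0):
    """1D steady-state concentration c(x) = c0 * exp(-x / lambda)."""
    return c0 * math.exp(-x / signal_range(D, k))

# Illustrative numbers: a cytokine-like diffusivity D ~ 1e-10 m^2/s and a
# degradation rate k ~ 1e-4 1/s give a range of order one millimetre,
# the same order as the 0.7-1 mm signal range quoted for the 3D skin data.
lam = signal_range(1e-10, 1e-4)  # metres
```

The point of the sketch is only that the ratio of diffusion to degradation sets the spatial scale of bystander signalling; any mechanistic model must effectively fix this ratio.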
CARROLL,J.
1999-09-10
The RIKEN-BNL center workshop on ''Hard parton physics in high energy nuclear collisions'' was held at BNL from March 1-5, 1999. The focus of the workshop was on hard probes of nucleus-nucleus collisions that will be measured at RHIC with the PHENIX and STAR detectors. There were about 45 speakers and over 70 registered participants at the workshop, with roughly a quarter of the speakers from overseas. About 60% of the talks were theory talks. A nice overview of theory for RHIC was provided by George Sterman. The theoretical talks were on a wide range of topics in QCD which can be classified under the following: (a) energy loss and the Landau-Pomeranchuk-Migdal effect; (b) minijet production and equilibration; (c) small x physics and initial conditions; (d) nuclear parton distributions and shadowing; (e) spin physics; (f) photon, di-lepton, and charm production; and (g) hadronization, and simulations of high pt physics in event generators. Several of the experimental talks discussed the capabilities of the PHENIX and STAR detectors at RHIC in measuring high pt particles in heavy ion collisions. In general, these talks were included in the relevant theory sessions. A session was set aside to discuss the spin program at RHIC with polarized proton beams. In addition, there were speakers from D0, HERA, the fixed target experiments at Fermilab, and the CERN fixed target Pb+Pb program, who provided additional perspective on a range of issues of relevance to RHIC; from jets at the Tevatron, to saturation of parton distributions at HERA, and recent puzzling data on direct photon production in fixed target experiments, among others.
Wee partons in large nuclei: From virtual dream to hard reality
Venugopalan, R.
1995-06-01
We construct a weak coupling, many body theory to compute parton distributions in large nuclei for x ≪ A^{-1/3}. The wee partons are highly coherent, non-Abelian Weizsaecker-Williams fields. Radiative corrections to the classical results are discussed. The parton distributions for a single nucleus provide the initial conditions for the dynamical evolution of matter formed in ultrarelativistic nuclear collisions.
NASA Astrophysics Data System (ADS)
Seng, Chien-Yeah; Ramsey-Musolf, Michael J.
2013-07-01
We study the effect of parton angular momentum on the twist-four correction to the left-right asymmetry in electron-deuteron parity-violating deep-inelastic scattering (PVDIS). We show that this higher-twist correction is sensitive to the dynamics of parton angular momentum needed to account for the Sivers and Boer-Mulders functions and spin-independent parton distribution functions. A sufficiently precise measurement of the PVDIS asymmetry may thus provide additional information about the parton dynamics responsible for nucleon spin.
Selected topics on parton distribution functions
Hirai, M.; Saito, K.; Kawamura, H.; Kumano, S.
2011-12-14
We report recent studies on structure functions of the nucleon and nuclei. First, clustering effects are investigated in the structure function F_2 of ^9Be for explaining an unusual nuclear correction found in a JLab experiment. Using the antisymmetrized molecular dynamics (AMD), we propose that the high densities created by the formation of clustering structure, such as 2α+neutron in ^9Be, are the origin of the unexpected JLab result. There is an approved proposal at JLab to investigate the structure functions of light nuclei including cluster structure, so that many details will become clear in a few years. Second, tensor-polarized quark and antiquark distributions are obtained by analyzing HERMES measurements on the structure function b_1 for the deuteron. The result suggests a finite tensor polarization for antiquark distributions, which is an interesting topic for further theoretical and experimental investigations. An experimental proposal exists at JLab for measuring b_1 of the deuteron as a new tensor-structure study in the 2010s. Furthermore, the antiquark tensor polarization could be measured in polarized deuteron Drell-Yan processes at hadron facilities such as J-PARC and GSI-FAIR. Third, the recent CDF dijet anomaly is investigated within the standard model by considering possible modifications of the strange-quark distribution. We find that the shape of the dijet-mass spectrum changes depending on the strange-quark distribution. This indicates that the CDF excess could be partially explained as a PDF effect, particularly by the strangeness in the nucleon, within the standard model, if the excess at m_jj ≈ 140 GeV is not a sharp peak.
Modeling to Support the Development of Habitat Targets for Piping Plovers on the Missouri River
Buenau, Kate E.
2015-05-05
Report on modeling and analyses done in support of developing quantitative sandbar habitat targets for piping plovers, including assessment of reference, historical, and dams-present-but-not-operated scenarios, and of habitat construction calibrated to meet population viability targets.
Generalized Parton distributions with CLAS and CLAS12
S. Niccolai
2012-04-01
Recent promising results obtained with the Jefferson Lab CLAS detector on deeply virtual exclusive processes and their link to the Generalized Parton Distributions, along with the experimental program to study GPDs at the 12-GeV upgraded JLab using the CLAS12 detector, are discussed here. With its wide acceptance, high luminosity, good resolution and particle-identification capabilities, as well as large Q2 and xB coverage (1 GeV2 < Q2 < 10 GeV2, 0.1 < xB < 0.8), CLAS12 will be the ideal facility to pursue research on the three-dimensional structure of the nucleon in the valence region.
Parton-hadron matter in and out of equilibrium
NASA Astrophysics Data System (ADS)
Bratkovskaya, E. L.; Ozvenchuk, V.; Cassing, W.; Konchakovski, V. P.; Linnyk, O.
2013-08-01
We study the shear and bulk viscosities of partonic and hadronic matter - as well as the electric conductivity - as functions of temperature T within the Parton-Hadron-String Dynamics (PHSD) off-shell transport approach. Dynamical hadronic and partonic systems in equilibrium are studied by PHSD simulations in a finite box with periodic boundary conditions. The ratio of the shear viscosity to entropy density η(T)/s(T) from PHSD shows a minimum (with a value of about 0.1) close to the critical temperature Tc, while it approaches the perturbative QCD (pQCD) limit at higher temperatures, in line with lattice QCD results. For T < Tc, i.e. in the hadronic phase, the ratio η/s rises fast with decreasing temperature due to a lower interaction rate of the hadronic system and a significantly smaller number of degrees of freedom. The bulk viscosity ζ(T) - evaluated in the relaxation time approach - is found to depend strongly on the effects of mean fields (or potentials) in the partonic phase. We find a significant rise of the ratio ζ(T)/s(T) in the vicinity of the critical temperature Tc when consistently including the scalar mean field from PHSD, in agreement with lQCD calculations. Furthermore, we present results for the ratio (η + 3ζ/4)/s, which is found to depend non-trivially on temperature and to agree generally with the lQCD calculations as well. Within the PHSD calculations, the strong maximum of ζ(T)/η(T) close to Tc has to be attributed to mean-field (or potential) effects that in PHSD are encoded in the temperature dependence of the quasiparticle masses, which is related to the infrared enhancement of the resummed (effective) coupling g(T). We also find that the dimensionless ratio of the electric conductivity over temperature σ0/T rises above Tc approximately linearly with T up to T = 2.5Tc, but approaches a constant above 5Tc, as expected qualitatively from perturbative QCD (pQCD).
Pion and kaon valence-quark parton distribution functions
Nguyen, Trang; Bashir, Adnan; Roberts, Craig D.; Tandy, Peter C.
2011-06-15
A rainbow-ladder truncation of QCD's Dyson-Schwinger equations, constrained by existing applications to hadron physics, is employed to compute the valence-quark parton distribution functions of the pion and kaon. Comparison is made to π-N Drell-Yan data for the pion's u-quark distribution and to Drell-Yan data for the ratio u_K(x)/u_π(x): the environmental influence of this quantity is a parameter-free prediction, which agrees well with existing data. Our analysis unifies the computation of distribution functions with that of numerous other properties of pseudoscalar mesons.
Nondiagonal parton distributions in the leading logarithmic approximation
NASA Astrophysics Data System (ADS)
Frankfurt, L. L.; Freund, A.; Guzey, V.; Strikman, M.
1998-02-01
In this paper we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal DGLAP evolution equations with a modified version of the CTEQ package, and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution g(x1,x2,t,μ2) can be well approximated at small x by the conventional gluon density xG(x,μ2).
Transverse Momentum-Dependent Parton Distributions From Lattice QCD
Michael Engelhardt, Bernhard Musch, Philipp Haegler, Andreas Schaefer
2012-12-01
Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.
An emergent world of gauge force and partons
NASA Astrophysics Data System (ADS)
Ma, Yao; Weng, Zheng-Yu
2014-09-01
We illustrate how a completely new world of gauge force emerges from a conventional condensed matter system in a rigorous way. A characteristic energy scale (the Mott gap) separates such an exotic universe from the ordinary one that we condensed matter physicists are more familiar with at higher energies. The governing physical law is no longer about individual electrons but concerns fractionalized particles, i.e., partons, the new collective modes resulting from strong correlation among the electrons. Novel phenomena in this low-energy universe are clearly distinguished from Landau's Fermi liquid described by the perturbative quantum many-body theory.
Investigating GPDs in the framework of the double distribution model
NASA Astrophysics Data System (ADS)
Nazari, F.; Mirjalili, A.
2016-06-01
In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ, t, using the double distribution model. By employing these functions, we can extract quantities which make it possible to gain a three-dimensional insight into the nucleon structure at the parton level. GPDs combine and generalize the concepts of ordinary parton distributions and form factors, and provide an exclusive framework to describe the nucleon in terms of quarks and gluons. Here, we first calculate, in the double distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining the quark and gluon angular momenta from the GPD, we are able to calculate scattering observables related to spin asymmetries of the produced quarkonium, denoted AN and ALS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, in order to compare our results with the existing experimental data, we use the difference of polarized cross-sections for an initial longitudinal leptonic beam and unpolarized target particles (ΔσLU). In all cases, our results are in good agreement with the available experimental data.
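The double-distribution construction used in such models can be made concrete with a few lines of numerical integration: H(x,ξ) = ∫ dβ dα δ(x − β − ξα) h(β,α) q(β), with a Radyushkin-type profile h(β,α) whose α-integral is unity for every β, so that H reduces to the forward density at ξ = 0. The toy forward density below is an assumption for illustration, not a GRV or CTEQ fit:

```python
def profile(beta, alpha):
    """Radyushkin-type DD profile: a normalised parabola in alpha,
    h = (3/4) * ((1-|beta|)^2 - alpha^2) / (1-|beta|)^3 for |alpha| < 1-|beta|,
    zero outside, with unit alpha-integral for every beta."""
    b = 1.0 - abs(beta)
    if abs(alpha) >= b:
        return 0.0
    return 0.75 * (b * b - alpha * alpha) / b ** 3

def q(beta):
    """Toy forward quark density (an illustrative assumption, not a fitted PDF)."""
    return 60.0 * beta * (1.0 - beta) ** 3 if 0.0 < beta < 1.0 else 0.0

def gpd_H(x, xi, n=4000):
    """H(x, xi) = int dalpha h(x - xi*alpha, alpha) * q(x - xi*alpha):
    the DD representation with the delta function integrated out (midpoint rule)."""
    total, da = 0.0, 2.0 / n
    for i in range(n):
        alpha = -1.0 + (i + 0.5) * da
        beta = x - xi * alpha
        if 0.0 < beta < 1.0:
            total += profile(beta, alpha) * q(beta) * da
    return total
```

Because the α-integral of the profile is unity, gpd_H(x, 0) reproduces q(x); the same double distribution then generates the full ξ dependence of the model GPD.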
Jo, H S; Girod, F X; Avakian, H; Burkert, V D; Garçon, M; Guidal, M; Kubarovsky, V; Niccolai, S; Stoler, P; Adhikari, K P; Adikaram, D; Amaryan, M J; Anderson, M D; Anefalos Pereira, S; Ball, J; Baltzell, N A; Battaglieri, M; Batourine, V; Bedlinskiy, I; Biselli, A S; Boiarinov, S; Briscoe, W J; Brooks, W K; Carman, D S; Celentano, A; Chandavar, S; Charles, G; Colaneri, L; Cole, P L; Compton, N; Contalbrigo, M; Crede, V; D'Angelo, A; Dashyan, N; De Vita, R; De Sanctis, E; Deur, A; Djalali, C; Dupre, R; Alaoui, A El; Fassi, L El; Elouadrhiri, L; Fedotov, G; Fegan, S; Filippi, A; Fleming, J A; Garillon, B; Gevorgyan, N; Ghandilyan, Y; Gilfoyle, G P; Giovanetti, K L; Goetz, J T; Golovatch, E; Gothe, R W; Griffioen, K A; Guegan, B; Guler, N; Guo, L; Hafidi, K; Hakobyan, H; Harrison, N; Hattawy, M; Hicks, K; Hirlinger Saylor, N; Ho, D; Holtrop, M; Hughes, S M; Ilieva, Y; Ireland, D G; Ishkhanov, B S; Jenkins, D; Joo, K; Joosten, S; Keller, D; Khachatryan, G; Khandaker, M; Kim, A; Kim, W; Klein, A; Klein, F J; Kuhn, S E; Kuleshov, S V; Lenisa, P; Livingston, K; Lu, H Y; MacGregor, I J D; McKinnon, B; Meziani, Z E; Mirazita, M; Mokeev, V; Montgomery, R A; Moutarde, H; Movsisyan, A; Munevar, E; Munoz Camacho, C; Nadel-Turonski, P; Net, L A; Niculescu, G; Osipenko, M; Ostrovidov, A I; Paolone, M; Park, K; Pasyuk, E; Phillips, J J; Pisano, S; Pogorelko, O; Price, J W; Procureur, S; Prok, Y; Puckett, A J R; Raue, B A; Ripani, M; Rizzo, A; Rosner, G; Rossi, P; Roy, P; Sabatié, F; Salgado, C; Schott, D; Schumacher, R A; Seder, E; Simonyan, A; Skorodumina, Iu; Smith, G D; Sokhan, D; Sparveris, N; Stepanyan, S; Strakovsky, I I; Strauch, S; Sytnik, V; Tian, Ye; Tkachenko, S; Ungaro, M; Voskanyan, H; Voutier, E; Walford, N K; Watts, D P; Wei, X; Weinstein, L B; Wood, M H; Zachariou, N; Zana, L; Zhang, J; Zhao, Z W; Zonta, I
2015-11-20
Unpolarized and beam-polarized fourfold cross sections (d^4σ/dQ^2 dx_B dt dϕ) for the ep→e′p′γ reaction were measured using the CLAS detector and the 5.75-GeV polarized electron beam of the Jefferson Lab accelerator, for 110 (Q^2, x_B, t) bins over the widest phase space ever explored in the valence-quark region. Several models of generalized parton distributions (GPDs) describe the data well at most of our kinematics. This increases our confidence that we understand the GPD H, expected to be the dominant contributor to these observables. Through a leading-twist extraction of Compton form factors, these results support the model predictions of a larger nucleon size at lower quark-momentum fraction x_B.
High range resolution radar target identification using the Prony model and hidden Markov models
NASA Astrophysics Data System (ADS)
Dewitt, Mark R.
1992-12-01
Fully polarized Xpatch signatures are transformed to two left circularly polarized signals. These two signals are then filtered by a linear FM pulse compression ('chirp') transfer function, corrupted by AWGN, and filtered by a filter matched to the 'chirp' transfer function. The bandwidth of the 'chirp' radar is about 750 MHz. Range profile feature extraction is performed using the TLS Prony model parameter estimation technique developed at Ohio State University. Using the Prony model, each scattering center is described by a polarization ellipse, relative energy, frequency response, and range. This representation of the target is vector quantized using a K-means clustering algorithm. Sequences of vector-quantized scattering centers, as well as sequences of vector-quantized range profiles, are used to synthesize target-specific hidden Markov models (HMMs). The identification decision is made by determining which HMM has the highest probability of generating the unknown sequence. The data consist of synthesized Xpatch signatures of two targets which have been difficult to separate with other radar target identification (RTI) algorithms. The RTI algorithm developed is clearly able to separate these two targets over a 10 by 10 degree (1 degree granularity) aspect angle window off the nose for SNRs as low as 0 dB. The classification rate is 100 percent for SNRs of 5-20 dB, 95 percent for an SNR of 0 dB, and it drops rapidly for SNRs lower than 0 dB.
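The vector-quantization step can be sketched with a plain K-means in pure Python. The toy 2D feature vectors below stand in for the Prony scattering-center features; this is a generic sketch, not the Ohio State TLS-Prony pipeline:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means over tuples: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        # update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return centroids, labels

# Toy feature vectors forming two well-separated groups:
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centroids, labels = kmeans(points, 2)
# `labels` is the kind of codebook-index sequence that would feed the HMM stage.
```

The discrete codebook indices are what make the HMM stage tractable: each observation sequence becomes a string of symbols whose likelihood each target-specific HMM can score.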
NLC Polarized Positron Photon Beam Target Thermal Structural Modeling
Stein, W; Sheppard, J C
2002-06-11
The NLC polarized positron photon beam target is a 0.4-radiation-length-thick titanium target. Energy deposition from one pulse occurs over 266 nanoseconds and results in heating of the target and pressure pulses straining the material. The 22.1 MeV photon beam has a spot size of 0.75 mm and results in a maximum temperature jump of 233 °C. Stresses are induced in the material by thermal expansion of the hotter material. Peak effective stresses reach 19 ksi (1.34 x 10^8 Pa), which is lower than the yield strength of a titanium alloy by a factor of six.
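As a sanity check on the quoted numbers, the fully constrained thermal-stress estimate σ = EαΔT/(1−ν) gives an upper bound of the right order of magnitude. The titanium material constants below are generic handbook values (assumptions), not taken from the report:

```python
# Upper-bound thermal stress for fully constrained heating: sigma = E*alpha*dT/(1-nu).
E = 110e9       # Young's modulus of titanium, Pa (handbook value, assumed)
alpha = 8.6e-6  # linear thermal expansion coefficient, 1/K (assumed)
nu = 0.32       # Poisson's ratio (assumed)
dT = 233.0      # temperature jump from the abstract, K

sigma = E * alpha * dT / (1.0 - nu)  # ~3e8 Pa
```

The estimate comes out a few times larger than the quoted 1.34 x 10^8 Pa peak effective stress, which is consistent: the actual target is only partially constrained, so the finite-element result should sit below this bound.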
Electromagnetic modelling of Ground Penetrating Radar responses to complex targets
NASA Astrophysics Data System (ADS)
Pajewski, Lara; Giannopoulos, Antonis
2014-05-01
defined through a constant real value, or else its frequency-dispersion properties can be taken into account by incorporating Debye approximations into the model. The electromagnetic source can be represented as a simple line of current (in the case of two-dimensional models), a Hertzian dipole, a bow-tie antenna, or else the realistic description of a commercial antenna can be included in the model [2]. Preliminary results for some of the proposed cells are presented, obtained by using GprMax [3], a freeware tool which solves Maxwell's equations by using a Finite-Difference Time-Domain algorithm of second order in space and time. B-Scans and A-Scans are calculated at 1.5 GHz, for the total electric field and for the field back-scattered by targets embedded in the cells. A detailed description of the structures, together with the relevant numerical results obtained to date, is available for the scientific community on the website of COST Action TU1208, www.GPRadar.eu. Research groups working on the development of electromagnetic forward- and inverse-scattering techniques, as well as on imaging methods, might test and compare the accuracy and applicability of their approaches on the proposed set of scenarios. The aim of this initiative is not to identify the best methods, but rather to indicate the range of reliability of each approach, highlighting its advantages and drawbacks. In the future, the realisation of the proposed concrete cells and the acquisition of GPR experimental data would provide a very effective benchmark for forward and inverse scattering methods. References [1] R. Yelf, A. Ward, "Nine steps to concrete wisdom." Proc. 13th International Conference on Ground Penetrating Radar, Lecce, Italy, 21-25 June 2010, pp. 1-8. [2] C. Warren, A. Giannopoulos, "Creating FDTD models of commercial GPR antennas using Taguchi's optimisation method." Geophysics (2011), 76, article ID G37. [3] A. Giannopoulos, "Modelling ground penetrating radar by GPRMAX
LHAPDF6: parton density access in the LHC precision era
NASA Astrophysics Data System (ADS)
Buckley, Andy; Ferrando, James; Lloyd, Stephen; Nordström, Karl; Page, Ben; Rüfenacht, Martin; Schönherr, Marek; Watt, Graeme
2015-03-01
The Fortran LHAPDF library has been a long-term workhorse in particle physics, providing standardised access to parton density functions for experimental and phenomenological purposes alike, following on from the venerable PDFLIB package. During Run 1 of the LHC, however, several fundamental limitations in LHAPDF's design have become deeply problematic, restricting the usability of the library for important physics-study procedures and providing dangerous avenues by which to silently obtain incorrect results. In this paper we present the LHAPDF 6 library, a ground-up re-engineering of the PDFLIB/LHAPDF paradigm for PDF access which removes all limits on use of concurrent PDF sets, massively reduces static memory requirements, offers improved CPU performance, and fixes fundamental bugs in multi-set access to PDF metadata. The new design, restricted for now to interpolated PDFs, uses centralised numerical routines and a powerful cascading metadata system to decouple software releases from provision of new PDF data and allow completely general parton content. More than 200 PDF sets have been migrated from LHAPDF 5 to the new universal data format, via a stringent quality control procedure. LHAPDF 6 is supported by many Monte Carlo generators and other physics programs, in some cases via a full set of compatibility routines, and is recommended for the demanding PDF access needs of LHC Run 2 and beyond.
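The "cascading metadata" idea can be sketched with the standard library's ChainMap: a key lookup falls through from a PDF member's own info, to its set-level info, to the global configuration, so a more specific layer overrides a more general one. The keys and values here are illustrative assumptions, not LHAPDF's actual data files:

```python
from collections import ChainMap

# Hypothetical metadata layers, in cascade order: member info overrides
# set-level info, which overrides the global configuration defaults.
global_cfg = {"Interpolator": "logcubic", "Extrapolator": "continuation", "AlphaS_MZ": 0.118}
set_info = {"SetDesc": "Example NNLO set", "NumMembers": 101, "AlphaS_MZ": 0.119}
member_info = {"MemberID": 0, "PdfType": "central"}

meta = ChainMap(member_info, set_info, global_cfg)
# meta["AlphaS_MZ"] resolves to the set-level 0.119, not the global 0.118.
```

The design benefit mirrored here is that new PDF data files can ship their own metadata without requiring a new software release: only the layer contents change, not the lookup logic.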
A meta-analysis of parton distribution functions
NASA Astrophysics Data System (ADS)
Gao, Jun; Nadolsky, Pavel
2014-07-01
A "meta-analysis" is a method for comparison and combination of nonperturbative parton distribution functions (PDFs) in a nucleon obtained with heterogeneous procedures and assumptions. Each input parton distribution set is converted into a "meta-parametrization" based on a common functional form. By analyzing parameters of the meta-parametrizations from all input PDF ensembles, a combined PDF ensemble can be produced that has a smaller total number of PDF member sets than the original ensembles. The meta-parametrizations simplify the computation of the PDF uncertainty in theoretical predictions and provide an alternative to the 2010 PDF4LHC convention for combination of PDF uncertainties. As a practical example, we construct a META ensemble for computation of QCD observables at the Large Hadron Collider using the next-to-next-to-leading order PDF sets from CTEQ, MSTW, and NNPDF groups as the input. The META ensemble includes a central set that reproduces the average of LHC predictions based on the three input PDF ensembles and Hessian eigenvector sets for computing the combined PDF+α_s uncertainty at a common QCD coupling strength of 0.118.
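The combination step can be sketched for a single observable: the combined central value is the average of the input ensembles' central predictions, and the uncertainty from the Hessian eigenvector sets follows the usual symmetric master formula. The numbers below are toy values, not real PDF predictions:

```python
import math

def combined_central(predictions):
    """Central value of the combined ensemble: the average of the input
    ensembles' central predictions (as for the META central set)."""
    return sum(predictions) / len(predictions)

def hessian_uncertainty(eigen_pairs):
    """Symmetric Hessian master formula over eigenvector pairs:
    dX = 0.5 * sqrt( sum_i (X(S_i+) - X(S_i-))^2 )."""
    return 0.5 * math.sqrt(sum((plus - minus) ** 2 for plus, minus in eigen_pairs))

# Toy cross-section predictions (pb) from three hypothetical input ensembles,
# and toy plus/minus predictions from two hypothetical eigenvector pairs:
central = combined_central([105.0, 103.0, 107.0])
err = hessian_uncertainty([(106.0, 104.0), (105.5, 104.5)])
```

Reducing many heterogeneous member sets to one common Hessian parametrization is what shrinks the cost of propagating PDF uncertainties through expensive LHC predictions.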
Partonic Equations of State in High-Energy Nuclear Collisions at RHIC
Xu, Nu
2006-10-01
The authors discuss the recent results on the equation of state for partonic matter created at RHIC. Issues of partonic collectivity for multi-strange hadrons and J/ψ from Au + Au collisions at √s_NN = 200 GeV are the focus of this paper.
How large is the gluon polarization in the statistical parton distributions approach?
Soffer, Jacques; Bourrely, Claude; Buccella, Franco
2015-04-10
We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.
The drug-target residence time model: a 10-year retrospective.
Copeland, Robert A
2016-02-01
The drug-target residence time model was first introduced in 2006 and has been broadly adopted across the chemical biology, biotechnology and pharmaceutical communities. While traditional in vitro methods view drug-target interactions exclusively in terms of equilibrium affinity, the residence time model takes into account the conformational dynamics of target macromolecules that affect drug binding and dissociation. The key tenet of this model is that the lifetime (or residence time) of the binary drug-target complex, and not the binding affinity per se, dictates much of the in vivo pharmacological activity. Here, this model is revisited and key applications of it over the past 10 years are highlighted.
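The model's central quantity is simple to state: the residence time is the reciprocal of the dissociation rate constant, τ = 1/k_off, while the equilibrium affinity Kd = k_off/k_on can be identical for compounds with very different lifetimes. A minimal sketch with hypothetical rate constants:

```python
def residence_time(koff):
    """Residence time tau = 1 / k_off (seconds when k_off is in 1/s)."""
    return 1.0 / koff

def kd(kon, koff):
    """Equilibrium dissociation constant Kd = k_off / k_on (molar)."""
    return koff / kon

# Two hypothetical compounds with identical affinity (Kd = 1 nM) but very
# different kinetics: equilibrium measurements cannot tell them apart,
# while their target residence times differ by a factor of 100.
fast = {"kon": 1e6, "koff": 1e-3}   # tau = 1e3 s (minutes)
slow = {"kon": 1e4, "koff": 1e-5}   # tau = 1e5 s (roughly a day)
```

This is exactly the distinction the retrospective emphasizes: a purely affinity-based screen would rank these two compounds as equivalent, yet their in vivo durations of action could differ dramatically.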
The difficulty in measuring suitable targets when modeling victimization.
Popp, Ann Marie
2012-01-01
Target suitability is a critical theoretical concept for opportunity theory. Previous research has primarily measured this concept using demographic characteristics of the study participant, which is problematic. This study corrects the measurement problem by employing bullying variables as alternative measures of target suitability, because they are arguably better at capturing the social and psychological vulnerability that attracts motivated offenders. Using three waves (1999, 2001, & 2003) of the National Crime Victimization Survey (NCVS) School Crime Supplement (SCS), this research explores the impact of the bullying measures, along with demographic characteristics and lifestyle measures, on the likelihood that a student will experience victimization in school. The findings suggest that the bullying measures are better predictors of victimization than the demographic characteristics and lifestyle measures for all three waves. The findings highlight the need for better measures of target suitability, which capture the social and psychological vulnerability of victims, to explain victimization.
Jo, Hyon -Suk
2015-11-17
Unpolarized and beam-polarized four-fold cross sections $\\frac{d^4 \\sigma}{dQ^2 dx_B dt d\\phi}$ for the $ep\\to e^\\prime p^\\prime \\gamma$ reaction were measured using the CLAS detector and the 5.75-GeV polarized electron beam of the Jefferson Lab accelerator, for 110 ($Q^2,x_B,t$) bins over the widest phase space ever explored in the valence-quark region. Several models of Generalized Parton Distributions (GPDs) describe the data well at most of our kinematics. This increases our confidence that we understand the GPD $H$, expected to be the dominant contributor to these observables. Thus, through a leading-twist extraction of Compton Form Factors, these results reveal a tomographic image of the nucleon.
Polarized lepton deep-inelastic scattering from few-nucleon targets
NASA Astrophysics Data System (ADS)
Woloshyn, R. M.
1989-06-01
The structure functions for deep-inelastic scattering of polarized leptons from polarized few-nucleon targets (nucleon, 2H, 3He) are calculated in a parton model. Spin-dependent quark distributions constructed along the lines of the Carlitz-Kaur model are used. The asymmetry for scattering from polarized 3He is small in magnitude and dominated by the neutron contribution. For 2H, cancellation between proton and neutron contributions leads to a very small asymmetry below x≈0.1. Otherwise the asymmetry is large but dominated by the proton.
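In the parton model the numerator of such spin asymmetries is governed by g1(x) = ½ Σ_q e_q² Δq(x). A toy evaluation with assumed polarized densities (the shapes below are illustrative only, with Carlitz-Kaur-like signs, not the distributions used in the paper):

```python
def g1(x, delta_q):
    """Leading-order parton-model structure function
    g1(x) = (1/2) * sum over flavours of e_q^2 * delta_q(x)."""
    charges2 = {"u": (2.0 / 3) ** 2, "d": (1.0 / 3) ** 2, "s": (1.0 / 3) ** 2}
    return 0.5 * sum(charges2[q] * dq(x) for q, dq in delta_q.items())

# Toy polarized densities: u-quarks polarized along the proton spin,
# d-quarks against it (signs as in Carlitz-Kaur-type models).
toy = {
    "u": lambda x: 2.0 * (1.0 - x) ** 3,
    "d": lambda x: -0.5 * (1.0 - x) ** 3,
}
value = g1(0.1, toy)
```

The charge weighting is what makes the proton asymmetry large and positive here (the e_u² = 4/9 factor amplifies the u-quark polarization), and it is the same weighting that lets a polarized 3He target act as an effective neutron target once nuclear polarization factors are included.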
Modeling Criterion Shifts and Target Checking in Prospective Memory Monitoring
ERIC Educational Resources Information Center
Horn, Sebastian S.; Bayen, Ute J.
2015-01-01
Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the…
Wang, Hongyuan; Zhang, Wei; Dong, Aotuo
2012-11-10
A modeling and validation method for the photometric characteristics of space targets was presented in order to track and identify different satellites effectively. The background radiation characteristics of the target were modeled based on blackbody radiation theory. The geometry of the target was described by surface equations in its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function (BRDF) model, which accounts for Gaussian surface statistics and microscale self-shadowing and is obtained by measurement and modeling in advance. The surfaces of the target contributing to the observation system were determined by coordinate transformation according to the relative positions of the space target, the background radiation sources, and the observation platform. A mathematical model of the photometric characteristics of the space target was then built by summing the reflection components of all contributing surfaces. Photometric characteristics of the space-based target were simulated from its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was performed with a scale model of the satellite. The calculated results fit the measured results well, indicating that the modeling method is correct.
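The summation over contributing surfaces described in this abstract can be illustrated with a minimal sketch. This is not the authors' code: the facet representation, the Lambertian BRDF used in the usage example, and all function names are hypothetical, and real BRDF models (with self-shadowing) are far richer.

```python
import numpy as np

def facet_flux(normal, sun_dir, view_dir, area, brdf):
    """Reflected-flux contribution of a single facet (illustrative sketch)."""
    cos_i = max(float(np.dot(normal, sun_dir)), 0.0)   # illumination angle
    cos_v = max(float(np.dot(normal, view_dir)), 0.0)  # visibility to observer
    if cos_i == 0.0 or cos_v == 0.0:
        return 0.0  # facet is shadowed or faces away: no contribution
    return brdf(sun_dir, view_dir, normal) * cos_i * cos_v * area

def target_flux(facets, sun_dir, view_dir, brdf):
    """Total photometric signal: sum over all contributing surfaces."""
    return sum(facet_flux(n, sun_dir, view_dir, a, brdf) for n, a in facets)

# Usage with a hypothetical Lambertian BRDF (albedo / pi):
lambertian = lambda s, v, n: 1.0 / np.pi
```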
Improved target signature definition for modeling performance of high-gain saturated imagery
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Preece, Bradley
2010-04-01
The standard model used to describe the performance of infrared sensors is the U.S. Army thermal target acquisition model, NVThermIP. The model is characterized by the apparent size and contrast of the target, and the resolution and sensitivity of the sensor. Currently, manual gain and level adjustment is used to obtain optimal contrast for military targets. The Night Vision models are calibrated to such images using a spatial average contrast consisting of the root sum squared of the difference between the target and background means, and the standard deviation of the target internal contrast. This definition of contrast applied to the model will show an unrealistic increase in performance for saturated targets. This paper presents a modified definition of target contrast for use in NVThermIP, including a threshold value for the target-to-background mean difference and a means to remove saturated pixels from the standard deviation of the target. Human perception experiments were performed and the measured results are compared with the predicted performance using the modified target contrast definition in NVThermIP.
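The contrast definition described here can be sketched as follows. This is a hedged illustration, not NVThermIP's implementation: the saturation level and cap value are hypothetical parameters, and the exact thresholding used in the paper may differ.

```python
import numpy as np

def rss_contrast(target, background, sat_level=255, mean_diff_cap=None):
    """Root-sum-squared contrast: sqrt(mean_diff^2 + target_internal_std^2),
    with optional modifications for saturated imagery (illustrative sketch)."""
    target = np.asarray(target, dtype=float)
    background = np.asarray(background, dtype=float)
    mean_diff = target.mean() - background.mean()
    if mean_diff_cap is not None:
        # Threshold the target-to-background mean difference.
        mean_diff = np.clip(mean_diff, -mean_diff_cap, mean_diff_cap)
    # Remove saturated pixels before taking the target internal std.
    unsat = target[target < sat_level]
    internal_std = unsat.std() if unsat.size else 0.0
    return float(np.hypot(mean_diff, internal_std))
```

Without the modifications, a saturated target (all pixels pinned at `sat_level`) inflates the mean difference and hence the predicted performance; the cap and pixel removal suppress that artifact.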
An internal model of a moving visual target in the lateral cerebellum
Cerminara, Nadia L; Apps, Richard; Marple-Horvat, Dilwyn E
2009-01-01
In order to overcome the relatively long delay in processing visual feedback information when pursuing a moving visual target, it is necessary to predict the future trajectory of the target if it is to be tracked with accuracy. Predictive behaviour can be achieved through internal models, and the cerebellum has been implicated as a site for their operation. Purkinje cells in the lateral cerebellum (D zones) respond to visual inputs during visually guided tracking and it has been proposed that their neural activity reflects the operation of an internal model of target motion. Here we provide direct evidence for the existence of such a model in the cerebellum by demonstrating an internal model of a moving external target. Single unit recordings of Purkinje cells in lateral cerebellum (D2 zone) were made in cats trained to perform a predictable visually guided reaching task. For all Purkinje cells that showed tonic simple spike activity during target movement, this tonic activity was maintained during the transient disappearance of the target. Since simple spike activity could not be correlated to eye or limb movements, and the target was familiar and moved in a predictable fashion, we conclude that the Purkinje cell activity reflects the operation of an internal model based on memory of its previous motion. Such a model of the target's motion, reflected in the maintained modulation during the target's absence, could be used in a predictive capacity in the interception of a moving object. PMID:19047203
Nucleon helicity and transversity parton distributions from lattice QCD
NASA Astrophysics Data System (ADS)
Chen, Jiunn-Wei; Cohen, Saul D.; Ji, Xiangdong; Lin, Huey-Wen; Zhang, Jian-Hui
2016-10-01
We present the first lattice-QCD calculation of the isovector polarized parton distribution functions (both helicity and transversity) using the large-momentum effective field theory (LaMET) approach for direct Bjorken-x dependence. We first review the detailed steps of the procedure in the unpolarized case, then generalize to the helicity and transversity cases. We also derive a new mass-correction formulation for all three cases. We then compare the effects of each finite-momentum correction using lattice data calculated at Mπ ≈ 310 MeV. Finally, we discuss the implications of these results for the poorly known antiquark structure and predict the sea-flavor asymmetry in the transversely polarized nucleon.
Double parton effects for jets with large rapidity separation
Szczurek, Antoni; Cisek, Anna; Maciuła, Rafal
2015-04-10
We discuss the production of four jets pp → jjjjX with at least two jets with large rapidity separation in proton-proton collisions at the LHC through the mechanism of double-parton scattering (DPS). The cross section is calculated in a factorized approximation. Each hard subprocess is calculated in the LO collinear approximation. The LO pQCD calculations are shown to give a reasonably good description of CMS and ATLAS data on inclusive jet production. It is shown that the relative contribution of DPS grows with increasing rapidity distance between the most remote jets, with center-of-mass energy, and with decreasing (mini)jet transverse momenta. We also show results for azimuthal dijet correlations calculated in the framework of the $k_t$-factorization approximation.
Transverse momentum-dependent parton distribution functions in lattice QCD
Engelhardt, Michael G.; Musch, Bernhard U.; Haegler, Philipp G.; Negele, John W.; Schaefer, Andreas
2013-08-01
A fundamental structural property of the nucleon is the distribution of quark momenta, both parallel as well as perpendicular to its propagation. Experimentally, this information is accessible via selected processes such as semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process (DY), which can be parametrized in terms of transverse momentum-dependent parton distributions (TMDs). On the other hand, these distribution functions can be extracted from nucleon matrix elements of a certain class of bilocal quark operators in which the quarks are connected by a staple-shaped Wilson line serving to incorporate initial state (DY) or final state (SIDIS) interactions. A scheme for evaluating such matrix elements within lattice QCD is developed. This requires casting the calculation in a particular Lorentz frame, which is facilitated by a parametrization of the matrix elements in terms of invariant amplitudes. Exploratory results are presented for the time-reversal odd Sivers and Boer-Mulders transverse momentum shifts.
Transverse momentum-dependent parton distribution functions from lattice QCD
Michael Engelhardt, Philipp Haegler, Bernhard Musch, John Negele, Andreas Schaefer
2012-12-01
Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection. Starting from such a definition, a scheme to determine TMDs in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are obtained using ensembles at the pion masses 369 MeV and 518 MeV, focusing in particular on the dependence of these shifts on the staple extent and a Collins-Soper-type evolution parameter quantifying proximity of the staples to the light cone.
Global NLO Analysis of Nuclear Parton Distribution Functions
Hirai, M.; Kumano, S.; Nagai, T.-H.
2008-02-21
Nuclear parton distribution functions (NPDFs) are determined by a global analysis of experimental measurements of structure-function ratios $F_2^A/F_2^{A'}$ and Drell-Yan cross-section ratios $\sigma_{DY}^A/\sigma_{DY}^{A'}$, and their uncertainties are estimated by the Hessian method. The NPDFs are obtained in both leading order (LO) and next-to-leading order (NLO) in $\alpha_s$. As a result, valence-quark distributions are relatively well determined, whereas antiquark distributions at x > 0.2 and gluon distributions in the whole x region have large uncertainties. The NLO uncertainties are slightly smaller than the LO ones; however, the NLO improvement is not as significant as in the nucleonic case.
Iterative Monte Carlo analysis of spin-dependent parton distributions
NASA Astrophysics Data System (ADS)
Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration
2016-04-01
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
A 2d Integrable Axion Model and Target Space Duality
NASA Astrophysics Data System (ADS)
Forgács, Péter
2001-04-01
A review is given of the recently proposed two-dimensional axion model (O(3) σ-model with a dynamical θ-term) and the T-duality relating it to the SU(2)×U(1) symmetric anisotropic σ-model. The T-duality transformation leads to a new Lax pair. Strong evidence is presented for the correctness of the proposed S-matrix for both models by comparing perturbative and Thermodynamic Bethe Ansatz calculations for different types of free energies. The quantum non-integrability of the O(3) σ-model with a constant θ-term, in contradistinction to the axion model, is illustrated by calculating the 2 → 3 particle production amplitude to lowest order in θ.
Adaptive target detection in foliage-penetrating SAR images using alpha-stable models.
Banerjee, A; Burlina, P; Chellappa, R
1999-01-01
Detecting targets occluded by foliage in foliage-penetrating (FOPEN) ultra-wideband synthetic aperture radar (UWB SAR) images is an important and challenging problem. Given the different nature of target returns in foliage and nonfoliage regions and the very low signal-to-clutter ratio in UWB imagery, conventional detection algorithms fail to yield robust target detection results. A new target detection algorithm is proposed that (1) incorporates symmetric alpha-stable (SαS) distributions for accurate clutter modeling, (2) constructs a two-dimensional (2-D) site model for deriving local context, and (3) exploits the site model for region-adaptive target detection. Theoretical and empirical evidence is given to support the use of the SαS model for image segmentation and constant false alarm rate (CFAR) detection. Results of our algorithm on real FOPEN images collected by the Army Research Laboratory are provided.
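The combination of an SαS clutter model with CFAR thresholding can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: in practice the stability index and scale would be estimated region-by-region from the site model, whereas the values below are arbitrary.

```python
import numpy as np
from scipy.stats import levy_stable

def cfar_threshold(alpha, scale, pfa):
    """Detection threshold under a symmetric alpha-stable (beta = 0) clutter
    model, set so the clutter exceeds it with probability pfa (sketch)."""
    return levy_stable.ppf(1.0 - pfa, alpha, 0.0, loc=0.0, scale=scale)

def cfar_detect(pixels, alpha, scale, pfa=1e-3):
    """Flag pixels whose amplitude exceeds the SαS CFAR threshold."""
    return np.asarray(pixels, dtype=float) > cfar_threshold(alpha, scale, pfa)
```

Because SαS laws are heavy-tailed for alpha < 2, the resulting threshold is much higher than a Gaussian assumption would give, which is what keeps the false-alarm rate constant in spiky UWB clutter.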
Virtual photon structure functions and the parton content of the electron
Drees, M.; Godbole, R.M.
1994-09-01
We point out that in processes involving the parton content of the photon, the usual effective photon approximation should be modified. The reason is that the parton content of virtual photons is logarithmically suppressed compared to real photons. We describe this suppression using several simple, physically motivated Ansätze. Although the parton content of the electron in general no longer factorizes into an electron flux function and a photon structure function, it can still be expressed as a single integral. Numerical examples are given for the $e^+e^-$
Hydrodynamic modeling of laser interaction with micro-structured targets
NASA Astrophysics Data System (ADS)
Velechovsky, J.; Limpouch, J.; Liska, R.; Tikhonchuk, V.
2016-09-01
A model is developed for numerical simulations of laser absorption in plasmas made of porous materials, with particular interest in low-density foams. Laser absorption is treated on two spatial scales simultaneously. At the microscale, the expansion of a thin solid pore wall is modeled in one dimension and the information obtained is used in the macroscale fluid simulations for the description of the plasma homogenization behind the ionization front. This two-scale laser absorption model is implemented in the arbitrary Lagrangian–Eulerian hydrocode PALE. The numerical simulations of laser penetration into low-density foams compare favorably with published experimental data.
CAD Model and Visual Assisted Control System for NIF Target Area Positioners
Tekle, E A; Wilson, E F; Paik, T S
2007-10-03
The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in three dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.
Humanized Mouse Model to Study Bacterial Infections Targeting the Microvasculature
Melican, Keira; Aubey, Flore; Duménil, Guillaume
2014-01-01
Neisseria meningitidis causes a severe, frequently fatal sepsis when it enters the human blood stream. Infection leads to extensive damage of the blood vessels resulting in vascular leak, the development of purpuric rashes and eventual tissue necrosis. Studying the pathogenesis of this infection was previously limited by the human specificity of the bacteria, which makes in vivo models difficult. In this protocol, we describe a humanized model for this infection in which human skin, containing dermal microvessels, is grafted onto immunocompromised mice. These vessels anastomose with the mouse circulation while maintaining their human characteristics. Once introduced into this model, N. meningitidis adhere exclusively to the human vessels, resulting in extensive vascular damage, inflammation and in some cases the development of purpuric rash. This protocol describes the grafting, infection and evaluation steps of this model in the context of N. meningitidis infection. The technique may be applied to numerous human specific pathogens that infect the blood stream. PMID:24747976
Chude-Okonkwo, Uche A K; Malekian, Reza; Maharaj, B T Sunil
2016-04-01
Targeted drug delivery (TDD) for disease therapy using liposomes as nanocarriers has received extensive attention in the literature. The liposome's ability to incorporate capabilities such as long circulation, stimuli responsiveness, and targeting characteristics makes it a versatile nanocarrier. Timely drug release at the targeted site requires that trigger stimuli such as pH, light, and enzymes be uniquely overexpressed at the targeted site. However, in some cases the targeted sites may not express trigger stimuli significantly, and hence achieving effective TDD at those sites is challenging. In this paper, we present a molecular communication-based TDD model for the delivery of therapeutic drugs to multiple sites that may or may not express trigger stimuli. The nanotransmitter and nanoreceiver models for the molecular communication system are presented. Here, the nanotransmitter and nanoreceiver are injected into the targeted body system's blood network. A compartmental pharmacokinetics model is employed to model the transport of these therapeutic nanocarriers to the targeted sites, where they are meant to anchor before the delivery process commences. We also provide analytical expressions for the delivered drug concentration. The effectiveness of the proposed model is investigated for drug delivery on tissue surfaces. Results show that the effectiveness of the proposed molecular communication-based TDD depends on parameters such as the total transmitter volume capacity, the receiver radius, the diffusion characteristic of the microenvironment of the targeted sites, and the concentration of the enzymes associated with the nanotransmitter and nanoreceiver designs.
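The compartmental pharmacokinetics idea invoked here can be sketched with a minimal two-compartment model: a central (blood) compartment exchanging nanocarriers with a peripheral (target-site) compartment, with elimination from the blood. All rate constants below are hypothetical and chosen only for illustration; the paper's model has more structure.

```python
import numpy as np
from scipy.integrate import odeint

def two_compartment(y, t, k12, k21, kel):
    """Two-compartment kinetics sketch: c = blood, p = target site."""
    c, p = y
    dc = -(kel + k12) * c + k21 * p  # blood: elimination plus exchange
    dp = k12 * c - k21 * p           # target site: uptake minus return
    return [dc, dp]

t = np.linspace(0.0, 24.0, 200)  # hours after injection
# Unit dose in blood, nothing at the target site initially.
sol = odeint(two_compartment, [1.0, 0.0], t, args=(0.5, 0.2, 0.1))
```

The target-site concentration rises as carriers anchor, then decays as elimination drains the system, which is the qualitative profile a TDD design tries to shape.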
Tang, Jing; Aittokallio, Tero
2014-01-01
Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.
The primary target model of energetic ions penetration in thin botanic samples
NASA Astrophysics Data System (ADS)
Wang, Yugang; Du, Guanghua; Xue, Jianming; Liu, Feng; Wang, Sixue; Yan, Sha; Zhao, Weijiang
2002-08-01
Transmission spectra of very low current MeV H+ ions through two kinds of botanic samples, kidney bean slices and onion endocuticle, were measured. The experimental spectra confirmed that the botanic samples are inhomogeneous in mass density. A target model with a local density approximation was proposed to describe the penetration of energetic ions in such materials. From fits to the proton transmission spectra at two energies, this target model received preliminary verification. By including the influence of surface roughness and irradiation damage, the target model could be improved to predict the penetration depth profile and range distribution of energetic ions in botanic samples.
Analytical model for release calculations in solid thin-foils ISOL targets
NASA Astrophysics Data System (ADS)
Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.
2016-10-01
A detailed analytical model has been developed to simulate isotope-release curves from thin-foils ISOL targets. It involves the separate modeling of diffusion and effusion inside the target. The former has been modeled using both first and second Fick's law. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with calculated in-target production.
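The convolution of diffusion and effusion described in this abstract can be sketched numerically. This is an illustrative reconstruction, not the authors' code: only the leading term of the Fickian series is kept, and the rate constants are arbitrary rather than fitted ISOLDE values.

```python
import numpy as np

def diffusion_rate(t, lam_d):
    """Leading series term of the Fickian release rate from a thin foil."""
    return lam_d * np.exp(-lam_d * t)

def effusion_delay(t, w, lam1, lam2):
    """Double-exponential fit to the simulated effusion delay-time distribution."""
    return w * lam1 * np.exp(-lam1 * t) + (1.0 - w) * lam2 * np.exp(-lam2 * t)

def release_curve(t, lam_d, w, lam1, lam2):
    """Release curve = (diffusion rate) convolved with (effusion delay)."""
    dt = t[1] - t[0]
    full = np.convolve(diffusion_rate(t, lam_d),
                       effusion_delay(t, w, lam1, lam2)) * dt
    return full[: t.size]  # keep the causal part on the original time grid
```

Since both factors are normalized rate densities, the convolved curve integrates to (approximately) one over a sufficiently long time window, i.e. every released atom eventually escapes.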
Hepatic or splenic targeting of carrier erythrocytes: a murine model
Zocchi, E.; Guida, L.; Benatti, U.; Canepa, M.; Borgiani, L.; Zanin, T.; De Flora, A.
1987-10-01
Carrier mouse erythrocytes, i.e., red cells subjected to a dialysis technique involving transient hypotonic hemolysis and isotonic resealing, were treated in vitro in three different ways: (a) energy depletion by exposure for 90 min at 42 °C; (b) desialylation by incubation with neuraminidase; and (c) oxidative stress by incubation with H2O2 and NaN3. Procedure (c) afforded maximal damage, as shown by analysis of biochemical properties of the treated erythrocytes. Reinfusion in mice of the variously manipulated erythrocytes following their 51Cr labeling showed extensive fragilization, as indicated by rapid clearance of radioactivity from the circulation. Moreover, both the energy-depleted and the neuraminidase-treated erythrocytes showed preferential liver uptake, reaching 50 and 75%, respectively, within 2 h. On the other hand, exposure of erythrocytes to the oxidant stress triggered largely splenic removal, accounting for almost 40% of the reinjected cells within 4 h. Transmission electron microscopy of liver from mice receiving energy-depleted erythrocytes demonstrated remarkable erythrocyte congestion within the sinusoids, followed by hyperactivity of Kupffer cells and subsequent thickening of the perisinusoidal space of Disse. Concomitantly, levels of serum transaminase activities were moderately increased. Each of the three procedures for manipulation of carrier erythrocytes may prove applicable under conditions where selective targeting of erythrocyte-encapsulated chemicals and drugs to either the liver or the spleen has to be achieved.
Inorganic Nanovehicle Targets Tumor in an Orthotopic Breast Cancer Model
NASA Astrophysics Data System (ADS)
Choi, Goeun; Kwon, Oh-Joon; Oh, Yeonji; Yun, Chae-Ok; Choy, Jin-Ho
2014-03-01
The clinical efficacy of the conventional chemotherapeutic agent methotrexate (MTX) can be limited by its very short plasma half-life, drug resistance, and the high dosage required for cancer cell suppression. In this study, a new drug delivery system is proposed to overcome such limitations. To realize such a system, MTX was intercalated into layered double hydroxides (LDHs), an inorganic drug delivery vehicle, through a co-precipitation route to produce an MTX-LDH nanohybrid with an average particle size of approximately 130 nm. Biodistribution studies in mice bearing orthotopic human breast tumors revealed that the tumor-to-liver ratio of MTX in the MTX-LDH-treated group was 6-fold higher than that of the MTX-treated group 2 hr after drug treatment. Moreover, MTX-LDH exhibited a superior targeting effect, resulting in high antitumor efficacy, inducing a 74.3% reduction in tumor volume compared to MTX alone and, as a consequence, significant survival benefits. Annexin-V and propidium iodide dual staining and TUNEL analysis showed that MTX-LDH induced a greater degree of apoptosis than free MTX. Taken together, our data demonstrate that the new MTX-LDH nanohybrid exhibits a superior efficacy profile and improved distribution compared to MTX alone and has the potential to enhance therapeutic efficacy via inhibition of tumor proliferation and induction of apoptosis.
Modeling the behavior of a light-water production reactor target rod
Sherwood, D.J.
1992-03-01
Pacific Northwest Laboratory has been conducting a series of in-reactor experiments in the Idaho National Engineering Laboratory (INEL) Advanced Test Reactor (ATR) to determine the amount of tritium released by permeation from a target rod under neutron irradiation. The model discussed in this report was developed from first principles to model the behavior of the first target rod irradiated in the ATR. The model can be used to determine predictive relationships for the amount of tritium that permeates through the target rod cladding during irradiation. The model consists of terms and equations for tritium production, gettering, partial pressure, and permeation, all of which are described in this report. The model addresses only the steady-state condition and features only a single adjustable parameter. The target rod design for producing tritium in a light-water reactor was tested first in the WC-1 in-reactor experiment. During irradiation, tritium is generated in the target rod within the ceramic lithium target material. The target rod has been engineered to limit the release of tritium to the reactor coolant during normal operations. The engineered features are a nickel-plated Zircaloy-4 getter and a barrier coating on the cladding surfaces. The ceramic target is wrapped with the getter material and the resulting "pencils" are inserted into the barrier-coated cladding. These features of the rod are described in the report, along with the release of tritium from the ceramic target. The steady-state model could be useful for the design procedure of target rod components.
NASA Astrophysics Data System (ADS)
Dehmollaian, Mojtaba
This thesis focuses on the application of radio waves for detection and recognition of visually obscured targets. To provide practical solutions, comprehensive forward and inverse models are needed to capture and exploit the physical phenomena involved. These models must accurately simulate wave propagation in the environment in which the target is embedded, scattering from the target, and wave interaction between the medium scatterers and the target. In this dissertation, two problems of major importance are investigated. The first is detection of complex targets camouflaged inside forest, and the second pertains to imaging of building interiors and detection of targets within. In the early chapters, a hybrid target-foliage model is developed to investigate the scattering behavior of hard targets embedded inside a forest canopy. This model is composed of two parts, one for the foliage and the other for hard targets. The connection between these two models, which accounts for the first-order interaction between the foliage scatterers and the target, is accomplished through the application of the reciprocity theorem. The foliage penetration model is based on the coherent single-scattering theory developed previously. The target scattering model is based on either an exact numerical finite-difference time-domain technique or a high-frequency asymptotic iterative physical optics approximation. Given the hybrid target-foliage model, a polarization synthesis optimization method for improving signal-to-clutter ratio is presented, using genetic algorithms. In the later chapters, the problem of through-wall imaging using the synthetic aperture radar technique, employing ultra-wideband antennas and scanning over a wide range of incidence angles, is investigated. Theoretical and experimental studies of the effects of different walls on point target images are carried out, and refocusing approaches are introduced to remove the wall effects and restore the image resolution.
Impact modeling and prediction of attacks on cyber targets
NASA Astrophysics Data System (ADS)
Khalili, Aram; Michalk, Brian; Alford, Lee; Henney, Chris; Gilbert, Logan
2010-04-01
In most organizations, IT (information technology) infrastructure exists to support the organization's mission. The threat of cyber attacks poses risks to this mission. Current network security research focuses on the threat of cyber attacks to the organization's IT infrastructure; however, the risks to the overall mission are rarely analyzed or formalized. This connection of IT infrastructure to the organization's mission is often neglected or carried out ad-hoc. Our work bridges this gap and introduces analyses and formalisms to help organizations understand the mission risks they face from cyber attacks. Modeling an organization's mission vulnerability to cyber attacks requires a description of the IT infrastructure (network model), the organization mission (business model), and how the mission relies on IT resources (correlation model). With this information, proper analysis can show which cyber resources are of tactical importance in a cyber attack, i.e., controlling them enables a large range of cyber attacks. Such analysis also reveals which IT resources contribute most to the organization's mission, i.e., lack of control over them gravely affects the mission. These results can then be used to formulate IT security strategies and explore their trade-offs, which leads to better incident response. This paper presents our methodology for encoding IT infrastructure, organization mission and correlations, our analysis framework, as well as initial experimental results and conclusions.
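The mission-dependency analysis described above can be sketched very crudely as a graph problem: map each IT asset to the mission tasks that rely on it, then rank assets by how many tasks fail without them. The asset and task names below are hypothetical illustrations, not from the paper, and a real correlation model would handle redundancy and partial dependence.

```python
# Toy correlation model: IT assets mapped to the mission tasks that
# depend on them (all names are invented for illustration).
correlation = {
    "mail_server": {"notify_staff"},
    "db_server": {"billing", "inventory"},
    "router": {"notify_staff", "billing", "inventory"},
}

def mission_impact(asset):
    """Mission tasks lost if an attacker controls or disables this asset."""
    return correlation.get(asset, set())

def most_critical(assets):
    """Rank assets by how many mission tasks depend on them."""
    return max(assets, key=lambda a: len(mission_impact(a)))

critical = most_critical(correlation)
```

Even this trivial ranking reproduces the paper's point: the router is tactically critical because every mission task transits it, information that pure network-vulnerability scanning would not surface.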
Modelling Sensor and Target effects on LiDAR Waveforms
NASA Astrophysics Data System (ADS)
Rosette, J.; North, P. R.; Rubio, J.; Cook, B. D.; Suárez, J.
2010-12-01
The aim of this research is to explore the influence of sensor characteristics, and of interactions with vegetation and terrain properties, on the estimation of vegetation parameters from LiDAR waveforms. This is carried out using waveform simulations produced by the FLIGHT radiative transfer model, which is based on Monte Carlo simulation of photon transport (North, 1996; North et al., 2010). The opportunities for vegetation analysis offered by LiDAR modelling are also demonstrated by other authors, e.g. Sun and Ranson, 2000; Ni-Meister et al., 2001. Simulations from the FLIGHT model were driven using reflectance and transmittance properties collected from the Howland Research Forest, Maine, USA in 2003, together with a tree list for a 200 m x 150 m area. This was generated using field measurements of location, species and diameter at breast height. Tree height and crown dimensions of individual trees were calculated using relationships established with a competition index determined for this site. Waveforms obtained by the Laser Vegetation Imaging Sensor (LVIS) were used as validation of the simulations. This provided a base from which factors such as slope, laser incidence angle and pulse width could be varied, enabling the effects of instrument design and of laser interactions with different surface characteristics to be tested. As such, waveform simulation is relevant for the development of future satellite LiDAR sensors, such as NASA's forthcoming DESDynI mission (NASA, 2010), which aims to improve capabilities of vegetation parameter estimation. ACKNOWLEDGMENTS We would like to thank scientists at the Biospheric Sciences Branch of NASA Goddard Space Flight Center, in particular Jon Ranson and Bryan Blair. This work forms part of research funded by the NASA DESDynI project and the UK Natural Environment Research Council (NE/F021437/1). REFERENCES NASA, 2010, DESDynI: Deformation, Ecosystem Structure and Dynamics of Ice. http
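One of the sensor effects studied above, pulse width, can be illustrated with a toy waveform model: each surface return is smeared by the transmit pulse, so a wider pulse lowers and broadens the recorded peaks. This is only a Gaussian-pulse caricature with invented numbers; FLIGHT performs a full Monte Carlo photon-transport simulation.

```python
import math

def gaussian(t, sigma):
    """Normalised Gaussian transmit-pulse shape."""
    return math.exp(-0.5 * (t / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def waveform(times, surface_echoes, pulse_sigma):
    """Recorded waveform: each echo is a (delay, amplitude) pair smeared
    by the transmit pulse; samples are returned at the given times."""
    return [sum(a * gaussian(t - d, pulse_sigma) for d, a in surface_echoes)
            for t in times]

times = [i * 0.1 for i in range(101)]   # sample times (ns, invented)
echoes = [(3.0, 1.0), (7.0, 0.5)]       # canopy top + weaker ground return
narrow = waveform(times, echoes, pulse_sigma=0.3)
wide = waveform(times, echoes, pulse_sigma=1.0)
```

The narrow-pulse waveform keeps a sharp canopy-top peak at the 3 ns delay, while the wide pulse flattens it, the kind of instrument-design trade-off the simulations above are used to quantify.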
Healthy latrine development model to achieve MDGs target
NASA Astrophysics Data System (ADS)
Soedjono, Eddy S.; Arumsari, Nurvita
2014-03-01
A case in the Pungging sub-district is one example of the low level of healthy habits among East Java inhabitants. According to data from the Mojokerto district Health Service, by the end of 2010 there were 219 families (about 8% of the total families in the Pungging sub-district) that did not have their own latrine. Moreover, looking closely at their prosperity level, the percentage of disadvantaged and prosperity-level-I families is still fairly high, about 29.54% of the total number of families in the Pungging sub-district. Accordingly, comprehensive studies of basic sanitation requirements are needed, in terms of both quantity and quality. Furthermore, studies of people's knowledge and understanding of healthy sanitation are also needed in order to understand people's demand to own a latrine (willingness to pay) and their ability to pay. Consequently, a healthy latrine design that matches people's demand and ability is needed in order to achieve the target of Open Defecation Free (ODF) status by 2015. The research methodology includes a literature study, data collection, data analysis, and healthy latrine design. Of 75 respondents, only 32% attended a counselling program on healthy latrines and only 48% have knowledge of healthy latrines, yet 96% of respondents stated that a healthy latrine is important. A healthy latrine, according to the respondents, is a place of defecation (BAB) with components such as a latrine bowl or septic tank. The estimated WTP distribution is divided into two categories: for the low category, with willingness to pay ranging from IDR 0 to IDR 200,000, the total is IDR 90,048,000; for the high category, with willingness to pay of more than IDR 1,000,000, the total is IDR 749,964,768. The respondents' estimated ATP in the study area for the sanitation maintenance service ranges from IDR 7,000 to IDR 30,000.
Inferring multi-target QSAR models with taxonomy-based multi-task learning
2013-01-01
Background A plethora of studies indicate that the development of multi-target drugs is beneficial for complex diseases like cancer. Accurate QSAR models for each of the desired targets assist the optimization of a lead candidate by the prediction of affinity profiles. Often, the targets of a multi-target drug are sufficiently similar such that, in principle, knowledge can be transferred between the QSAR models to improve the model accuracy. In this study, we present two different multi-task algorithms from the field of transfer learning that can exploit the similarity between several targets to transfer knowledge between the target-specific QSAR models. Results We evaluated the two methods on simulated data and a data set of 112 human kinases assembled from the public database ChEMBL. The relatedness between the kinase targets was derived from the taxonomy of the human kinome. The experiments show that multi-task learning increases the performance compared to training separate models on both types of data, given a sufficient similarity between the tasks. On the kinase data, the best multi-task approach improved the mean squared error of the QSAR models of 58 kinase targets. Conclusions Multi-task learning is a valuable approach for inferring multi-target QSAR models for lead optimization. The application of multi-task learning is most beneficial if knowledge can be transferred from a similar task with a lot of in-domain knowledge to a task with little in-domain knowledge. Furthermore, the benefit increases with a decreasing overlap between the chemical space spanned by the tasks. PMID:23842210
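The core intuition, that a data-poor task can borrow statistical strength from a similar data-rich task, can be shown with a deliberately tiny 1-D example. This is a crude stand-in for the paper's taxonomy-based multi-task learners, and all numbers below are invented.

```python
# Two hypothetical "tasks" share the same true slope (2.0).  Task B has
# only two points, one of them noisy; pooling task A's clean data pulls
# the fit back toward the truth.  This is plain data pooling, the
# simplest possible form of knowledge transfer.

def slope_through_origin(xs, ys):
    """Least-squares slope for y = s * x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xa, ya = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # task A: clean, true slope 2
xb, yb = [1.0, 2.0], [2.0, 5.0]             # task B: sparse, one noisy point

s_separate = slope_through_origin(xb, yb)            # task B alone: 2.4
s_pooled = slope_through_origin(xa + xb, ya + yb)    # with transfer: ~2.1
```

Real multi-task methods weight the borrowed data by task similarity rather than pooling blindly, which is exactly where the kinome taxonomy enters in the paper.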
Transverse dynamics of hard partons in nuclear media and the QCD dipole
NASA Astrophysics Data System (ADS)
Wiedemann, Urs Achim
2000-08-01
We derive the non-abelian generalization of the Furry approximation which describes the transverse dynamical evolution of a hard projectile parton inside a spatially extended colour target field. This provides a unified starting point for the target rest frame description of the nuclear dependence of a large class of observables. For the case of the virtual γ* → q q̄ photoabsorption cross section, we then investigate in detail under which conditions the nuclear dependence encoded in the Furry wavefunctions can be parametrized by a q q̄ QCD dipole cross section. The important condition is colour triviality, i.e., the property that for arbitrary N-fold rescattering contributions the only non-vanishing colour trace is N_c C_F^N. We give proofs of the colour triviality of the inelastic, diffractive and total photoabsorption cross sections measured inclusively or with one jet resolved in the final state. Also, we list examples for which colour interference effects remain. Colour triviality allows us to write the γ* → q q̄ contribution to the DIS nuclear structure function F_2 for small Bjorken x_Bj in terms of a path integral which describes the transverse size evolution of the q q̄ pair in the nuclear colour field. This expression reduces in an opacity expansion to the N = 1 result of Nikolaev and Zakharov, and in the eikonal approximation to the Glauber-type rescattering formulas first derived by Mueller. In the harmonic oscillator approximation of the path integral, we quantify deviations from the eikonal limit. Their onset is characterized by the scales L/l_f and E_⊥tot L, which relate the longitudinal extension L of the nuclear target to the coherence length l_f and the total transverse energy E_⊥tot accumulated by the q q̄ pair.
Evolution of anisotropy of a partonic system from relativistic heavy-ion collisions
Jas, Weronika; Mrowczynski, Stanislaw
2007-10-15
The evolution of anisotropy in momentum and coordinate space of the parton system produced in relativistic heavy-ion collisions is discussed within the free-streaming approximation. The momentum distribution evolves from the prolate shape (elongated along the beam) to the oblate one (squeezed along the beam). At the same time, the eccentricity in coordinate space, which occurs at finite values of impact parameter, decreases. It is argued that the parton system reaches local thermodynamic equilibrium before the momentum distribution becomes oblate.
NASA Astrophysics Data System (ADS)
Hamann, C.; Zhu, M.-H.; Wünnemann, K.; Hecht, L.; Stöffler, D.
2016-08-01
We directly compare shock zoning (representing shock pressures from ~59 to ~5 GPa) preserved in layered melt particles recovered from impact experiments with quartz sand targets with numerical models of crater formation and shock wave attenuation.
Theory of high-energy electron scattering by composite targets
Coester, F.
1988-01-01
The emphasis of these expository lectures is on the role of relativistic invariance and the unity of the theory for medium and high energies. Sec. 2 introduces the kinematic notation and provides an elementary derivation of the general cross section. The relevant properties of the Poincare group and the transformation properties of current operators and target states are described in Sec. 3. In Sec. 4, representations of target states with kinematic light-front symmetry are briefly discussed. The focus is on two applications: an impulse approximation for inclusive electron-nucleus scattering at both medium and high energies, and a parton model of the proton applied to deep inelastic scattering of polarized electrons by polarized protons. 19 refs.
The Brain-Targeted Teaching Model for 21st-Century Schools
ERIC Educational Resources Information Center
Hardiman, Mariale
2012-01-01
"The Brain-Targeted Teaching Model for 21st-Century Schools" serves as a bridge between research and practice by providing a cohesive, proven, and usable model of effective instruction. Compatible with other professional development programs, this model shows how to apply relevant research from educational and cognitive neuroscience to classroom…
Targeting Forest Management through Fire and Erosion Modeling
NASA Astrophysics Data System (ADS)
Elliot, William J.; Miller, Mary Ellen; MacDonald, Lee H.
2013-04-01
Forests deliver a number of ecosystem services, including clean water. When forests are disturbed by wildfire, the timing and quantity of runoff can be altered, and the quality can be severely degraded. A modeling study of about 1500 km2 in the Upper Mokelumne River Watershed in California was conducted to determine the risk of wildfire and the associated potential sediment delivery should a wildfire occur, and to calculate the potential reduction in sediment delivery that might result from fuel reduction treatments. The first step was to predict wildfire severity and probability of occurrence under current vegetation conditions with the FlamMap fire prediction tool. FlamMap uses current vegetation, topography, and wind characteristics to predict the speed, flame length, and direction of a simulated flame front for each 30-m pixel. As the first step in the erosion modeling, a geospatial interface for the WEPP model (GeoWEPP) was used to delineate approximately 6-ha hillslope polygons for the study area. The flame length values from FlamMap were then aggregated for each hillslope polygon to yield a predicted fire intensity. Fire intensity and pre-fire vegetation conditions were used to estimate fire severity (unburned, low, moderate, or high). The fire severity was combined with soil properties from the STATSGO database to build the vegetation and soil files needed to run WEPP for each polygon. Eight different stochastic climates were generated to account for the weather variability within the basin. A modified batching version of GeoWEPP was used to predict the first-year post-fire sediment yield from each hillslope and subwatershed. Estimated sediment yields ranged from 0 to more than 100 Mg/ha, and were typical of observed values. The polygons that generated the greatest amount of sediment or that were critical for reducing fire spread were identified, and these were "treated" by reducing the amount of fuel available for a wildfire. The erosion associated with
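The flame-length-to-severity step in the chain above can be sketched as a simple binning of the per-polygon mean flame length. The numeric thresholds below are invented for illustration only; the study's actual mapping also folds in pre-fire vegetation conditions.

```python
# Sketch of mapping aggregated FlamMap flame lengths to burn-severity
# classes per hillslope polygon.  Thresholds are hypothetical.

def severity_class(flame_length_m):
    """Map a mean flame length (metres) to a severity label."""
    if flame_length_m <= 0.0:
        return "unburned"
    if flame_length_m < 1.2:
        return "low"
    if flame_length_m < 2.4:
        return "moderate"
    return "high"

def hillslope_severity(pixel_flame_lengths):
    """Aggregate 30-m pixel flame lengths over one ~6-ha polygon."""
    mean = sum(pixel_flame_lengths) / len(pixel_flame_lengths)
    return severity_class(mean)
```

The resulting class per polygon is what would then select the vegetation and soil parameter files for the WEPP erosion run.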
Computational modeling of "MAGO" and other magnetized target fusion concepts
Lindemuth, I.R.; Kirkpatrick, R.C.; Reinovsky, R.E.; Sheehey, P.T.; Thurston, R.S.; Wysocki, F.J.
1993-01-01
One possible way to obtain a preheated and magnetized plasma suitable for subsequent implosion is the "MAGO" concept. The unique MAGO discharge consists of two chambers, with electrical current flowing in one chamber accelerating plasma flow into an implosion chamber. Up to 4 × 10^13 D-T neutrons have been produced in the MAGO discharge. In this paper, we discuss our computational modeling of MAGO. Our objectives are to characterize the plasma, compare with the limited diagnostics available, and understand the neutron production. We also discuss, briefly, some other possible means of creating a magnetized plasma.
Automated parton-shower variations in pythia 8
NASA Astrophysics Data System (ADS)
Mrenna, S.; Skands, P.
2016-10-01
In the era of precision physics measurements at the LHC, efficient and exhaustive estimations of theoretical uncertainties play an increasingly crucial role. In the context of Monte Carlo (MC) event generators, the estimation of such uncertainties traditionally requires independent MC runs for each variation, for a linear increase in total run time. In this work, we report on an automated evaluation of the dominant (renormalization-scale and nonsingular) perturbative uncertainties in the pythia 8 event generator, with only a modest computational overhead. Each generated event is accompanied by a vector of alternative weights (one for each uncertainty variation), with each set separately preserving the total cross section. Explicit scale-compensating terms can be included, reflecting known coefficients of higher-order splitting terms and reducing the effect of the variations. The formalism also allows for the enhancement of rare partonic splittings, such as g → bb̄ and q → qγ, to obtain weighted samples enriched in these splittings while preserving the correct physical Sudakov factors.
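The weight vector rests on a simple veto-algorithm identity: if the nominal shower accepts a trial branching with probability P and rejects it with probability 1 − P, then a variation with probability P' is recovered, without rerunning, by multiplying the event weight by P'/P on acceptance and (1 − P')/(1 − P) on rejection. The snippet below is a standalone toy of that identity with made-up probabilities, not pythia code.

```python
# Veto-algorithm reweighting: one multiplicative factor per trial
# emission turns a nominal-shower sample into an unbiased sample of a
# varied shower (e.g. a shifted renormalization scale).

def branching_weight(accepted, p_nominal, p_variation):
    """Weight factor for a single trial branching under a variation."""
    if accepted:
        return p_variation / p_nominal
    return (1.0 - p_variation) / (1.0 - p_nominal)

# One accepted and one vetoed trial, with the branching probability
# shifted from 0.10 to 0.15 (hypothetical numbers):
w = branching_weight(True, 0.10, 0.15) * branching_weight(False, 0.10, 0.15)
```

Because every accept/reject outcome is reweighted consistently, the sample average of such weights preserves the total cross section, as stated in the abstract.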
Energy flow along the medium-induced parton cascade
NASA Astrophysics Data System (ADS)
Blaizot, J.-P.; Mehtar-Tani, Y.
2016-05-01
We discuss the dynamics of parton cascades that develop in dense QCD matter, and contrast their properties with those of similar cascades of gluon radiation in vacuum. We argue that such cascades belong to two distinct classes, characterized respectively by an increasing or a constant (or decreasing) branching rate along the cascade. In the former class, of which the BDMPS medium-induced cascade constitutes a typical example, it takes a finite time to transport a finite amount of energy to very soft quanta, while this time is essentially infinite in the latter case, to which the DGLAP cascade belongs. The medium-induced cascade is accompanied by a constant flow of energy towards arbitrarily soft modes, leading eventually to the accumulation of the initial energy of the leading particle at zero energy. It also exhibits scaling properties akin to wave turbulence. These properties do not show up in the cascade that develops in vacuum. There, the energy accumulates in the spectrum at smaller and smaller energies as the cascade develops, but the energy never flows all the way down to zero energy. Our analysis suggests that the way the energy is shared among the offspring of a splitting gluon has little impact on the qualitative properties of the cascades, provided the kernel that governs the splittings is not too singular.
Deeply virtual Compton scattering and generalized parton distributions at CLAS
Niccolai, Silvia
2008-11-01
The exclusive electroproduction of real photons and mesons at high momentum transfer allows us to access the Generalized Parton Distributions (GPDs). The formalism of the GPDs provides a unified description of the hadronic structure in terms of quark and gluonic degrees of freedom. In particular, Deeply Virtual Compton Scattering (DVCS), ep → e'p'γ, is one of the key reactions for determining the GPDs experimentally, as it is the simplest process that can be described in terms of GPDs. A dedicated experiment to study DVCS has been carried out in Hall B at Jefferson Lab. Beam-spin asymmetries, resulting from the interference of the Bethe-Heitler process and DVCS, have been extracted over the widest kinematic range ever accessed for this reaction (1.2 < Q^2 < 3.7 (GeV/c)^2, 0.09 < -t < 1.3 (GeV/c)^2, 0.13 < x_B < 0.46). In this paper, the results obtained experimentally are shown and compared to GPD parametrizations.
Redux on "When is the top quark a parton?"
NASA Astrophysics Data System (ADS)
Dawson, S.; Ismail, A.; Low, Ian
2014-07-01
If a new heavy particle ϕ is produced in association with the top quark in a hadron collider, the production cross section exhibits a collinear singularity of the form log(mϕ/mt), which can be resummed by introducing a top quark parton distribution function (PDF). We reassess the necessity of such resummation in the context of a high-energy pp collider. We find that the introduction of a top PDF typically has a small effect at √S ~ 100 TeV due to three factors: (1) αs at the scale μ = mϕ, which is quite small when log(mϕ/mt) is large; (2) the Bjorken x ≪ 1 for mϕ ≲ 10 TeV; and (3) the kinematic region where log(mϕ/mt) ≫ 1 is suppressed by phase space. We consider the example of pp → tH+ at next-to-leading logarithm (NLL) order and show that, in terms of the total cross section, the effect of a top PDF is generically smaller than that of a bottom PDF in the associated production of bϕ. However, in the pT distribution of the charged Higgs, the NLL calculation using a top PDF is crucial to generate the pT distribution for pT ≲ mt.
A global reanalysis of nuclear parton distribution functions
NASA Astrophysics Data System (ADS)
Eskola, Kari J.; Kolhinen, Vesa J.; Paukkunen, Hannu; Salgado, Carlos A.
2007-05-01
We determine the nuclear modifications of parton distribution functions of bound protons at scales Q^2 ≥ 1.69 GeV^2 and momentum fractions 10^-5 ≤ x ≤ 1 in a global analysis which utilizes nuclear hard process data, sum rules and leading-order DGLAP scale evolution. The main improvements over our earlier work EKS98 are the automated χ^2 minimization, simplified and better controllable fit functions, and, most importantly, the possibility for error estimates. The resulting 16-parameter fit to the N = 514 data points is good, χ^2/d.o.f. = 0.82. Within the error estimates obtained, the old EKS98 parametrization is found to be fully consistent with the present analysis, with no essential difference in terms of χ^2 either. We also determine separate uncertainty bands for the nuclear gluon and sea quark modifications in the large-x region where they are not stringently constrained by the available data. Comparison with other global analyses is shown and uncertainties demonstrated. Finally, we show that RHIC-BRAHMS data for inclusive hadron production in d+Au collisions lend support to a stronger gluon shadowing at x < 0.01, and also that fairly large changes in the gluon modifications do not rapidly deteriorate the goodness of the overall fits, as long as the initial gluon modifications in the region x ~ 0.02-0.04 remain small.
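The goodness-of-fit figure quoted above is χ² per degree of freedom. As a reminder of the quantity being minimized in such a global fit, here is a minimal version with invented data, model values, and errors:

```python
# Minimal chi-squared: sum of squared, error-normalised residuals.
# All data points, model values, and uncertainties below are invented.

def chi2(data, model, sigma):
    return sum(((d - m) / s) ** 2 for d, m, s in zip(data, model, sigma))

data  = [1.00, 0.95, 0.80]   # hypothetical measured nuclear ratios
model = [0.98, 0.96, 0.84]   # hypothetical fitted values
sigma = [0.05, 0.05, 0.05]   # hypothetical uncertainties

x2 = chi2(data, model, sigma)
dof = len(data) - 1          # e.g. one free fit parameter
```

A real analysis minimizes this over the fit parameters (16 in the paper) across all 514 data points; χ²/d.o.f. near 1 indicates a statistically acceptable fit.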
Drug-target interaction prediction: databases, web servers and computational models.
Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong
2016-07-01
Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging even nowadays. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based methods, machine learning-based methods and so on. Specifically, for the machine learning-based methods, much attention is paid to supervised and semi-supervised models, which differ essentially in their adoption of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss the future directions of network-based drug discovery and the network approach for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation and validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.
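A minimal flavour of the network-based family surveyed above is the weighted-profile idea: score an untested drug-target pair by the known interactions of chemically similar drugs. All drug names, target names, and similarity values below are invented for illustration.

```python
# Toy similarity-weighted scorer for an unseen pair (drugC, target):
# average the known interaction labels of drugC's neighbours, weighted
# by chemical similarity.  Everything here is hypothetical.

known = {                 # 1 = confirmed interaction, 0 = tested negative
    ("drugA", "kinase1"): 1,
    ("drugB", "kinase1"): 0,
}
similarity = {            # chemical similarity of each drug to "drugC"
    "drugA": 0.9,
    "drugB": 0.3,
}

def score(target):
    """Weighted-profile score for the pair (drugC, target)."""
    num = sum(similarity[d] * y for (d, t), y in known.items() if t == target)
    den = sum(similarity[d] for (d, t), y in known.items() if t == target)
    return num / den

s = score("kinase1")
```

Scores near 1 nominate the pair for experimental validation; the machine-learning methods in the review replace this fixed rule with learned models, and differ chiefly in how they treat the untested (presumed-negative) pairs.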
Multiple-model nonlinear filtering for low-signal ground target applications
NASA Astrophysics Data System (ADS)
Kreucher, Chris M.; Kastella, Keith D.
2001-08-01
This paper describes the design and implementation of multiple model nonlinear filters (MMNLF) for ground target tracking using Ground Moving Target Indicator (GMTI) radar measurements. The MMNLF is based on a general theory of hybrid continuous-discrete dynamics. The motion model state is discrete and its stochastic dynamics are a continuous-time Markov chain. For each motion model, the continuum dynamics are a continuous-state Markov process described here by appropriate Fokker-Planck equations. This is illustrated here by a specific two-model MMNLF in which one motion model incorporates terrain, road, and vehicle motion constraints derived from battlefield observations. The second model is slow diffusion in speed and heading. The target state conditional probability density is discretized on a moving grid and recursively updated with sensor measurements via Bayes' formula. The conditional density is time updated between sensor measurements using Alternating Direction Implicit (ADI) finite difference methods. In simulation testing against low signal-to-clutter-plus-noise ratio (SNCR) targets, the MMNLF is able to maintain track in situations where single model filters based on either of the component models fail. Potential applications of this work include detection and tracking of foliage-obscured moving targets.
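The measurement-update half of the recursion above is just Bayes' formula applied pointwise on the grid. Here is a 1-D toy with an invented five-cell grid; the paper's ADI time update, moving grid, and terrain-constrained motion models are all omitted.

```python
# Grid-based Bayes measurement update: multiply the prior density by
# the measurement likelihood cell-by-cell, then renormalise.

def bayes_update(prior, likelihood):
    """Pointwise product followed by renormalisation."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

prior = [0.2] * 5                       # uniform over 5 grid cells
likelihood = [0.1, 0.2, 0.4, 0.2, 0.1]  # sensor return peaked at cell 2
posterior = bayes_update(prior, likelihood)
```

In the full filter this update alternates with a Fokker-Planck time update per motion model, which is what lets weak, frequent measurements accumulate into a track at low SNCR.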
Knowledge-based approach for generating target system specifications from a domain model
NASA Technical Reports Server (NTRS)
Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan
1992-01-01
Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.
Model emulates human smooth pursuit system producing zero-latency target tracking.
Bahill, A T; McDonald, J D
1983-01-01
Humans can overcome the 150 ms time delay of the smooth pursuit eye movement system and track smoothly moving visual targets with zero-latency. Our target-selective adaptive control model can also overcome an inherent time delay and produce zero-latency tracking. No other model or man-made system can do this. Our model is physically realizable and physiologically realistic. The technique used in our model should be useful for analyzing other time-delay systems, such as man-machine systems and robots.
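The abstract does not spell out the authors' target-selective adaptive control model, but the basic possibility of cancelling a known delay can be illustrated generically: for a constant-velocity target, extrapolating the delayed measurement forward by the delay removes the lag entirely. The numbers below are invented; this is not the paper's model.

```python
# Generic delay compensation (NOT the authors' model): predict the
# target's current position from a delayed measurement plus an
# estimated velocity times the known delay.

DELAY = 0.150  # smooth-pursuit latency in seconds, as cited above

def target(t):
    """Hypothetical ramp target moving at 10 deg/s."""
    return 10.0 * t

def tracker(t, velocity_estimate=10.0):
    delayed = target(t - DELAY)                  # what the retina reports
    return delayed + velocity_estimate * DELAY   # extrapolate to "now"

error = target(1.0) - tracker(1.0)               # zero for a correct estimate
```

The hard part, which the paper's adaptive controller addresses, is obtaining and maintaining the velocity estimate for arbitrary smooth motions; with a wrong estimate the residual lag is (true velocity minus estimate) times the delay.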
Lai, Massimo; Brun, Denis; Edelstein, Stuart J.; Le Novère, Nicolas
2015-01-01
Calmodulin is a calcium-binding protein ubiquitous in eukaryotic cells, involved in numerous calcium-regulated biological phenomena, such as synaptic plasticity, muscle contraction, the cell cycle, and circadian rhythms. It exhibits a characteristic dumbbell shape, with two globular domains (the N- and C-terminal lobes) joined by a linker region. Each lobe can take alternative conformations, affected by the binding of calcium and target proteins. Calmodulin displays considerable functional flexibility due to its capability to bind different targets, often in a tissue-specific fashion. In various specific physiological environments (e.g. skeletal muscle, neuronal dendritic spines) several targets compete for the same calmodulin pool, regulating its availability and affinity for calcium. In this work, we sought to understand the general principles underlying calmodulin modulation by different target proteins, and to account for simultaneous effects of multiple competing targets, thus enabling a more realistic simulation of calmodulin-dependent pathways. We built a mechanistic allosteric model of calmodulin, based on a hemiconcerted framework: each calmodulin lobe can exist in two conformations in thermodynamic equilibrium, with different affinities for calcium and different affinities for each target. Each lobe was allowed to switch conformation on its own. The model was parameterised and validated against experimental data from the literature. In spite of its simplicity, a two-state allosteric model was able to satisfactorily represent several sets of experiments, in particular the binding of calcium on intact and truncated calmodulin and the effect of different skMLCK peptides on calmodulin's saturation curve. The model can also be readily extended to include multiple targets. We show that some targets stabilise the low calcium affinity T state while others stabilise the high affinity R state. Most of the effects produced by calmodulin targets can be explained as modulation
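The hemiconcerted idea, each lobe switching between a T and an R conformation with different calcium affinities, can be written down as a minimal MWC-style partition function for one two-site lobe. The constants L, KR, and KT below are invented for illustration and are not the paper's fitted parameters.

```python
# Two-state (MWC-style) sketch of one calmodulin lobe with two
# identical calcium sites.  L is the T/R equilibrium constant at zero
# calcium; KR and KT are per-site dissociation constants (molar).
# All numerical values are hypothetical.

def fraction_R(ca, L=1000.0, KR=1e-6, KT=1e-4):
    """Population of the high-affinity R conformation at free [Ca2+] = ca."""
    r = (1.0 + ca / KR) ** 2   # R-state binding polynomial
    t = (1.0 + ca / KT) ** 2   # T-state binding polynomial
    return r / (r + L * t)

low, high = fraction_R(1e-8), fraction_R(1e-4)   # resting vs elevated Ca2+
```

A target that binds preferentially to R effectively lowers L, shifting this curve leftwards, which is exactly the kind of target-dependent modulation of calcium affinity the abstract describes.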
A Target Model Construction Algorithm for Robust Real-Time Mean-Shift Tracking
Choi, Yoo-Joo; Kim, Yong-Goo
2014-01-01
Mean-shift tracking has gained increasing interest, aided by its feasibility for real-time, reliable tracker implementation. In order to reduce background clutter interference in mean-shift object tracking, this paper proposes a novel indicator function generation method. The proposed method takes advantage of two ‘a priori’ knowledge elements, which are inherent to a kernel support for initializing a target model. Based on the assured background labels, a gradient-based label propagation is performed, resulting in a number of objects differentiated from the background. Then the proposed region growing scheme picks up the one largest target object near the center of the kernel support. The grown object region constitutes the proposed indicator function, and this allows an exact target model construction for robust mean-shift tracking. Simulation results demonstrate that the proposed exact target model can significantly enhance the robustness as well as the accuracy of mean-shift object tracking. PMID:25372619
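The role of the indicator function can be shown with a simplified target-model construction: a kernel-weighted colour histogram over the kernel support, in which background-labelled pixels are zeroed out. The pixel bins, kernel weights, and indicator values below are invented; the paper derives its indicator from label propagation and region growing rather than assuming it.

```python
# Simplified mean-shift target model: q[u] is the normalised sum of
# kernel weights of foreground-labelled pixels whose colour falls in
# histogram bin u.

def target_model(bins, pixel_bins, kernel_w, indicator):
    q = [0.0] * bins
    for b, w, keep in zip(pixel_bins, kernel_w, indicator):
        if keep:               # indicator function: 1 = target, 0 = background
            q[b] += w
    z = sum(q)
    return [v / z for v in q]

q = target_model(
    bins=3,
    pixel_bins=[0, 1, 1, 2],        # colour bin of each pixel (hypothetical)
    kernel_w=[0.1, 0.4, 0.3, 0.2],  # kernel weights, larger near the centre
    indicator=[1, 1, 1, 0],         # last pixel labelled as background
)
```

Without the indicator, the background pixel's colour (bin 2) would leak into the model and bias the mean-shift similarity surface toward clutter, which is precisely the failure mode the paper targets.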
Designing and modeling a centrifugal microfluidic device to separate target blood cells
NASA Astrophysics Data System (ADS)
Shamloo, Amir; Selahi, AmirAli; Madadelahi, Masoud
2016-03-01
The objective of this study is to design a novel and efficient portable lab-on-a-CD (LOCD) microfluidic device for the separation of specific cells (target cells) using magnetic beads. In this study, results are shown for neutrophils as the target cells; other kinds of target cells can be separated in a similar approach. The designed microfluidic device can be utilized as a point-of-care system for neutrophil detection. This microfluidic system employs centrifugal and magnetic forces for separation. After model validation against experimental data in the literature (so that it may be used as a design tool for developing centrifugo-magnetophoretic devices), two models are presented for the separation of target cells using magnetic beads. The first model consists of one container in the inlet section and two containers at the outlets. Initially, the inlet container is filled with a diluted blood sample, a mixture of red blood cells (RBCs) plus neutrophils attached to magnetic beads. It is shown that by using centrifugal and magnetic forces, this model can separate all neutrophils with a recovery factor of ~100%. In the second model, because of the excess of magnetic beads in typical experimental analyses (added to ensure that all target cells are attached to them), the geometry is improved by adding a third outlet for the free magnetic beads. It is shown that at an angular velocity of 45 rad s^-1, a recovery factor of 100% is achievable for RBCs, free magnetic beads and neutrophils as target cells.
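The dominant radial driving force in such a device is the centrifugal term F = m ω² r. A back-of-envelope evaluation is below; the bead-complex mass and radial position are invented placeholders, while the angular velocity of 45 rad/s is the value quoted in the abstract.

```python
# Back-of-envelope centrifugal force on a cell-bead complex riding the
# spinning disc.  Mass and radius below are hypothetical; 45 rad/s is
# the angular velocity quoted in the abstract.

def centrifugal_force(mass_kg, omega_rad_s, radius_m):
    """F = m * omega^2 * r, directed radially outward on the disc."""
    return mass_kg * omega_rad_s**2 * radius_m

F = centrifugal_force(mass_kg=1e-12, omega_rad_s=45.0, radius_m=0.03)
```

In the full model this outward force competes with Stokes drag and, for bead-bound cells, with the magnetophoretic force from the on-disc magnets, which is what deflects the target cells into their dedicated outlet.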
Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation
Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.
2016-01-01
Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, and thus paving the way for improved selective target localization using DBS. PMID:27445709
Development of a target-site based regional frequency model using historical information
NASA Astrophysics Data System (ADS)
Hamdi, Yasser; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent
2016-04-01
Nuclear power facilities in France were designed to withstand extreme environmental conditions with a very low probability of failure. Nevertheless, some exceptional surges considered as outliers are not properly addressed by classical frequency analysis models. If available data at the site of interest (target site) are sufficiently complete over a long period and not characterized by the presence of an outlier, at-site frequency analysis can be used to estimate quantiles with acceptable uncertainties. Otherwise, regional and historical information (HI) may be used to mitigate the lack of data and the influence of the outlier by increasing its representativeness in the sample. Several models have been proposed over recent years for regional extreme surge frequency analysis in France to take these outliers into account. However, these models do not give a specific weight to the target site and cannot take HI into account. The objective of the present work is to develop a regional frequency model (RFM) centered on a target site and using HI. The neighborhood between sites is measured by a degree of physical and statistical dependence between observations (with a prior confidence level). Unlike existing models, the region obtained around the target site (constituting the neighboring sites) slides from one target site to another; in other words, the developed model assigns a region to each target site. The idea of a frequency model favoring target sites, with regions moving around these target sites, is the key original point of the developed model. A related issue is the estimation of missing and/or ungauged surges at target sites from those at gauged potential neighboring sites; a multiple linear regression (MLR) is used here, and the approach can be extended to other reconstruction models. MLR analysis can be considered conclusive only if available observations at neighboring sites are informative enough
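The MLR reconstruction step can be illustrated on synthetic data. Everything below (site count, coefficients, noise level) is invented for the sketch and is not the authors' dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual-maximum surges (metres) at 3 gauged neighboring sites,
# with the target site a noisy linear combination of its neighbors.
n_years = 40
neighbors = rng.gamma(shape=4.0, scale=0.3, size=(n_years, 3))
true_coef = np.array([0.5, 0.3, 0.2])
target = neighbors @ true_coef + rng.normal(0.0, 0.02, n_years)

# Fit the MLR on years where the target site was gauged...
X = np.column_stack([np.ones(n_years), neighbors])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

# ...then reconstruct a missing year at the target site from its neighbors.
neighbor_obs = np.array([1.2, 0.9, 1.1])        # surges at the 3 neighbors
reconstructed = coef[0] + neighbor_obs @ coef[1:]
```

In practice the regression would only be trusted when, as the abstract notes, the neighboring-site observations are informative enough.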
Jung, Joe; Longcope, Donald B.; Tabbara, Mazen R.
1999-06-01
A procedure has been developed to represent the loading on a penetrator and its motion during oblique penetration into geologic media. The penetrator is modeled with the explicit-dynamics finite element computer program PRONTO 3D, and the coupled pressure on the penetrator is given in a new loading option based on a separate cavity expansion (CE) solution that accounts for the pressure reduction from a nearby target free surface. The free-surface influence distance is selected in a predictive manner by considering the pressure to expand a spherical cavity in a finite-radius sphere of the target material. The CE/PRONTO 3D procedure allows a detailed description of the penetrator for predicting shock environments or structural failure during the entire penetration event and is sufficiently rapid to be used in design optimization. It has been evaluated by comparing its results with data from two field tests of a full-scale penetrator into frozen soil at impact angles of 49.6 and 52.5 degrees from the horizontal. The measured penetrator rotations were 24 and 22 degrees, respectively. In the simulation, the rotation was 21 degrees and predominantly resulted from the pressure reduction of the free surface. Good agreement was also found for the penetration depth and for the axial and lateral accelerations at two locations in the penetrator.
Jung, Joe; Longcope, Donald B.; Tabbara, Mazen R.
1999-05-03
A procedure has been developed to represent the loading on a penetrator and its motion during oblique penetration into geologic media. The penetrator is modeled with the explicit-dynamics finite element computer program PRONTO 3D, and the coupled pressure on the penetrator is given in a new loading option based on a separate cavity expansion (CE) solution that accounts for the pressure reduction from a nearby target free surface. The free-surface influence distance is selected in a predictive manner by considering the pressure to expand a spherical cavity in a finite-radius sphere of the target material. The CE/PRONTO 3D procedure allows a detailed description of the penetrator for predicting shock environments or structural failure during the entire penetration event and is sufficiently rapid to be used in design optimization. It has been evaluated by comparing its results with data from two field tests of a full-scale penetrator into frozen soil at impact angles of 49.6 and 52.5 degrees from the horizontal. The measured penetrator rotations were 24 and 22 degrees, respectively. In the simulation, the rotation was 21 degrees and predominantly resulted from the pressure reduction of the free surface. Good agreement was also found for the penetration depth and for the axial and lateral accelerations at two locations in the penetrator.
ERIC Educational Resources Information Center
Gabdulchakov, Valerian F.
2016-01-01
The subject of the study is the conceptual basis for constructing a target model of interaction between a university and its region; hence the topic of the article, "The target model of strategic interaction between the university and the region in the field of education." The objective was to design a target model of this…
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.
2015-03-01
The expression "multiple parton interactions" (MPI) denotes a conjectured QCD mechanism representing contributions from secondary (semi)hard parton scattering to the transverse azimuth region (TR) of jet-triggered p-p collisions. MPI is an object of underlying-event (UE) studies that consider variation of TR n_ch or p_t yields relative to a trigger condition (leading hadron or jet p_t). An alternative approach is 2D trigger-associated (TA) correlations on hadron transverse momentum p_t or rapidity y_t in which all hadrons from all p-p events are included. Based on a two-component (soft+hard) model (TCM) of TA correlations, a jet-related TA hard component is isolated. Contributions to the hard component from the triggered dijet and from secondary dijets (MPI) can be distinguished, including their azimuth dependence relative to the trigger direction. Measured e+e- and p-pbar fragmentation functions and a minimum-bias jet spectrum from 200 GeV p-pbar collisions are convoluted to predict the 2D hard component of TA correlations as a function of p-p collision multiplicity. The agreement between QCD predictions and TA correlation data is quantitative, confirming a dijet interpretation for the TCM hard component. The TA azimuth dependence is inconsistent with conventional UE assumptions.
NASA Astrophysics Data System (ADS)
Herschtal, A.; Foroudi, F.; Greer, P. B.; Eade, T. N.; Hindson, B. R.; Kron, T.
2012-05-01
Early approaches to characterizing errors in target displacement during a fractionated course of radiotherapy assumed that the underlying fraction-to-fraction variability in target displacement, known as the ‘treatment error’ or ‘random error’, could be regarded as constant across patients. More recent approaches have modelled target displacement allowing for differences in random error between patients. However, until recently it has not been feasible to compare the goodness of fit of alternate models of random error rigorously. This is because the large volumes of real patient data necessary to distinguish between alternative models have only very recently become available. This work uses real-world displacement data collected from 365 patients undergoing radical radiotherapy for prostate cancer to compare five candidate models for target displacement. The simplest model assumes constant random errors across patients, while other models allow for random errors that vary according to one of several candidate distributions. Bayesian statistics and Markov Chain Monte Carlo simulation of the model parameters are used to compare model goodness of fit. We conclude that modelling the random error as inverse gamma distributed provides a clearly superior fit over all alternatives considered. This finding can facilitate more accurate margin recipes and correction strategies.
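A useful property of the winning model class is that if each patient's random-error variance is inverse-gamma distributed, the pooled displacements are marginally Student-t distributed. The sketch below uses that fact to compare the fit of a constant-random-error (pooled normal) model against the inverse-gamma model on synthetic data; all parameter values are invented, and this is an illustration of the model class, not the paper's Bayesian/MCMC analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic cohort: each patient's random-error variance (mm^2) follows
# an inverse-gamma distribution (shape a, scale b are invented numbers).
a, b = 3.0, 4.0
n_patients, n_fractions = 500, 30
sigmas = np.sqrt(stats.invgamma.rvs(a, scale=b, size=n_patients,
                                    random_state=rng))
shifts = rng.normal(0.0, sigmas[:, None], size=(n_patients, n_fractions))
pooled = shifts.ravel()

# Model 1: one constant random error for every patient -> pooled Normal.
ll_normal = stats.norm.logpdf(pooled, scale=pooled.std()).sum()

# Model 2: inverse-gamma patient variances; marginally, the pooled shifts
# follow a scaled Student-t with df = 2a and scale = sqrt(b/a).
ll_t = stats.t.logpdf(pooled, df=2 * a, scale=np.sqrt(b / a)).sum()
```

On data generated this way, the heavy-tailed Student-t marginal fits better, mirroring the paper's conclusion that inverse-gamma-distributed random errors are clearly superior to a constant random error.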
Pentameric models as alternative molecular targets for the design of new antiaggregant agents.
Barrera Guisasola, Exequiel E; Gutierrez, Lucas J; Andujar, Sebastián A; Angelina, Emilio; Rodríguez, Ana M; Enriz, Ricardo D
2016-01-01
Structure-based drug design has been an extremely useful technique for the discovery and development of new therapeutic agents in various biological systems. In the case of AD, this approach has been difficult to implement; among several causes, the main problem might be the lack of a specific, stable, and reliable molecular target. In this paper the results obtained using a pentameric amyloid beta (Aβ) model as a molecular target are discussed. Our MD simulations have shown that this system is relatively structured and stable, displaying only slight conformational flexibility during 2.0 μs of simulation time. This study allowed us to distinguish characteristic structural features in specific regions of the pentamer which should be taken into account when choosing this model as a molecular target. This represents a clear advantage compared to the monomer or dimer models, which are highly flexible structures with large numbers of possible conformers. Using this pentameric model we performed two types of studies usually carried out on a molecular target: a virtual screening and the structure-based design of new mimetic peptides with antiaggregant properties. Our results indicate that this pentameric model might be a good molecular target for these particular molecular modeling studies. Details about the predictive power of our virtual screening, as well as about the molecular interactions that stabilize the mimetic peptide-pentamer Aβ complexes, are discussed in this paper.
An in vitro model of mesenchymal stem cell targeting using magnetic particle labelling.
El Haj, Alicia J; Glossop, John R; Sura, Harpal S; Lees, Martin R; Hu, Bin; Wolbank, Susanne; van Griensven, Martijn; Redl, Heinz; Dobson, Jon
2015-06-01
The specific targeting of cells to sites of tissue damage in vivo is a major challenge precluding the success of stem cell-based therapies. Magnetic particle-based targeting may provide a solution. Our aim was to provide a model system to study the trapping and potential targeting of human mesenchymal stem cells (MSCs) during in vitro fluid flow, which ultimately will inform cell targeting in vivo. In this system magnet arrays were used to trap superparamagnetic iron oxide particle-doped MSCs. The in vitro experiments demonstrated successful cell trapping, where the volume of cells trapped increased with magnetic particle concentration and decreased with increasing flow rate. Analysis of gene expression revealed significant increases in COL1A2 and SOX9. Using principles established in vitro, a proof-of-concept in vivo experiment demonstrated that magnetic particle-doped, luciferase-expressing MSCs were trapped by an implanted magnet in a subcutaneous wound model in nude mice. Our results demonstrate the effectiveness of using an in vitro model for testing superparamagnetic iron oxide particles to develop successful MSC targeting strategies during fluid flow, which ultimately can be translated to in vivo targeted delivery of cells via the circulation in a variety of tissue-repair models.
3D modelling of the electromagnetic response of geophysical targets using the FDTD method
Debroux, P.S.
1996-05-01
A publicly available and maintained electromagnetic finite-difference time domain (FDTD) code has been applied to the forward modelling of the response of 1D, 2D and 3D geophysical targets to a vertical magnetic dipole excitation. The FDTD method is used to analyze target responses in the 1 MHz to 100 MHz range, where either conduction or displacement currents may have the controlling role. The response of the geophysical target to the excitation is presented as changes in the magnetic field ellipticity. The results of the FDTD code compare favorably with previously published integral equation solutions of the response of 1D targets, and FDTD models calculated with different finite-difference cell sizes are compared to find the effect of model discretization on the solution. The discretization errors, calculated as absolute error in ellipticity, are presented for the different ground geometry models considered, and are, for the most part, below 10% of the integral equation solutions. Finally, the FDTD code is used to calculate the magnetic ellipticity response of a 2D survey and a 3D sounding of complicated geophysical targets. The responses of these 2D and 3D targets are too complicated to be verified with integral equation solutions, but they show the proper low- and high-frequency behavior.
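The core of any FDTD solver is the leapfrog update of interleaved electric and magnetic fields. A minimal 1D vacuum sketch (normalized units, Courant number 1, simple conducting ends, invented source parameters) conveys the scheme; a geophysical code adds 3D geometry, conductive and dielectric media, and the magnetic dipole source:

```python
import numpy as np

# 1-D vacuum FDTD, normalized units, Courant number c*dt/dx = 1.
n_cells, n_steps = 400, 300
ez = np.zeros(n_cells)        # E field at integer grid points
hy = np.zeros(n_cells - 1)    # H field at half-integer points

for t in range(n_steps):
    hy += ez[1:] - ez[:-1]                          # leapfrog H update
    ez[1:-1] += hy[1:] - hy[:-1]                    # leapfrog E update
    ez[50] += np.exp(-((t - 30.0) / 10.0) ** 2)     # soft Gaussian source

peak = float(np.abs(ez).max())   # bounded pulse propagating both ways
```

At the magic time step used here the scheme is stable and dispersionless; real codes choose the cell size against the shortest wavelength, which is exactly the discretization-error trade-off studied in the abstract.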
Human search for a target on a textured background is consistent with a stochastic model.
Clarke, Alasdair D F; Green, Patrick; Chantler, Mike J; Hunt, Amelia R
2016-05-01
Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks. PMID:27145531
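A memoryless stochastic searcher of the kind described can be sketched in a few lines: detection probability decays with the target's eccentricity from the current fixation, and missed detections trigger a new fixation drawn from a central-bias distribution. All numbers here are invented for illustration; the paper's model uses a measured visibility map and the empirical distribution of human saccades.

```python
import numpy as np

rng = np.random.default_rng(2)

def fixations_to_find(target, p_fovea=0.95, falloff=3.0, max_fix=200):
    """Count fixations until a memoryless searcher detects the target:
    detection probability decays with the fixation-target distance, and
    each new fixation is sampled from a central-bias distribution."""
    for n in range(1, max_fix + 1):
        fixation = rng.normal(0.0, 4.0, size=2)   # central bias (deg)
        d = np.linalg.norm(fixation - target)
        if rng.random() < p_fovea * np.exp(-d / falloff):
            return n
    return max_fix

counts = [fixations_to_find(rng.uniform(-8.0, 8.0, size=2))
          for _ in range(500)]
mean_fixations = float(np.mean(counts))
```

Because the searcher keeps no memory of where it has looked, its efficiency comes entirely from the biases of the fixation distribution, which is the paper's central point.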
An evaluation of the PENCURV model for penetration events in complex targets.
Broyles, Todd P.
2004-07-01
Three complex target penetration scenarios are run with a model developed by the U. S. Army Engineer Waterways Experiment Station, called PENCURV. The results are compared with both test data and a Zapotec model to evaluate PENCURV's suitability for conducting broad-based scoping studies on a variety of targets to give first order solutions to the problem of G-loading. Under many circumstances, the simpler, empirically based PENCURV model compares well with test data and the much more sophisticated Zapotec model. The results suggest that, if PENCURV were enhanced to include rotational acceleration in its G-loading computations, it would provide much more accurate solutions for a wide variety of penetration problems. Data from an improved PENCURV program would allow for faster, lower cost optimization of targets, test parameters and penetration bodies as Sandia National Laboratories continues in its evaluation of the survivability requirements for earth penetrating sensors and weapons.
ERIC Educational Resources Information Center
Thompson, Bruce
2006-01-01
Value-added models, which rate schools for effectiveness while taking into account the poverty and other socioeconomic status of the students, are generating increased interest. This paper describes the use of one such model to evaluate whether school ratings changed when three new programs were introduced: the "Target Teach" curriculum alignment,…
Dynamical next-to-next-to-leading order parton distributions
Jimenez-Delgado, P.; Reya, E.
2009-04-01
Utilizing recent deep inelastic scattering measurements (σ_r, F_{2,3,L}) and data on hadronic dilepton production we determine at next-to-next-to-leading order (NNLO, 3-loop) of QCD the dynamical parton distributions of the nucleon generated radiatively from valencelike positive input distributions at an optimally chosen low resolution scale (Q_0^2 < 1 GeV^2). These are compared with 'standard' NNLO distributions generated from positive input distributions at some fixed and higher resolution scale (Q_0^2 > 1 GeV^2). Although the NNLO corrections imply in both approaches an improved value of χ^2, typically χ^2_NNLO ≈ 0.9 χ^2_NLO, present deep inelastic scattering data are still not sufficiently accurate to distinguish between NLO results and the minute NNLO effects of a few percent, despite the fact that the dynamical NNLO uncertainties are somewhat smaller than the NLO ones and both are, as expected, smaller than those of their standard counterparts. The dynamical predictions for F_L(x,Q^2) become perturbatively stable already at Q^2 = 2-3 GeV^2, where precision measurements could even delineate NNLO effects in the very small-x region. This is in contrast to the common standard approach, but NNLO/NLO differences are here less distinguishable due to the larger 1σ uncertainty bands. Within the dynamical approach we obtain α_s(M_Z^2) = 0.1124 ± 0.0020, whereas the somewhat less constrained standard fit gives α_s(M_Z^2) = 0.1158 ± 0.0035.
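For orientation on the quoted α_s values, the one-loop running of the strong coupling shows how the coupling grows toward the low input scales of the dynamical approach. This is a textbook approximation, far cruder than the 3-loop (NNLO) evolution used in the fits, and holding nf = 5 at all scales is a further simplification:

```python
import math

def alpha_s_one_loop(Q2, alpha_mz=0.1124, mz=91.1876, nf=5):
    """One-loop running of alpha_s from its value at the Z mass."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(Q2 / mz ** 2))

a_mz = alpha_s_one_loop(91.1876 ** 2)   # recovers the input by construction
a_low = alpha_s_one_loop(2.0)           # Q^2 = 2 GeV^2: much larger coupling
```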
A model for sonar interrogation of complex bottom and surface targets in shallow-water waveguides.
Giddings, Thomas E; Shirron, Joseph J
2008-04-01
Many problems of current interest in underwater acoustics involve low-frequency broadband sonar interrogation of objects near the sea surface or sea floor of a shallow-water environment. When the target is situated near the upper or lower boundary of the water column the acoustic interactions with the target objects are complicated by interactions with the nearby free surface or fluid-sediment interface, respectively. A practical numerical method to address such situations is presented. The model provides high levels of accuracy with the flexibility to handle complex, three-dimensional targets in range-independent environments. The model is demonstrated using several bottom target scenarios, with and without locally undulating seabeds. The impact of interface and boundary interactions is considered with an eye toward using the sonar return signal as the basis for acoustic imaging or spectral classification.
Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh
2016-11-01
An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of the peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology-modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate the root mean square deviation (RMSD) and radius of gyration (Rg) of the MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the similarity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first account of 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.
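The two trajectory diagnostics used for validation are straightforward to compute from coordinates. A minimal NumPy sketch (assuming conformations are already superposed; a full treatment would first apply a Kabsch alignment, as MD analysis packages do):

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Mass-weighted radius of gyration of an (N, 3) coordinate array."""
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        masses = np.ones(len(coords))
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = np.sum((coords - com) ** 2, axis=1)
    return float(np.sqrt(np.average(sq_dist, weights=masses)))

def rmsd(a, b):
    """RMSD between two (N, 3) conformations, assumed pre-superposed."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Toy check: a unit octahedron has Rg = 1; shifting it by 1 gives RMSD = 1.
octa = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
rg = radius_of_gyration(octa)
shift_rmsd = rmsd(octa, octa + np.array([1.0, 0.0, 0.0]))
```

Tracking these two quantities over a trajectory gives exactly the stability/compactness comparison between targets and templates described above.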
Onunka, Chiemela; Nnadozie, Remigius Chidozie
2013-12-01
The performance of frequency modulated continuous wave (FMCW) radar in tracking targets is presented and analysed. Obstacle detection, target tracking, and radar target tracking performance models were developed and used to investigate and propose ways of improving the autonomous motion of an unmanned surface vehicle (USV). Possible factors affecting the performance of FMCW radar in tracking targets are discussed and analysed. PMID:23853743
Advanced EMI models for survey data processing: targets detection and classification
NASA Astrophysics Data System (ADS)
Shubitidze, F.; Barrowes, B. E.; Wang, Yinlin; Shamatava, Irma; Sigman, J. B.; O'Neill, K.
2016-05-01
This paper describes the procedures and approaches our team took to demonstrate the capability of advanced electromagnetic induction (EMI) forward and inverse models to perform subsurface metallic object picking and classification at live-UXO sites from dynamic data sets. Over the past seven years, blind classification tests at live-UXO sites have revealed two main challenges: 1) consistent selection of targets for cued interrogation (e.g., in the recent SWPG2 study, two independent performers that processed the same MetalMapper dynamic data picked different targets for cued interrogation); and 2) positioning of the cued sensor close enough to the actual cued target to accurately perform classification (particularly when multiple targets or magnetic soils are present). To overcome these problems, in this paper we introduce an innovative and robust approach for picking and classifying subsurface metallic targets from dynamic data sets. This approach first inverts for target locations and polarizabilities from each dynamic data point, then clusters the inverted locations and defines each cluster as a target/source. Finally, the method uses the extracted polarizabilities to classify UXO versus non-UXO items. The studies were done for the 2x2 TEMTADS dynamic data set collected at Camp Hale, CO. The target picking and classification results are illustrated and validated against ground truth.
Albright, Brian J; Yin, Lin; Hegelich, Bjoorn M; Bowers, Kevin J; Huang, Chengkun; Fernandez, Juan C; Flippo, Kirk A; Gaillard, Sandrine; Kwan, Thomas J T; Henig, Andreas; Yan, Xue Q; Tajima, Toshi; Habs, Dieter
2009-01-01
A simple model has been derived for the expansion of a thin (up to 100s of nm thickness), solid-density target driven by an ultraintense laser. In this regime, new ion acceleration mechanisms, such as the Break-Out Afterburner (BOA) [1], emerge with the potential to dramatically improve the energy, efficiency, and energy spread of laser-driven ion beams. Such beams have been proposed [2] as drivers for fast ignition inertial confinement fusion [3]. Analysis of kinetic simulations of the BOA shows two distinct times that bound the period of enhanced acceleration: t1, when the target becomes relativistically transparent to the laser, and t2, when the target becomes classically underdense and the enhanced acceleration terminates. A simple dynamical model for target expansion has been derived that contains both the early, one-dimensional (1D) expansion of the target and the three-dimensional (3D) expansion of the plasma at late times. The model assumes that expansion is slab-like at the instantaneous ion sound speed and requires as input the target composition, laser intensity, laser spot area, and the efficiency of laser absorption into electron thermal energy.
Masked target transform volume clutter metric for human observer visual search modeling
NASA Astrophysics Data System (ADS)
Moore, Richard Kirk
The Night Vision and Electronic Sensors Directorate (NVESD) develops an imaging system performance model to aid in the design and comparison of imaging systems for military use. It is intended to approximate visual task performance for a typical human observer with an imaging system of specified optical, electrical, physical, and environmental parameters. When modeling search performance, the model currently uses only target size and target-to-background contrast to describe a scene. The presence or absence of other non-target objects and textures in the scene also affect search performance, but NVESD's targeting task performance metric based time limited search model (TTP/TLS) does not currently account for them explicitly. Non-target objects in a scene that impact search performance are referred to as clutter. A universally accepted mathematical definition of clutter does not yet exist. Researchers have proposed a number of clutter metrics based on very different methods, but none account for display geometry or the varying spatial frequency sensitivity of the human visual system. After a review of the NVESD search model, properties of the human visual system, and a literature review of clutter metrics, the new masked target transform volume clutter metric will be presented. Next the results of an experiment designed to show performance variation due to clutter alone will be presented. Then, the results of three separate perception experiments using real or realistic search imagery will be used to show that the new clutter metric better models human observer search performance than the current NVESD model or any of the reviewed clutter metrics.
Nachtmann, O.
2014-11-15
We review ideas on the structure of the QCD vacuum which served as motivation for the discussion of various non-standard QCD effects in high-energy reactions in articles from 1984 to 1995. These effects include, in particular, transverse-momentum and spin correlations in the Drell–Yan process and soft photon production in hadron–hadron collisions. We discuss the relation of the approach introduced in the above-mentioned articles to the approach, developed later, using transverse-momentum-dependent parton distributions (TMDs). The latter approach is a special case of our more general one, which allows for parton entanglement in high-energy reactions. We discuss signatures of parton entanglement in the Drell–Yan reaction. Also for Higgs-boson production in pp collisions via gluon–gluon annihilation, effects of entanglement of the two gluons are discussed and are found to be potentially important. These effects can be looked for in the current LHC experiments. In our opinion, studying parton-entanglement effects in high-energy reactions is, on the one hand, very worthwhile by itself and, on the other hand, allows one to perform quantitative tests of standard factorisation assumptions. Clearly, the experimental observation of parton-entanglement effects in the Drell–Yan reaction and/or in Higgs-boson production would have a great impact on our understanding of how QCD works in high-energy collisions.
Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco
2004-04-01
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors. PMID:14627663
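The two timing strategies contrasted here can be written down directly: extrapolating the measured speed versus assuming gravitational acceleration via an internal model. The numbers below are illustrative only:

```python
import math

def ttc_constant_velocity(h, v):
    """First-order (kinematic) estimate: extrapolate the measured speed."""
    return h / v

def ttc_gravity_prior(h, v, g=9.81):
    """Internal-model estimate: assume gravity keeps accelerating the
    target, solving h = v*t + 0.5*g*t**2 for the arrival time t."""
    return (-v + math.sqrt(v * v + 2.0 * g * h)) / g

h, v = 1.5, 2.0                        # remaining drop (m), current speed (m/s)
t_kin = ttc_constant_velocity(h, v)    # pure-kinematics arrival time
t_grav = ttc_gravity_prior(h, v)       # shorter: the prior predicts earlier arrival
```

For a target that does not actually accelerate, a gravity prior yields a too-short time-to-contact and hence the premature responses observed in the first series of experiments.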
3D modeling of large targets and clutter utilizing Ka band monopulse SAR
NASA Astrophysics Data System (ADS)
Ray, Jerry A.; Barr, Doug; Shurtz, Ric; Channell, Rob
2006-05-01
The U.S. Army Research, Development and Engineering Command at Redstone Arsenal, Alabama have developed a dual mode, Ka Band Radar and IIR system for the purpose of data collection and tracker algorithm development. The system is comprised of modified MMW and IIR sensors and is mounted in a stabilized ball on a UH-1 helicopter operated by Redstone Technical Test Center. Several missile programs under development require MMW signatures of multiple target and clutter scenes. Traditionally these target signatures have been successfully collected using static radars and targets mounted on a turntable to produce models from ISAR images; clutter scenes have been homogeneously characterized using information on various classes of clutter. However, current and future radar systems require models of many targets too large for turntables, as well as high resolution 3D scattering characteristics of urban and other non-homogenous clutter scenes. In partnership with industry independent research and development (IRAD) activities the U.S. Army RDEC has developed a technique for generating 3D target and clutter models using SAR imaging in the MMW spectrum. The purpose of this presentation is to provide an overview of funded projects and resulting data products with an emphasis on MMW data reduction and analysis, especially the unique 3D modeling capabilities of the monopulse radar flying SAR profiles. Also, a discussion of lessons learned and planned improvements will be presented.
Integrated modeling/analyses of thermal-shock effects in SNS targets
Taleyarkhan, R.P.; Haines, J.
1996-06-01
In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten, uranium, etc. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and interaction with surrounding materials for liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for 'integrated' assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise developed from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike previous separate-effects modeling approaches employed (for evaluating target behavior when subjected to thermal shocks), the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for materials of interest (viz., mercury, tungsten and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure pulse generation due to transient heat deposition, the transport of this shock wave outwards, interaction with surrounding boundaries, feedback to mercury from structures, multi-dimensional reflection patterns, and stress-induced (possible) breakup or fracture.
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
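The accuracy side of such a comparison has a compact signal-detection form. As an illustrative sketch (not the paper's actual equations), the probability of correctly localizing a target among m candidate locations, with independent unit-variance Gaussian responses and target mean d', reduces to a one-dimensional integral:

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(d_prime, m, lo=-8.0, hi=8.0, n=4000):
    """P(target location wins a max-of-m decision): the target response is
    N(d', 1), each of the m-1 distractors is N(0, 1); integrate
    phi(x - d') * Phi(x)**(m - 1) over x by the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(x - d_prime) * Phi(x) ** (m - 1)
    return total * h
```

At d' = 0 the integral reduces analytically to chance performance 1/m, which is a useful sanity check on the numerics.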
Optimized model of oriented-line-target detection using vertical and horizontal filters
NASA Astrophysics Data System (ADS)
Westland, Stephen; Foster, David H.
1995-08-01
A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from three stages: (1) filtering with two classes of anisotropic filters, (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required for providing an accurate description of psychophysical data on orientation increment thresholds.
Point reflector model for the simulation of radar target glint and Doppler phenomena
NASA Astrophysics Data System (ADS)
Grubeck, H.
1995-02-01
This report describes the mathematics of a radar model for the simulation of glint and Doppler phenomena. Glint is an unwanted phenomenon which degrades radar tracking performance. The term denotes a fluctuation of the target direction experienced by a radar tracking a complex target. Doppler shift refers to the frequency change of a radar wave due to reflection from a moving target. It can be used by modern so-called coherent radar systems for velocity determination. A three-dimensional space is modeled, containing a rigid body of point-scattering reflectors (the target) and a measurement point (the radar). The target and the radar can move freely and independently in the space. Their movement is described with a number of coordinate systems, which are presented in this report, together with some simple simulations. A simulation tool is available for interested users, and the purpose of this report is to announce its existence. The program is written in MATLAB Simulink.
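The core of such a point-reflector model can be sketched in a few lines: the radar sees the coherent (complex) sum of the individual scatterer echoes, and the Doppler shift follows from the radial closing speed. This is an illustrative reimplementation, not the report's MATLAB Simulink tool:

```python
import cmath
import math

C = 3.0e8  # speed of light, m/s

def coherent_return(ranges_m, amps, freq_hz):
    """Complex sum of point-scatterer echoes: a scatterer at range R
    contributes a * exp(-j * 4*pi*R / lambda) for the two-way path."""
    lam = C / freq_hz
    return sum(a * cmath.exp(-1j * 4.0 * math.pi * r / lam)
               for r, a in zip(ranges_m, amps))

def doppler_shift(radial_speed, freq_hz):
    """Doppler shift of the echo from a target closing at radial_speed:
    f_d = 2 * v / lambda."""
    return 2.0 * radial_speed * freq_hz / C
```

Two equal scatterers separated by a quarter wavelength in range return opposite-phase echoes and cancel; interference of this kind, varying with target aspect, is what drives glint fluctuations in the apparent target direction.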
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
A Higher Order Perturbative Parton Evolution Toolkit (HOPPET)
NASA Astrophysics Data System (ADS)
Salam, G. P.; Rojo, J.
2009-01-01
This document describes a Fortran 95 package for carrying out DGLAP evolution and other common manipulations of parton distribution functions (PDFs). The PDFs are represented on a grid in x-space so as to avoid limitations on the functional form of input distributions. Good speed and accuracy are obtained through the representation of splitting functions in terms of their convolution with a set of piecewise polynomial basis functions, and Runge-Kutta techniques are used for the evolution in Q. Unpolarised evolution is provided to NNLO, including heavy-quark thresholds in the MS-bar scheme, and longitudinally polarised evolution to NLO. The code is structured so as to provide simple access to the objects representing splitting functions and PDFs, making it possible for a user to extend the facilities already provided. A streamlined interface is also available, facilitating use of the evolution part of the code from F77 and C/C++.
Program summary
Program title: HOPPET
Catalogue identifier: AEBZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License
No. of lines in distributed program, including test data, etc.: 61 001
No. of bytes in distributed program, including test data, etc.: 270 312
Distribution format: tar.gz
Programming language: Fortran 95
Computer: All
Operating system: All
RAM: ≲10 MB
Classification: 11.5
Nature of problem: Solution of the DGLAP evolution equations up to NNLO (NLO) for unpolarised (longitudinally polarised) PDFs, and provision of tools to facilitate manipulation (convolutions, etc.) of PDFs with user-defined coefficient and splitting functions.
Solution method: Representation of PDFs on a grid in x, adaptive integration of splitting functions to reduce them to a discretised form, obtaining fast convolutions that are equivalent to integration with an interpolated form of the PDFs; Runge-Kutta techniques for the evolution in Q.
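The Runge-Kutta strategy used for the Q evolution can be illustrated on a toy observable. In moment space a non-singlet quantity evolves multiplicatively, so an RK4 integration in ln Q^2 can be checked against the closed-form solution; the anomalous dimension GAMMA and the Lambda^2 value below are arbitrary illustrative choices, not HOPPET code:

```python
import math

B0 = 0.61    # one-loop beta coefficient b0 = (33 - 2*nf)/(12*pi) for nf = 5
GAMMA = 3.0  # toy anomalous dimension for a single moment (assumption)

def alpha_s(q2, lam2=0.04):
    """One-loop running coupling: alpha_s = 1 / (b0 * ln(Q^2 / Lambda^2))."""
    return 1.0 / (B0 * math.log(q2 / lam2))

def evolve_moment_rk4(f0, q2_start, q2_end, steps=200):
    """Integrate d f / d ln Q^2 = -(GAMMA / (4*pi)) * alpha_s(Q^2) * f with RK4."""
    t0, t1 = math.log(q2_start), math.log(q2_end)
    h = (t1 - t0) / steps

    def rhs(t, f):
        return -(GAMMA / (4.0 * math.pi)) * alpha_s(math.exp(t)) * f

    f, t = f0, t0
    for _ in range(steps):
        k1 = rhs(t, f)
        k2 = rhs(t + h / 2, f + h * k1 / 2)
        k3 = rhs(t + h / 2, f + h * k2 / 2)
        k4 = rhs(t + h, f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return f

def evolve_moment_exact(f0, q2_start, q2_end):
    """Closed form: f(Q) = f(Q0) * (alpha_s(Q)/alpha_s(Q0))**(GAMMA / (4*pi*b0))."""
    return f0 * (alpha_s(q2_end) / alpha_s(q2_start)) ** (GAMMA / (4.0 * math.pi * B0))
```

The RK4 result agrees with the closed form at the level of the truncation error; HOPPET applies the same idea to the full discretised x-space convolution rather than to a single moment.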
2011-01-01
Background Zinc Finger Nucleases (ZFNs) have tremendous potential as tools to facilitate genomic modifications, such as precise gene knockouts or gene replacements by homologous recombination. ZFNs can be used to advance both basic research and clinical applications, including gene therapy. Recently, the ability to engineer ZFNs that target any desired genomic DNA sequence with high fidelity has improved significantly with the introduction of rapid, robust, and publicly available techniques for ZFN design such as the Oligomerized Pool ENgineering (OPEN) method. The motivation for this study is to make resources for genome modifications using OPEN-generated ZFNs more accessible to researchers by creating a user-friendly interface that identifies and provides quality scores for all potential ZFN target sites in the complete genomes of several model organisms. Description ZFNGenome is a GBrowse-based tool for identifying and visualizing potential target sites for OPEN-generated ZFNs. ZFNGenome currently includes a total of more than 11.6 million potential ZFN target sites, mapped within the fully sequenced genomes of seven model organisms: S. cerevisiae, C. reinhardtii, A. thaliana, D. melanogaster, D. rerio, C. elegans, and H. sapiens, and can be visualized within the flexible GBrowse environment. Additional model organisms will be included in future updates. ZFNGenome provides information about each potential ZFN target site, including its chromosomal location and position relative to transcription initiation site(s). Users can query ZFNGenome using several different criteria (e.g., gene ID, transcript ID, target site sequence). Tracks in ZFNGenome also provide "uniqueness" and ZiFOpT (Zinc Finger OPEN Targeter) "confidence" scores that estimate the likelihood that a chosen ZFN target site will function in vivo. ZFNGenome is dynamically linked to ZiFDB, allowing users access to all available information about zinc finger reagents, such as the effectiveness of a given
Validation of a target acquisition model for active imager using perception experiments
NASA Astrophysics Data System (ADS)
Lapaz, Frédéric; Canevet, Loïc
2007-10-01
Active night vision systems based on laser diode emitters have now reached a technology level allowing military applications. In order to predict the performance of observers using such systems, we built an analytic model including sensor, atmosphere, visualization and eye effects. The perception task has been modelled using the Targeting Task Performance metric (TTP metric) developed by R. Vollmerhausen from the Night Vision and Electronic Sensors Directorate (NVESD). Sensor and atmosphere models have been validated separately. In order to validate the whole model, two identification tests were set up. The first set, submitted to trained observers, was made of hybrid images: target-to-background contrast, blur and noise were added to armoured vehicle signatures in accordance with the sensor and atmosphere models. The second set of images was made with the same targets, sensed by a real active sensor during field trials. Images were recorded, showing different vehicles, at different ranges and orientations, under different illumination and acquisition configurations. Indeed, this set of real images was built with three different types of gating: wide illumination, illumination of the background and illumination of the target. Analysis of the perception experiment results showed good concordance between the two sets of images. The calculation of an identification criterion, related to this set of vehicles in the near infrared, gave the same results in both cases. The impact of gating on observer performance was also evaluated.
Sound Produced by a Fast Parton in the Quark-Gluon Plasma is a "Crescendo"
NASA Astrophysics Data System (ADS)
Neufeld, R. B.; Müller, B.
2009-07-01
We calculate the total energy deposited into the medium per unit length by fast partons traversing a quark-gluon plasma. The medium excitation due to collisions is taken to be given by the well-known expression for the collisional drag force. The radiative energy loss of the parton contributes to the energy deposition because each radiated gluon acts as an additional source of collisional energy loss in the medium. We derive a differential equation which governs how the spectrum of radiated gluons is modified when this energy loss is taken into account. This modified spectrum is then used to calculate the additional energy deposition due to the interactions of radiated gluons with the medium. Numerical results are presented for the medium response for the case of two energetic back-to-back partons created in a hard interaction.
QCD-aware partonic jet clustering for truth-jet flavour labelling
NASA Astrophysics Data System (ADS)
Buckley, Andy; Pollard, Chris
2016-02-01
We present an algorithm for deriving partonic flavour labels to be applied to truth particle jets in Monte Carlo event simulations. The inputs to this approach are final pre-hadronisation partons, to remove dependence on unphysical details such as the order of matrix element calculation and shower generator frame recoil treatment. These are clustered using standard jet algorithms, modified to restrict the allowed pseudojet combinations to those in which tracked flavour labels are consistent with QCD and QED Feynman rules. The resulting algorithm is shown to be portable between the major families of shower generators, and largely insensitive to many possible systematic variations: it hence offers significant advantages over existing ad hoc labelling schemes. However, it is shown that contamination from multi-parton scattering simulations can disrupt the labelling results. Suggestions are made for further extension to incorporate more detailed QCD splitting function kinematics, robustness improvements, and potential uses for truth-level physics object definitions and tagging.
The heavy quark parton oxymoron: A mini-review of heavy quark production theory in PQCD
Tung, W.-K.
1997-04-20
Conventional perturbative QCD calculations on the production of a heavy quark 'H' consist of two contrasting approaches: the usual QCD parton formalism uses the zero-mass approximation (m_H = 0) once above threshold, and treats H just like the other light partons; on the other hand, most recent 'NLO' heavy quark calculations treat m_H as a large parameter and always consider H as a heavy particle, never as a parton, irrespective of the energy scale of the physical process. By their very nature, both these approaches are limited in their regions of applicability. This dichotomy can be resolved in a unified general-mass variable-flavor-number scheme, which retains the m_H dependence at all energies, and which naturally reduces to the two conventional approaches in their respective regions of validity. Recent applications to lepto- and hadro-production of heavy quarks are briefly summarized.
Electroweakino pair production at the LHC: NLO SUSY-QCD corrections and parton-shower effects
NASA Astrophysics Data System (ADS)
Baglio, Julien; Jäger, Barbara; Kesenheimer, Matthias
2016-07-01
We present a set of NLO SUSY-QCD calculations for the pair production of neutralinos and charginos at the LHC, and their matching to parton-shower programs in the framework of the POWHEG-BOX program package. The code we have developed provides a SUSY Les Houches Accord interface for setting supersymmetric input parameters. Decays of the neutralinos and charginos and parton-shower effects can be simulated with PYTHIA. To illustrate the capabilities of our program, we present phenomenological results for a representative SUSY parameter point. We find that NLO-QCD corrections increase the production rates for neutralinos and charginos significantly. The impact of parton-shower effects on distributions of the weakinos is small, but non-negligible for jet distributions.
Soft factor subtraction and transverse momentum dependent parton distributions on the lattice
NASA Astrophysics Data System (ADS)
Ji, Xiangdong; Sun, Peng; Xiong, Xiaonu; Yuan, Feng
2015-04-01
We study the transverse momentum dependent (TMD) parton distributions in the newly proposed quasiparton distribution function framework in Euclidean space. In this framework, the parton distributions can be extracted from lattice observables in a systematic expansion of 1/P_z, where P_z is the hadron momentum. A soft factor subtraction is found to be essential to make the TMDs calculable on the lattice. We show that the quasi-TMDs with the associated soft factor subtraction can be applied in hard QCD scattering processes such as Drell-Yan lepton pair production in hadronic collisions. This allows future lattice calculations to provide information on the nonperturbative inputs and energy evolutions for the TMDs. Extension to the generalized parton distributions and quantum phase space Wigner distributions will lead to a complete nucleon tomography on the lattice.
An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.
Bradley, Stuart
2015-11-20
Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter; and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
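The model's two dimensionless ratios can be explored with a quick Monte Carlo sketch. The geometry below (actuator firing points on a rectangular grid, disk-shaped targets) is a simplifying assumption for illustration, not the paper's exact construction:

```python
import math
import random

def hit_rate(spacing, step, diameter, n=200_000, seed=1):
    """Monte Carlo estimate of the probability that a random target (a disk
    of the given diameter) is hit by an actuator firing on a grid with the
    given cross-track spacing and along-track step (simplified geometry)."""
    rng = random.Random(seed)
    r = diameter / 2.0
    hits = 0
    for _ in range(n):
        # Target centre, uniform over one grid cell.
        x = rng.uniform(0.0, spacing)
        y = rng.uniform(0.0, step)
        # Distance components to the nearest firing point (a cell corner).
        dx = min(x, spacing - x)
        dy = min(y, step - y)
        if dx * dx + dy * dy <= r * r:
            hits += 1
    return hits / n
```

When the disk diameter is small relative to both spacings the estimate approaches the analytic area fraction pi*(d/2)^2 / (spacing*step), and it saturates at 1 once every point of the cell lies within the hit radius of some firing point.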
NASA Astrophysics Data System (ADS)
Thajudeen, Christopher; Hoorfar, Ahmad; Ahmad, Fauzia; Dogaru, Traian
2010-04-01
With recent advances in both algorithm and component technologies, through-the-wall sensing and imaging is emerging as an affordable sensor technology in civilian and military settings. One of the primary objectives of through-the-wall sensing systems is to detect and identify targets of interest, such as humans and caches of weapons, enclosed in building structures. Effective approaches that achieve proper target radar cross section (RCS) registration behind walls must, in general, exploit a detailed understanding of the radar phenomenology and, more specifically, knowledge of the expected strength of the radar return from targets of interest. In this paper, we investigate the effects of various wall types on the received power of the target return through the use of a combined measurement and electromagnetic modeling approach. The RCS of material-exact rifle and human models are investigated in free space using numerical electromagnetic modeling tools. A modified radar range equation, which analytically accounts for the wall effects, including multiple reflections within a given homogeneous or layered wall, is then employed in conjunction with wideband measured parameters of various common wall types, to estimate the received power versus frequency from the aforementioned targets. The proposed technique is, in principle, applicable to both bistatic and monostatic operations.
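The "modified radar range equation" idea can be sketched directly: insert the wall's one-way power transmission coefficient, squared for the two-way path, into the standard monostatic range equation. The function below is an illustrative simplification that ignores the multiple internal wall reflections the paper accounts for:

```python
import math

def received_power(p_t, gain, freq_hz, rcs, range_m, wall_t_power=1.0):
    """Monostatic radar range equation with a two-way wall transmission loss:
    P_r = P_t * G^2 * lambda^2 * sigma * T^2 / ((4*pi)^3 * R^4),
    where T = wall_t_power is the one-way *power* transmission coefficient
    of the wall (T = 1 reproduces the free-space equation)."""
    lam = 3.0e8 / freq_hz
    return (p_t * gain ** 2 * lam ** 2 * rcs * wall_t_power ** 2
            / ((4.0 * math.pi) ** 3 * range_m ** 4))
```

Halving the one-way power transmission (T = 0.5) costs a factor of four in received power, while the familiar 1/R^4 range dependence is unchanged.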
Numerical modeling for energy transport and isochoric heating in ultra-fast heated high Z target
NASA Astrophysics Data System (ADS)
Mishra, Rohini; Sentoku, Yasuhiko; Hakel, Peter; Mancini, Roberto C.
2010-11-01
Collisional particle-in-cell (PIC) codes are an effective tool to study the extreme energy density conditions achieved in intense laser-solid interactions. In the course of ongoing PIC code development, we have recently implemented models to incorporate dynamic ionization, namely Saha and Thomas-Fermi, and radiation cooling (due to Bremsstrahlung and line emissions). We have also revised the existing collision model to take into account bound electrons in a dynamically ionizing (partially ionized) target. A one-dimensional PIC simulation of a gold target with the new collision model shows strong local heating within a micron distance due to the shorter stopping range of fast electrons, which reflects the increased collision frequency due to bound electrons. The peak temperature in the heated region drops significantly, due to radiation cooling, from keV to a level of a few hundred eV. We also discuss the dependence of radiation loss on target Z and two-dimensional effects such as the resistive magnetic fields in the hot electron transport in metal targets.
Vapor shielding models and the energy absorbed by divertor targets during transient events
NASA Astrophysics Data System (ADS)
Skovorodin, D. I.; Pshenov, A. A.; Arakcheev, A. S.; Eksaeva, E. A.; Marenkov, E. D.; Krasheninnikov, S. I.
2016-02-01
The erosion of divertor targets caused by high heat fluxes during transients is a serious threat to ITER operation, as it is expected to be the main factor determining the divertor lifetime. Under the influence of extreme heat fluxes, the surface temperature of plasma facing components can reach a certain threshold, leading to an onset of intense material evaporation. The latter results in the formation of a cold dense vapor and secondary plasma cloud. This layer effectively absorbs the energy of the incident plasma flow, turning it into its own kinetic and internal energy and radiating it. This so-called vapor shielding is a phenomenon that may help mitigate the erosion during transient events. In particular, vapor shielding results in saturation of the energy (per unit surface area) accumulated by the target during a single pulse of heat load at some level Emax. Matching this value is one of the possible tests to verify the complicated numerical codes developed to calculate the erosion rate during abnormal events in tokamaks. The paper presents three very different models of vapor shielding, demonstrating that Emax depends strongly on the heat pulse duration, thermodynamic properties, and evaporation energy of the irradiated target material, while its dependence on other shielding details, such as the radiation capabilities of the material and the dynamics of the vapor cloud, is logarithmically weak. The reason for this is a strong (exponential) dependence of the target material evaporation rate, and therefore the "strength" of the vapor shield, on the target surface temperature. As a result, the influence of the details of the vapor shielding phenomenon, such as radiation transport in the vapor cloud and evaporated material dynamics, on Emax is virtually completely masked by the strong dependence of the evaporation rate on the target surface temperature. However, these very same details define the amount of evaporated particles needed to provide effective shielding to the target, and
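The exponential sensitivity of the evaporation rate to surface temperature, which the paper argues masks the details of the vapor cloud, is easy to see from a Hertz-Knudsen-style flux with an Arrhenius saturated vapour pressure. The parameter values below are illustrative (roughly tungsten-like), not taken from the paper:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def evaporation_flux(temp_k, p0, l_evap_j, mass_kg):
    """Hertz-Knudsen-style evaporation flux (particles per m^2 per s):
    Gamma = p_sat / sqrt(2*pi*m*kB*T), with an Arrhenius saturated vapour
    pressure p_sat = p0 * exp(-L / (kB*T))."""
    p_sat = p0 * math.exp(-l_evap_j / (KB * temp_k))
    return p_sat / math.sqrt(2.0 * math.pi * mass_kg * KB * temp_k)
```

With a binding energy of several eV per atom, a few hundred kelvin of extra surface temperature changes the flux by orders of magnitude, which is why the surface temperature, rather than the vapor-cloud details, dominates Emax.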
Al-Jamal, Khuloud T; Bai, Jie; Wang, Julie Tzu-Wen; Protti, Andrea; Southern, Paul; Bogart, Lara; Heidari, Hamed; Li, Xinjia; Cakebread, Andrew; Asker, Dan; Al-Jamal, Wafa T; Shah, Ajay; Bals, Sara; Sosabowski, Jane; Pankhurst, Quentin A
2016-09-14
A sound theoretical rationale for the design of a magnetic nanocarrier capable of magnetic capture in vivo after intravenous administration could help elucidate the parameters necessary for in vivo magnetic tumor targeting. In this work, we utilized our long-circulating polymeric magnetic nanocarriers, encapsulating increasing amounts of superparamagnetic iron oxide nanoparticles (SPIONs) in a biocompatible oil carrier, to study the effects of SPION loading and of applied magnetic field strength on magnetic tumor targeting in CT26 tumor-bearing mice. Under controlled conditions, the in vivo magnetic targeting was quantified and found to be directly proportional to SPION loading and magnetic field strength. Highest SPION loading, however, resulted in a reduced blood circulation time and a plateauing of the magnetic targeting. Mathematical modeling was undertaken to compute the in vivo magnetic, viscoelastic, convective, and diffusive forces acting on the nanocapsules (NCs) in accordance with the Nacev-Shapiro construct, and this was then used to extrapolate to the expected behavior in humans. The model predicted that in the latter case, the NCs and magnetic forces applied here would have been sufficient to achieve successful targeting in humans. Lastly, an in vivo murine tumor growth delay study was performed using docetaxel (DTX)-encapsulated NCs. Magnetic targeting was found to offer enhanced therapeutic efficacy and improve mice survival compared to passive targeting at drug doses of ca. 5-8 mg of DTX/kg. This is, to our knowledge, the first study that truly bridges the gap between preclinical experiments and clinical translation in the field of magnetic drug targeting. PMID:27541372
A model for combined targeting and tracking tasks in computer applications.
Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S
2013-11-01
Current models for targeted-tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, and having a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially and have a movement time component dependent on the difficulty of the individual component of the task. In some cases, the task time may be controlled by the Fitts task difficulty, and in others, it may be dominated by the Drury task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted-tracking was developed. PMID:24081679
Advanced models of targets and disturbances and related radar signal processors
NASA Astrophysics Data System (ADS)
Farina, A.; Russo, A.; Studer, F. A.
The first part of the paper provides flexible and reliable stochastic models for the radar signals scattered by target and clutter sources. The models make it possible to consider any shape of autocorrelation function between consecutive pulse echoes and any probability density function for their in-phase and quadrature components. The second part of the paper revises the theory of detecting targets, with any type of probability density and autocorrelation function, embedded in a disturbance having any type of probability density and autocorrelation function. In the third part of the paper, the theory is applied to the cases in which target and/or disturbance may have a log-normal probability density for the amplitudes. Several processing schemes are suggested and corresponding detection performances evaluated. Finally, adaptive implementation schematics are suggested for some of the processors presented.
Empirical modeling of renal motion for improved targeting during focused ultrasound surgery.
Abhilash, R H; Chauhan, Sunita
2013-05-01
Non-invasive surgery looks at ways of eliminating physical contact with the target tissues while maintaining necessary levels of accuracy. Focused Ultrasound Surgery (FUS) is one such treatment modality, which uses a tightly focused beam of high intensity ultrasound to ablate tumors in various parts of the body. For trans-abdominal access, respiration-induced movement of the tissue targets remains a major issue during FUS. Respiration-induced movements are known to be significant in the liver and kidney. In this paper, we attempt to address this problem using non-linear prediction and modeling techniques as applicable to kidney movement patterns. Kidney movement patterns are known to be three dimensional and vastly complicated compared to movement patterns of the liver. Monitoring and quantification of the nature and extent of kidney movement is yet to be explored in depth for effective compensation and accurate targeting. Apart from the respiratory cycle, the movement of the kidney is also affected by several factors, such as the movement of the ribs, spleen and liver. Modeling of these movements is imperative for motion-adaptive FUS. Since kidney movements are highly subject specific, generic statistical models cannot be used for compensation. The system latency and real-time performance of the imaging modality also induce additional parametric dependence in target tracking. In this work, we focus on empirical modeling and prediction of the kidney movement for error analysis and computing system latency. The accuracy of existing modeling techniques is compared with a newly developed empirical model. From the study conducted in healthy volunteers, it was found that the kidney movement was complex and subject specific and could be effectively modeled using the new shape-function-based model. The model was further fine-tuned using Kalman filter based predictors and an Adaptive Neuro-Fuzzy Inference System (ANFIS), which gave more than 85% accuracy in prediction.
Patient-derived xenograft models to improve targeted therapy in epithelial ovarian cancer treatment.
Scott, Clare L; Becker, Marc A; Haluska, Paul; Samimi, Goli
2013-12-04
Despite increasing evidence that precision therapy targeted to the molecular drivers of a cancer has the potential to improve clinical outcomes, high-grade epithelial ovarian cancer (OC) patients are currently treated without consideration of molecular phenotype, and predictive biomarkers that could better inform treatment remain unknown. Delivery of precision therapy requires improved integration of laboratory-based models and cutting-edge clinical research, with pre-clinical models predicting patient subsets that will benefit from a particular targeted therapeutic. Patient-derived xenografts (PDXs) are renewable tumor models engrafted in mice, generated from fresh human tumors without prior in vitro exposure. PDX models allow an invaluable assessment of tumor evolution and adaptive response to therapy. PDX models have been applied to pre-clinical drug testing and biomarker identification in a number of cancers including ovarian, pancreatic, breast, and prostate cancers. These models have been shown to be biologically stable and accurately reflect the patient tumor with regards to histopathology, gene expression, genetic mutations, and therapeutic response. However, pre-clinical analyses of molecularly annotated PDX models derived from high-grade serous ovarian cancer (HG-SOC) remain limited. In vivo response to conventional and/or targeted therapeutics has only been described for very small numbers of individual HG-SOC PDX in conjunction with sparse molecular annotation and patient outcome data. Recently, two consecutive panels of epithelial OC PDX were shown to correlate in vivo platinum response with molecular aberrations and source-patient clinical outcomes. These studies underpin the value of PDX models to better direct chemotherapy and predict response to targeted therapy. Tumor heterogeneity, before and following treatment, as well as the importance of multiple molecular aberrations per individual tumor underscore some of the important issues addressed in PDX models.
Bracke, S; Desmet, E; Guerrero-Aspizua, S; Tjabringa, S G; Schalkwijk, J; Van Gele, M; Carretero, M; Lambert, J
2013-08-01
Diseases of the skin are amenable to RNAi-based therapies and targeting key components in the pathophysiology of psoriasis using RNAi may represent a successful new therapeutic strategy. We aimed to develop a straightforward and highly reproducible in vitro psoriasis model useful to study the effects of gene knockdown by RNAi and to identify new targets for topical RNAi therapeutics. We evaluated the use of keratinocytes derived from psoriatic plaques and normal human keratinocytes (NHKs). To induce a psoriatic phenotype in NHKs, combinations of pro-inflammatory cytokines (IL-1α, IL-17A, IL-6 and TNF-α) were tested. The model based on NHK met our needs of a reliable and predictive preclinical model, and this model was further selected for gene expression analyses, comprising a panel of 55 psoriasis-associated genes and five micro-RNAs (miRNAs). Gene silencing studies were conducted by using small interfering RNAs (siRNAs) and miRNA inhibitors directed against potential target genes such as CAMP and DEFB4 and miRNAs such as miR-203. We describe a robust and highly reproducible in vitro psoriasis model that recapitulates expression of a large panel of genes and miRNAs relevant to the pathogenesis of psoriasis. Furthermore, we show that our model is a powerful first step model system for testing and screening RNAi-based therapeutics.
Pisano, S.; Biselli, A.; Niccolai, S.; Seder, E.; Guidal, M.; Mirazita, M.; Adhikari, K. P.; Adikaram, D.; Amaryan, M. J.; Anderson, M. D.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bosted, P.; Briscoe, B.; Brock, J.; Brooks, W. K.; Burkert, V. D.; Carlin, C.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Colaneri, L.; Cole, P. L.; Compton, N.; Contalbrigo, M.; Cortes, O.; Crabb, D. G.; Crede, V.; D' Angelo, A.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Filippi, A.; Fleming, J. A.; Fradi, A.; Garillon, B.; Garcon, M.; Ghandilyan, Y.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guo, L.; Hafidi, K.; Hanretty, C.; Hattawy, M.; Hicks, K.; Holtrop, M.; Hughes, S. M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Jenkins, D.; Jiang, X.; Jo, H. S.; Joo, K.; Joosten, S.; Keith, C. D.; Keller, D.; Kim, A.; Kim, W.; Klein, F. J.; Kubarovsky, V.; Kuhn, S. E.; Lenisa, P.; Livingston, K.; Lu, H. Y.; MacCormick, M.; MacGregor, Ian J. D.; Mayer, M.; McKinnon, B.; Meekins, D. G.; Meyer, C. A.; Mokeev, V.; Montgomery, R. A.; Moody, C. I.; Munoz Camacho, C.; Nadel-Turonski, P.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Phelps, W.; Phillips, J. J.; Pogorelko, O.; Price, J. W.; Procureur, S.; Prok, Y.; Puckett, A. J. R.; Ripani, M.; Rizzo, A.; Rosner, G.; Rossi, P.; Roy, P.; Sabatie, F.; Salgado, C.; Schott, D.; Schumacher, R. A.; Skorodumina, I.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Stoler, P.; Strauch, S.; Sytnik, V.; Tian, Ye; Tkachenko, S.; Turisini, M.; Ungaro, M.; Voutier, E.; Walford, N. K.; Watts, D. P.; Wei, X.; Weinstein, L. B.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.
2015-03-19
Single-beam, single-target, and double-spin asymmetries for hard exclusive photon production on the proton, e⃗p⃗ → e'p'γ, are presented. The data were taken at Jefferson Lab using the CLAS detector and a longitudinally polarized ^{14}NH_{3} target. The three asymmetries were measured in 165 4-dimensional kinematic bins, covering the widest kinematic range ever explored simultaneously for beam and target-polarization observables in the valence quark region. The kinematic dependences of the obtained asymmetries are discussed and compared to the predictions of models of Generalized Parton Distributions. As a result, the measurement of three DVCS spin observables at the same kinematic points allows a quasi-model-independent extraction of the imaginary parts of the H and H̃ Compton Form Factors, which give insight into the electric and axial charge distributions of valence quarks in the proton.
Coherent radiative parton energy loss beyond the BDMPS-Z limit
NASA Astrophysics Data System (ADS)
Zapp, Korinna Christine; Wiedemann, Urs Achim
2012-06-01
It is widely accepted that a phenomenologically viable theory of jet quenching for heavy ion collisions requires the understanding of medium-induced parton energy loss beyond the limit of eikonal kinematics formulated by Baier-Dokshitzer-Mueller-Peigné-Schiff and Zakharov (BDMPS-Z). Here, we supplement a recently developed exact Monte Carlo implementation of the BDMPS-Z formalism with elementary physical requirements including exact energy-momentum conservation, a refined formulation of jet-medium interactions and a treatment of all parton branchings on the same footing. We document the changes induced by these physical requirements and we describe their kinematic origin.
An O(α_s) Monte Carlo for W production with parton showering
Baer, H.A.
1991-01-01
We construct an event generator for p p̄ → W⁺X → e⁺νX including complete O(α_s) corrections, interfaced with initial- and final-state parton showers. Problems with negative weights and with double counting of higher-order parton radiation are averted. We present results for W + n-jet production and compare with results from complete tree-level calculations and with shower calculations off of the lowest-order 2 → 2 subprocess. We also compute the q_T(W) distribution and compare with data.
Studies of Parton Propagation and Hadron Formation in the Space-Time Domain
Brooks, Will; Hakobyan, Hayk
2008-10-13
Over the past decade, new data from HERMES, Jefferson Lab, Fermilab, and RHIC that connect to parton propagation and hadron formation have become available. Semi-inclusive DIS on nuclei, the Drell-Yan reaction, and heavy-ion collisions all bring different kinds of information on parton propagation within a medium, while the most direct information on hadron formation comes from the DIS data. Over the next decade one can hope to begin to understand these data within a unified picture. We briefly survey the most relevant data and the common elements of the physics picture, then highlight the new Jefferson Lab data, and close with a prospective for the future.
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties
Kusina, A.; Jezo, T.; Clark, D. B.; Keppel, Cynthia; Lyonnet, F.; Morfin, Jorge; Olness, F. I.; Owens, Jeff; Schienbein, I.
2015-09-01
We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
Constraints on spin-dependent parton distributions at large x from global QCD analysis
NASA Astrophysics Data System (ADS)
Jimenez-Delgado, P.; Avakian, H.; Melnitchouk, W.
2014-11-01
We investigate the behavior of spin-dependent parton distribution functions (PDFs) at large parton momentum fractions x in the context of global QCD analysis. We explore the constraints from existing deep-inelastic scattering data, and from theoretical expectations for the leading x → 1 behavior based on hard gluon exchange in perturbative QCD. Systematic uncertainties from the dependence of the PDFs on the choice of parametrization are studied by considering functional forms motivated by orbital angular momentum arguments. Finally, we quantify the reduction in the PDF uncertainties that may be expected from future high-x data from Jefferson Lab at 12 GeV.
Reaction dynamics of ^{34-38}Mg projectiles with a carbon target using the Glauber model
Shama, Mahesh K.; Panda, R. N.; Sharma, Manoj K.; Patra, S. K.
2015-08-28
We have studied nuclear reaction cross-sections for ^{34-38}Mg isotopes as projectiles on a ^{12}C target at a projectile energy of 240 A MeV, using the Glauber model in conjunction with densities from the relativistic mean-field formalism. We find good agreement with the available experimental data. The halo status of ^{37}Mg is also investigated.
ERIC Educational Resources Information Center
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine
2012-01-01
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
TARGETED DELIVERY OF INHALED PHARMACEUTICALS USING AN IN SILICO DOSIMETRY MODEL
We present an in silico dosimetry model which can be used for inhalation toxicology (risk assessment of inhaled air pollutants) and aerosol therapy (targeted delivery of inhaled drugs). This work presents scientific and clinical advances beyond the development of the original in...
Target identification and navigation performance modeling of a passive millimeter wave imager.
Jacobs, Eddie L; Furxhi, Orges
2010-07-01
Human task performance using a passive interferometric millimeter wave imaging sensor is modeled using a task performance modeling approach developed by the U.S. Army Night Vision and Electronic Sensors Directorate. The techniques used are illustrated for an imaging system composed of an interferometric antenna array, optical upconversion, and image formation using a shortwave infrared focal plane array. Two tasks, target identification and pilotage, are modeled. The effects of sparse antenna arrays on task performance are considered. Applications of this model include system trade studies for concealed weapon identification, navigation in fog, and brownout conditions. PMID:20648126
NASA Astrophysics Data System (ADS)
Gaunt, Jonathan R.; Maciuła, Rafał; Szczurek, Antoni
2014-09-01
The double parton distributions (dPDF), both conventional (i.e. double ladder) and those corresponding to 1→2 ladder splitting, are calculated and compared for different two-parton combinations. The conventional and splitting dPDFs have very similar shape in x1 and x2. We make a first quantitative evaluation of the single-ladder-splitting contribution to double parton scattering (DPS) production of two S- or P-wave quarkonia, two Higgs bosons and c c̄ c c̄. The ratio of the single-ladder-splitting to conventional (i.e. double ladder against double ladder) contributions is discussed as a function of center-of-mass energy, mass of the produced system and other kinematical variables. Using a simple model for the dependence of the conventional two-parton distribution on transverse parton separation (Gaussian and independent of x_i and scales), we find that the single-ladder-splitting (or 2v1) contribution is as big as the conventional (or 2v2) contribution discussed in recent years in the literature. In many experimental studies of DPS, one extracts the quantity 1/σeff = σDPS/(σSPS,1 σSPS,2), with σSPS,1 and σSPS,2 being the single scattering cross sections for the two subprocesses in the DPS process. Many past phenomenological studies of DPS have only considered the conventional contribution and have obtained values a factor of ~2 too small for 1/σeff. Our analysis shows that it is important also to consider the ladder-splitting mechanism, and that this might resolve the discrepancy (this was also pointed out in a recent study by Blok et al.). The differential distributions in rapidity and transverse momenta calculated for conventional and single-ladder-splitting DPS processes are however very similar which causes their experimental separation to be rather difficult, if not impossible. The direct consequence of the existence of the two components (conventional and splitting) is the energy and process dependence of the empirical parameter σeff. This is
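The extraction quoted in the abstract can be illustrated numerically. The sketch below encodes the pocket formula 1/σeff = σDPS/(σSPS,1 σSPS,2) and shows how an equal-sized 2v1 (ladder-splitting) term halves the extracted σeff — the factor-of-~2 effect discussed above. Symmetry factors and units are deliberately simplified assumptions.

```python
def sigma_dps(sigma_sps1, sigma_sps2, sigma_eff):
    """Pocket formula as quoted in the abstract: sigma_DPS = s1*s2/sigma_eff
    (any symmetry factor for identical subprocesses is omitted here)."""
    return sigma_sps1 * sigma_sps2 / sigma_eff

def effective_sigma_eff(sigma_eff_2v2, ratio_2v1_to_2v2):
    """If a 2v1 term of relative size R adds to the conventional 2v2 term,
    then 1/sigma_eff(extracted) = (1 + R)/sigma_eff_2v2, so the extracted
    sigma_eff shrinks by the factor (1 + R)."""
    return sigma_eff_2v2 / (1.0 + ratio_2v1_to_2v2)
```

With R = 1 (2v1 as big as 2v2, as the abstract finds), a nominal σeff of 30 mb becomes an extracted value of 15 mb, i.e. 1/σeff doubles.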
NASA Astrophysics Data System (ADS)
Atthey, M.; Nahum, A. E.; Flower, M. A.; McCready, V. R.
2000-04-01
A previous targeted radionuclide therapy modelling study has been extended to include the radiobiological effects of cellular repair and proliferation. Dose distributions have been converted into biologically effective dose (BED) distributions using a previously published formulation. With suitable estimated parameters, corrected tumour control probability (TCP) values were derived. The dependence of BED on the physical half-life of the radionuclide was also modelled. Results indicate that the TCP is greater when a shorter physical half-life is employed.
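The BED conversion referred to above is not spelled out in the abstract; a common choice (which may differ in detail from the formulation the study actually follows) is the linear-quadratic BED for a mono-exponentially decaying dose rate with first-order sublethal-damage repair, combined with a Poisson TCP. All parameter values in the test are illustrative assumptions.

```python
import math

def bed_decaying_source(total_dose, lam, mu, alpha_beta):
    """BED for full decay of a mono-exponentially decaying dose rate:
    BED = D * (1 + lam/(mu + lam) * D/(alpha/beta)), with decay constant
    lam (1/h), repair rate mu (1/h) and LQ ratio alpha/beta (Gy).
    Standard Dale-type formulation; a sketch, not the paper's exact model."""
    return total_dose * (1.0 + (lam / (mu + lam)) * total_dose / alpha_beta)

def poisson_tcp(n_clonogens, alpha, bed):
    """Poisson TCP from the surviving fraction exp(-alpha * BED)."""
    return math.exp(-n_clonogens * math.exp(-alpha * bed))
```

This reproduces the qualitative result stated in the abstract: a shorter physical half-life (larger λ) delivers dose faster relative to repair, raising BED and hence TCP for the same total dose.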
NASA Astrophysics Data System (ADS)
Ozvenchuk, V.; Linnyk, O.; Gorenstein, M. I.; Bratkovskaya, E. L.; Cassing, W.
2013-06-01
We study the shear and bulk viscosities of partonic and hadronic matter as functions of temperature T within the parton-hadron-string dynamics (PHSD) off-shell transport approach. Dynamical hadronic and partonic systems in equilibrium are studied by the PHSD simulations in a finite box with periodic boundary conditions. The ratio of the shear viscosity to entropy density η(T)/s(T) from PHSD shows a minimum (with a value of about 0.1) close to the critical temperature Tc, while it approaches the perturbative QCD limit at higher temperatures in line with lattice QCD (lQCD) results. For T
Visual search for a tilted target: tests of spatial uncertainty models.
Morgan, M J; Ward, R M; Castet, E
1998-05-01
We report that spatial cueing of a parafoveal target in the presence of distractors enhances orientational acuity for that target. When no distractors were present, orientation thresholds were in the range 1-4 degrees. For long exposure times, distractors increased threshold by the amount predicted from a winner-takes-all spatial uncertainty model. For short (100-msec) exposures followed by a random dot mask, the rise in threshold with distractors was considerably greater than that predicted from spatial uncertainty. For brief exposures the effect of distractors was greater when the target and distractors were spatially crowded rather than widely spaced. Adding a tilt to the distractors in the opposite direction to the target increased thresholds still further. Cueing the target with a spatial pointer decreased the effect of distractors, even when they were crowded. We suggest that when attention cannot be appropriately focused, discrimination is carried out by a relatively coarse texture analyser, which averages over several elements, and that focused attention permits the analysis of the target over a smaller area of space. PMID:9621843
NASA Astrophysics Data System (ADS)
Xing, Rui; Chen, Xue-Dong; Zhou, Yan-Feng; Zhang, Jue; Su, Yuan-Yuan; Qiu, Jian-Feng; Sima, Yang-Hu; Zhang, Ke-Qin; He, Yao; Xu, Shi-Qing
2016-01-01
The use of quantum dots (QDs) in biological imaging applications and targeted drug delivery is expected to increase. However, the efficiency of QDs in drug targeting needs to be improved. Here, we show that amino acids linked to CdTe QDs significantly increased the targeted transfer efficiency and biological safety in the invertebrate model Bombyx mori. Compared with bare QDs530, the transfer efficiency of Ala- and Gly-conjugated QDs (QDs530-Ala and QDs530-Gly) in circulatory system increased by 2.6 ± 0.3 and 1.5 ± 0.3 times, and increased by 7.8 ± 0.9 and 2.9 ± 0.2 times in target tissue silk glands, respectively, after 24 h of QDs exposure. Meanwhile, the amount of conjugated QDs decreased by (68.4 ± 4.4)% and (46.7 ± 9.1)% in the non-target tissue fat body, and the speed at which they entered non-target circulating blood cells significantly decreased. The resultant QDs530-Ala revealed a better structural integrity in tissues and a longer retention time in hemolymph than that of QDs530 after exposure via the dorsal vessel. On the other hand, QDs530-Ala significantly reduced the toxicity to hemocytes, silk gland, and fat body, and reduced the amount of reactive oxygen species (ROS) in tissues.
Zhu, Peican; Aliabadi, Hamidreza Montazeri; Uludağ, Hasan; Han, Jie
2016-03-18
The investigation of vulnerable components in a signaling pathway can contribute to the development of drug therapy addressing aberrations in that pathway. Here, an original signaling pathway is derived from the published literature on breast cancer models. New stochastic logical models are then developed to analyze the vulnerability of the components in multiple signaling sub-pathways involved in this signaling cascade. The computational results are consistent with the experimental results, where the selected proteins were silenced using specific siRNAs and the viability of the cells was analyzed 72 hours after silencing. The genes eIF4E and NFkB are found to have nearly no effect on the relative cell viability, and the genes JAK2, Stat3, S6K, JUN, FOS, Myc, and Mcl1 are effective candidates to influence the relative cell growth. The vulnerabilities of some targets such as Myc and S6K are found to vary significantly depending on the weights of the sub-pathways; this indicates that such targets require customization for therapy. When these targets are utilized, the response of breast cancers from different patients will be highly variable because of the known heterogeneities in signaling pathways among the patients. The targets whose vulnerabilities are invariably high might be more universally acceptable targets.
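The vulnerability analysis can be sketched as follows: in a Boolean-network reading of the pathway, a node's vulnerability is the probability (over random input states) that clamping it off flips the output. The wiring used in the test is a hypothetical toy cascade borrowing gene names from the abstract — it is not the authors' actual pathway or their stochastic logic formalism.

```python
import random

def evaluate(update, state, output, clamp, n_steps=10):
    """Synchronously iterate the Boolean network to a short fixed horizon,
    optionally clamping one node to 0, and return the output node's value."""
    for _ in range(n_steps):
        new = {k: f(state) for k, f in update.items()}
        if clamp is not None:
            new[clamp] = 0
        state.update(new)
    return state[output]

def vulnerability(update, nodes, output, node, n_samples=2000, seed=1):
    """Estimate a node's vulnerability: the fraction of random initial states
    for which clamping `node` to 0 changes the network's output."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(n_samples):
        state = {k: rng.randint(0, 1) for k in nodes}
        normal = evaluate(update, dict(state), output, clamp=None)
        perturbed = evaluate(update, dict(state), output, clamp=node)
        flips += (normal != perturbed)
    return flips / n_samples
```

In the toy cascade below, a node off the path to the growth output (NFkB here, by construction) gets vulnerability 0, while a node on the path (Myc) gets a high value — mirroring the qualitative ranking reported in the abstract.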
Computational modeling of pulsed-power-driven magnetized target fusion experiments
Sheehey, P.; Kirkpatrick, R.; Lindemuth, I.
1995-08-01
Direct magnetic drive using electrical pulsed power has been considered impractically slow for traditional inertial confinement implosion of fusion targets. However, if the target contains a preheated, magnetized plasma, magnetothermal insulation may allow the near-adiabatic compression of such a target to fusion conditions on a much slower time scale. 100-MJ-class explosive flux compression generators, with implosion kinetic energies far beyond those available with conventional fusion drivers, are an inexpensive means to investigate such magnetized target fusion (MTF) systems. One means of obtaining the preheated and magnetized plasma required for an MTF system is the recently reported "MAGO" concept. MAGO is a unique, explosive-pulsed-power driven discharge in two cylindrical chambers joined by an annular nozzle. Joint Russian-American MAGO experiments have reported D-T neutron yields in excess of 10^{13} from this plasma preparation stage alone, without going on to the proposed separately driven implosion of the main plasma chamber. Two-dimensional MHD computational modeling of MAGO discharges shows good agreement with experiment. The calculations suggest that after the observed neutron pulse, a diffuse Z-pinch plasma with temperature in excess of 100 eV is created, which may be suitable for subsequent MTF implosion in a heavy liner magnetically driven by explosive pulsed power. Other MTF concepts, such as fiber-initiated Z-pinch target plasmas, are also being computationally and theoretically evaluated. The status of our modeling efforts will be reported.
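The "near-adiabatic compression" argument can be made concrete with a back-of-the-envelope estimate (an illustration only, not the paper's 2-D calculation): for an ideal monatomic plasma (γ = 5/3) compressed cylindrically, V ∝ r², so T ∝ C^{2(γ-1)} = C^{4/3} for a radial convergence ratio C. A ~100 eV preheated plasma then needs C ≈ 30 to reach ~10 keV fusion temperatures.

```python
def adiabatic_temp(t_initial_ev, convergence, gamma=5.0 / 3.0):
    """Final temperature after adiabatic cylindrical compression:
    V ~ r^2, so T_f = T_i * C**(2*(gamma - 1)) for radial convergence C."""
    return t_initial_ev * convergence ** (2.0 * (gamma - 1.0))

def convergence_needed(t_initial_ev, t_target_ev, gamma=5.0 / 3.0):
    """Radial convergence ratio required to adiabatically heat the plasma
    from t_initial_ev to t_target_ev (same cylindrical scaling)."""
    return (t_target_ev / t_initial_ev) ** (1.0 / (2.0 * (gamma - 1.0)))
```

This is why a magnetothermally insulated, preheated target relaxes the driver speed requirement: the needed convergence is modest compared to unmagnetized ICF.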
Early recognition of lung cancer by integrin targeted imaging in K-ras mouse model.
Ermolayev, Vladimir; Mohajerani, Pouyan; Ale, Angelique; Sarantopoulos, Athanasios; Aichler, Michaela; Kayser, Gian; Walch, Axel; Ntziachristos, Vasilis
2015-09-01
Non-small cell lung cancer is characterized by slow progression and high heterogeneity of tumors. Integrins play an important role in lung cancer development and metastasis and have been suggested as a tumor marker; however, their role in anticancer therapy remains controversial. In this work, we demonstrate the potential of integrin-targeted imaging to recognize early lesions in a transgenic mouse model of lung cancer based on spontaneous introduction of a mutated human gene bearing the K-ras mutation. We conducted ex vivo and in vivo fluorescence molecular tomography-X-ray computed tomography (FMT-XCT) imaging and analysis for specific targeting of early lung lesions and tumors in a rodent preclinical model of lung cancer. The lesions and tumors were characterized by histology, immunofluorescence and immunohistochemistry using a panel of cancer markers. Ex vivo, the integrin-targeted fluorescent signal significantly differed between wild-type lung tissue and K-ras pulmonary lesions (PL) at all ages studied. The panel of immunofluorescence experiments demonstrated that PL, which only partially show cancer cell features, were detected by αvβ3-integrin targeted imaging. Analysis of human patient material confirmed the specificity of target localization in different lung cancer types. Most importantly, small tumors in the lungs of 4-week-old animals could be noninvasively detected in vivo on the fluorescence channel of FMT-XCT. Our findings demonstrate αvβ3-integrin targeted fluorescent imaging to specifically detect premalignant pleural lesions in K-ras mice. Integrin-targeted imaging may find application in preclinical research and clinical practice, such as early lung cancer diagnostics, intraoperative assistance or therapy monitoring.
Optimal Strategies for Controlling Riverine Tsetse Flies Using Targets: A Modelling Study
Vale, Glyn A.; Hargrove, John W.; Lehane, Michael J.; Solano, Philippe; Torr, Stephen J.
2015-01-01
Background Tsetse flies occur in much of sub-Saharan Africa where they transmit the trypanosomes that cause the diseases of sleeping sickness in humans and nagana in livestock. One of the most economical and effective methods of tsetse control is the use of insecticide-treated screens, called targets, that simulate hosts. Targets have been ~1m2, but recently it was shown that those tsetse that occupy riverine situations, and which are the main vectors of sleeping sickness, respond well to targets only ~0.06m2. The cheapness of these tiny targets suggests the need to reconsider what intensity and duration of target deployments comprise the most cost-effective strategy in various riverine habitats. Methodology/Principal Findings A deterministic model, written in Excel spreadsheets and managed by Visual Basic for Applications, simulated the births, deaths and movement of tsetse confined to a strip of riverine vegetation composed of segments of habitat in which the tsetse population was either self-sustaining, or not sustainable unless supplemented by immigrants. Results suggested that in many situations the use of tiny targets at high density for just a few months per year would be the most cost-effective strategy for rapidly reducing tsetse densities by the ~90% expected to have a great impact on the incidence of sleeping sickness. Local elimination of tsetse becomes feasible when targets are deployed in isolated situations, or where the only invasion occurs from populations that are not self-sustaining. Conclusion/Significance Seasonal use of tiny targets deserves field trials. The ability to recognise habitat that contains tsetse populations which are not self-sustaining could improve the planning of all methods of tsetse control, against any species, in riverine, savannah or forest situations. Criteria to assist such recognition are suggested. PMID:25803871
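The kind of deterministic segment model described above — births, deaths and movement along a strip of riverine habitat, with extra target-induced mortality in treated segments — can be sketched in a few lines. All rate values below are illustrative assumptions, not the paper's fitted parameters (the original model was implemented in Excel/VBA).

```python
def simulate(n_segments, days, r=0.02, k=1000.0, move=0.1,
             target_kill=0.05, treated=()):
    """Daily-step deterministic tsetse model on a 1-D strip of habitat:
    logistic net growth (rate r, carrying capacity k per segment),
    diffusive movement of a fraction `move` to adjacent segments
    (reflecting ends), and extra mortality in target-treated segments."""
    treated = set(treated)
    pop = [k] * n_segments
    for _ in range(days):
        moved = [0.0] * n_segments
        for i, p in enumerate(pop):
            out = p * move
            stay = p - out
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n_segments]
            for j in nbrs:
                moved[j] += out / 2.0
            stay += out * (1 - len(nbrs) / 2.0)  # reflect at the strip ends
            moved[i] += stay
        pop = [p + r * p * (1.0 - p / k)
               - (target_kill * p if i in treated else 0.0)
               for i, p in enumerate(moved)]
    return pop
```

With these assumed rates, treating the whole strip for roughly four months drives densities well below 10% of carrying capacity, the level the abstract associates with a large impact on sleeping-sickness incidence; leaving targets out keeps the population at equilibrium.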
Geiger, K.; Longacre, R.; Srivastava, D.K.
1999-02-01
VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced prehadronic clusters and of hadrons includes a multitude of 2 → n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.
Analytical model for interaction of short intense laser pulse with solid target
Luan, S. X.; Ma, G. J.; Yu, Wei; Yu, M. Y.; Zhang, Q. J.; Sheng, Z. M.; Murakami, M.
2011-04-15
A simple but comprehensive two-dimensional analytical model for the interaction of a normally incident short intense laser pulse with a solid-density plasma is proposed. Electron cavitation near the target surface by the laser ponderomotive force induces a strong local electrostatic charge-separation field. The cavitation makes possible mode conversion of the laser light into longitudinal electron oscillation at the laser frequency, even for initially normal incidence of the laser pulse. The intense charge-separation field in the cavity can significantly enhance the laser-induced u×B electron oscillation at twice the laser frequency, to density levels even higher than that of the initial target.
A Production System Model of Capturing Reactive Moving Targets. M.S. Thesis
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Plamondon, B. D.; Miller, R. A.
1984-01-01
Subjects manipulated a control stick to position a cursor over a moving target that reacted with a computer-generated escape strategy. The cursor movements were described at two levels of abstraction. At the upper level, a production system described transitions among four modes of activity: rapid acquisition, close following, a predictive mode, and herding. Within each mode, differential equations described trajectory-generating mechanisms. A simulation of this two-level model captures the targets in a manner resembling the episodic time histories of human subjects.
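The two-level architecture — production rules selecting a mode, with a trajectory-generating law inside each mode — can be sketched as below. The gains, thresholds, and the reduction to three modes (herding omitted) are assumptions for illustration only, not the fitted model from the thesis.

```python
def step(cx, cv, tx, tv, dt=0.05):
    """One control update for a 1-D pursuit sketch. A production rule picks
    the mode from the current cursor/target state (cx, cv vs. tx, tv), and
    each mode applies its own second-order trajectory-generating law.
    All gains and thresholds are illustrative assumptions."""
    err = tx - cx
    if abs(err) > 5.0:                                   # far away
        mode, a = "acquire", 8.0 * err - 2.0 * cv        # rapid acquisition
    elif abs(tv) < 0.5:                                  # target nearly still
        mode, a = "follow", 4.0 * err + 4.0 * (tv - cv)  # close following
    else:                                                # target moving
        mode, a = "predict", 6.0 * ((tx + 0.3 * tv) - cx) - 2.0 * cv  # lead it
    cv += a * dt
    cx += cv * dt
    return cx, cv, mode
```

Run in a loop against a moving target, the controller switches modes episodically (acquisition first, then predictive tracking), qualitatively echoing the mode transitions the production system describes.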
NASA Astrophysics Data System (ADS)
Tate, Jennifer A.; Kett, Warren; NDong, Christian; Griswold, Karl E.; Hoopes, P. Jack
2013-02-01
Iron oxide nanoparticle (IONP) hyperthermia is a novel therapeutic strategy currently under consideration for the treatment of various cancer types. Systemic delivery of IONP followed by non-invasive activation via a local alternating magnetic field (AMF) results in site-specific energy deposition in the IONP-containing tumor. Targeting IONP to the tumor using an antibody or antibody fragment conjugated to the surface may enhance the intratumoral deposition of IONP and is currently being pursued by many nanoparticle researchers. This strategy, however, is subject to a variety of restrictions in the in vivo environment, where other aspects of IONP design will strongly influence the biodistribution. In these studies, various targeted IONP are compared to non-targeted controls. IONP were injected into BT-474 tumor-bearing NSG mice and tissues were harvested 24 h post-injection. Results indicate no significant difference between the various targeted IONP and the non-targeted controls, suggesting the IONP were too large to achieve tumor penetration. Additional strategies are currently being pursued in conjunction with targeted particles to increase the intratumoral deposition.
Modeling spallation reactions in tungsten and uranium targets with the Geant4 toolkit
NASA Astrophysics Data System (ADS)
Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter
2012-02-01
We study primary and secondary reactions induced by 600 MeV proton beams in monolithic cylindrical targets made of natural tungsten and uranium by using Monte Carlo simulations with the Geant4 toolkit [1-3]. The Bertini intranuclear cascade model, the Binary cascade model, and the IntraNuclear Cascade Liège (INCL) model with ABLA [4] were used as calculational options to describe nuclear reactions. Fission cross sections, neutron multiplicity, and mass distributions of fragments for 238U fission induced by 25.6 and 62.9 MeV protons are calculated and compared to recent experimental data [5]. Time distributions of neutron leakage from the targets and heat depositions are calculated. This project is supported by Siemens Corporate Technology.
Modeling and production of 240Am by deuteron-induced activation of a 240Pu target
Finn, Erin C.; McNamara, Bruce K.; Greenwood, Lawrence R.; Wittman, Richard S.; Soderquist, Chuck Z.; Woods, Vincent T.; VanDevender, Brent A.; Metz, Lori A.; Friese, Judah I.
2015-02-01
A novel reaction pathway for production of 240Am is reported. Models of reaction cross-sections in EMPIRE II suggest that deuteron-induced activation of a 240Pu target produces maximum yields of 240Am from 11.5 MeV incident deuterons. This activation had not been previously reported in the literature. A 240Pu target was activated under the modeled optimum conditions to produce 240Am. The modeled cross-section for the 240Pu(d, 2n)240Am reaction is on the order of 20-30 mbarn, but the experimentally estimated value is 5.3 ± 0.2 mbarn. We discuss reasons for the discrepancy as well as production of other Am isotopes that contaminate the final product.
A stochastic model for eye movements during fixation on a stationary target.
NASA Technical Reports Server (NTRS)
Vasudevan, R.; Phatak, A. V.; Smith, J. D.
1971-01-01
A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation taking into consideration the microsaccadic jumps pictured as being due to point processes and the high frequency muscle tremor, represented as a white noise. The transform of the probability density function for local area motion is obtained, leading to explicit expressions for their means and moments. Evaluation of these moments based on the model is comparable with experimental results.
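The local-area dynamics described in this abstract (a Langevin-type equation driven by white-noise tremor plus point-process microsaccadic jumps) can be sketched numerically. This is a minimal illustrative simulation, not the paper's actual model; the drift constant, noise level, jump rate, and jump size below are hypothetical values chosen only to show the structure.

```python
import math
import random

def simulate_fixation(t_end=1.0, dt=1e-3, k=25.0, sigma=0.05,
                      jump_rate=2.0, jump_size=0.2, seed=1):
    """Euler-Maruyama simulation of a Langevin-type fixational eye-movement
    sketch: mean-reverting drift toward the fixation point, white-noise
    tremor, and Poisson-timed microsaccadic jumps (illustrative parameters)."""
    rng = random.Random(seed)
    x, xs = 0.0, [0.0]
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))            # white-noise tremor increment
        x += -k * x * dt + sigma * dw                 # mean-reverting Langevin step
        if rng.random() < jump_rate * dt:             # Poisson point-process event
            x += rng.choice((-1.0, 1.0)) * jump_size  # microsaccadic jump
        xs.append(x)
    return xs

trace = simulate_fixation()
```

The mean-reverting drift keeps the image point within a local fixation area, while the rare jump events move it between areas, mirroring the model's two-level hierarchy.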
De Velasco, Marco A; Kura, Yurie; Yoshikawa, Kazuhiro; Nishio, Kazuto; Davies, Barry R; Uemura, Hirotsugu
2016-03-29
The PI3K/AKT pathway is frequently altered in advanced human prostate cancer mainly through the loss of functional PTEN, and presents a potential target for personalized therapy. Our aim was to determine the therapeutic potential of the pan-AKT inhibitor, AZD5363, in PTEN-deficient prostate cancer. Here we used a genetically engineered mouse (GEM) model of PTEN-deficient prostate cancer to evaluate the in vivo pharmacodynamic and antitumor activity of AZD5363 in castration-naïve and castration-resistant prostate cancer. An additional GEM model, based on the concomitant inactivation of PTEN and Trp53 (P53), was established as an aggressive model of advanced prostate cancer and was used to further evaluate clinically relevant endpoints after treatment with AZD5363. In vivo pharmacodynamic studies demonstrated that AZD5363 effectively inhibited downstream targets of AKT. AZD5363 monotherapy significantly reduced growth of tumors in castration-naïve and castration-resistant models of PTEN-deficient prostate cancer. More importantly, AZD5363 significantly delayed tumor growth and improved overall survival and progression-free survival in PTEN/P53 double knockout mice. Our findings demonstrate that AZD5363 is effective against GEM models of PTEN-deficient prostate cancer and provide lines of evidence to support further investigation into the development of treatment strategies targeting AKT for the treatment of PTEN-deficient prostate cancer.
De Velasco, Marco A.; Kura, Yurie; Yoshikawa, Kazuhiro; Nishio, Kazuto; Davies, Barry R.; Uemura, Hirotsugu
2016-01-01
The PI3K/AKT pathway is frequently altered in advanced human prostate cancer mainly through the loss of functional PTEN, and presents a potential target for personalized therapy. Our aim was to determine the therapeutic potential of the pan-AKT inhibitor, AZD5363, in PTEN-deficient prostate cancer. Here we used a genetically engineered mouse (GEM) model of PTEN-deficient prostate cancer to evaluate the in vivo pharmacodynamic and antitumor activity of AZD5363 in castration-naïve and castration-resistant prostate cancer. An additional GEM model, based on the concomitant inactivation of PTEN and Trp53 (P53), was established as an aggressive model of advanced prostate cancer and was used to further evaluate clinically relevant endpoints after treatment with AZD5363. In vivo pharmacodynamic studies demonstrated that AZD5363 effectively inhibited downstream targets of AKT. AZD5363 monotherapy significantly reduced growth of tumors in castration-naïve and castration-resistant models of PTEN-deficient prostate cancer. More importantly, AZD5363 significantly delayed tumor growth and improved overall survival and progression-free survival in PTEN/P53 double knockout mice. Our findings demonstrate that AZD5363 is effective against GEM models of PTEN-deficient prostate cancer and provide lines of evidence to support further investigation into the development of treatment strategies targeting AKT for the treatment of PTEN-deficient prostate cancer. PMID:26910118
Zhang, Li; Mager, Donald E
2015-10-01
Bortezomib is a reversible proteasome inhibitor with potent antineoplastic activity that exhibits dose- and time-dependent pharmacokinetics (PK). Proteasome-mediated bortezomib disposition is proposed as the primary source of its nonlinear and apparent nonstationary PK behavior. Single intravenous (IV) doses of bortezomib (0.25 and 1 mg/kg) were administrated to BALB/c mice, with blood and tissue samples obtained over 144 h, which were analyzed by LC/MS/MS. A physiologically based pharmacokinetic (PBPK) model incorporating tissue drug-target binding was developed to test the hypothesis of proteasome-mediated bortezomib disposition. The final model reasonably captured bortezomib plasma and tissue PK profiles, and parameters were estimated with good precision. The rank-order of model estimated tissue target density correlated well with experimentally measured proteasome concentrations reported in the literature, supporting the hypothesis that binding to proteasome influences bortezomib disposition. The PBPK model was further scaled-up to humans to assess the similarity of bortezomib disposition among species. Human plasma bortezomib PK profiles following multiple IV dosing (1.3 mg/m(2)) on days 1, 4, 8, and 11 were simulated by appropriately scaling estimated mouse parameters. Simulated and observed bortezomib concentrations after multiple dosing were in good agreement, suggesting target-mediated bortezomib disposition is likely for both mice and humans. Furthermore, the model predicts that renal impairment should exert minimal influence on bortezomib exposure in humans, confirming that bortezomib dose adjustment is not necessary for patients with renal impairment.
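The hypothesis in this abstract — that binding to a finite pool of tissue proteasome shapes bortezomib disposition — is an instance of target-mediated drug disposition. The sketch below is a minimal one-compartment illustration of that mechanism, not the paper's PBPK model; all rate constants and the target pool size are hypothetical.

```python
def simulate_tmdd(dose=1.0, r_total=0.5, kon=10.0, koff=1.0,
                  kel=0.3, t_end=24.0, dt=1e-3):
    """Forward-Euler integration of a minimal target-mediated disposition
    sketch: free drug C binds a fixed target pool (total R), and only free
    drug is cleared linearly (illustrative parameters)."""
    c, rc = dose, 0.0                          # free drug, drug-target complex
    out = [(0.0, c, rc)]
    for i in range(1, int(t_end / dt) + 1):
        free_r = r_total - rc                  # unoccupied target
        bind = kon * c * free_r - koff * rc    # net binding flux
        c += (-kel * c - bind) * dt            # clearance + binding loss
        rc += bind * dt
        if i % 1000 == 0:                      # record every 1 time unit
            out.append((i * dt, c, rc))
    return out

res = simulate_tmdd()
```

Because the target pool saturates at high dose, this mechanism naturally produces the dose-dependent (nonlinear) kinetics the abstract attributes to proteasome binding.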
Statistical modeling of targets and clutter in single-look non-polarimetric SAR imagery
Salazar, J.S.; Hush, D.R.; Koch, M.W.; Fogler, R.J.; Hostetler, L.D.
1998-08-01
This paper presents a Generalized Logistic (gLG) distribution as a unified model for log-domain synthetic aperture radar (SAR) data. This model stems from a special case of the G-distribution known as the G⁰-distribution. The G-distribution arises from a multiplicative SAR model and has the classical K-distribution as another special case. The G⁰-distribution, however, can model extremely heterogeneous clutter regions that the K-distribution cannot. This flexibility is preserved in the unified gLG model, which is capable of modeling non-polarimetric SAR returns from clutter as well as man-made objects. Histograms of these two types of SAR returns have opposite skewness. The flexibility of the gLG model lies in its shape and shift parameters. The shape parameter describes the differing skewness between target and clutter data, while the shift parameter compensates for movements in the mean as the shape parameter changes. A Maximum Likelihood (ML) estimate of the shape parameter gives an optimal measure of the skewness of the SAR data. This measure provides a basis for an optimal target detection algorithm.
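For a standardized generalized logistic (Type I) density f(x; a) = a·e⁻ˣ/(1 + e⁻ˣ)^(a+1), the ML estimate of the shape parameter has a closed form, which makes the skewness measure described above cheap to compute. The sketch below samples from this standardized form by inverse-CDF and recovers the shape; it ignores the paper's shift parameter and is an illustration of the estimator, not the paper's detector.

```python
import math
import random

def sample_glg(a, n, rng):
    """Inverse-CDF sampling from the standardized generalized logistic
    distribution: F(x) = (1 + e^-x)^-a, so x = -log(u^(-1/a) - 1)."""
    xs = []
    while len(xs) < n:
        u = rng.random()
        if u > 0.0:                                   # guard against u == 0
            xs.append(-math.log(u ** (-1.0 / a) - 1.0))
    return xs

def ml_shape(xs):
    """Closed-form ML estimate of the shape parameter: setting the
    derivative of the log-likelihood to zero gives
    a_hat = n / sum(log(1 + exp(-x)))."""
    s = sum(math.log1p(math.exp(-x)) for x in xs)
    return len(xs) / s

rng = random.Random(7)
xs = sample_glg(2.0, 20000, rng)
a_hat = ml_shape(xs)   # should be close to the true shape 2.0
```

Since log(1 + e⁻ˣ) is exponentially distributed under this model, the estimator is the reciprocal of a sample mean, which is why a single pass over the data suffices.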
Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling
Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren
2014-01-01
Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136
Mimeault, Murielle; Batra, Surinder K.
2013-01-01
The in vivo zebrafish models have recently attracted great attention in molecular oncology to investigate multiple genetic alterations associated with the development of human cancers and validate novel anticancer drug targets. Particularly, the transparent zebrafish models can be used as a xenotransplantation system to rapidly assess the tumorigenicity and metastatic behavior of cancer stem and/or progenitor cells and their progenies. Moreover, the zebrafish models have emerged as powerful tools for an in vivo testing of novel anticancer agents and nanomaterials for counteracting tumor formation and metastases and improving the efficacy of current radiation and chemotherapeutic treatments against aggressive, metastatic and lethal cancers. PMID:22903142
Model-based automatic target recognition using hierarchical foveal machine vision
NASA Astrophysics Data System (ADS)
McKee, Douglas C.; Bandera, Cesar; Ghosal, Sugata; Rauss, Patrick J.
1996-06-01
This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently in dynamic scenarios with localized relevance than uniform acuity vision because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions-of-interest (ROIs) using the foveal sensor's wide FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.
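The anisotropic diffusion step used for ROI detection above is commonly implemented in the Perona-Malik form: smooth homogeneous regions while suppressing diffusion across strong edges. The sketch below shows that scheme on a toy step-edge image; the conductance function and parameter values are standard textbook choices, not taken from the paper.

```python
import math

def perona_malik(img, iters=10, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion on a 2-D grayscale image (list of
    lists): each interior pixel gains flux from its 4 neighbors, weighted by
    the edge-stopping conductance g = exp(-(grad/kappa)^2)."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        v = [row[:] for row in u]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                flux = 0.0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    d = u[i + di][j + dj] - u[i][j]        # neighbor gradient
                    flux += math.exp(-(d / kappa) ** 2) * d
                v[i][j] = u[i][j] + lam * flux             # lam <= 0.25 for stability
        u = v
    return u

# A step edge: diffusion smooths within each half but keeps the edge sharp.
img = [[0.0] * 4 + [100.0] * 4 for _ in range(8)]
out = perona_malik(img)
```

Because the conductance collapses where the gradient is large relative to kappa, the 0-to-100 edge survives the smoothing, which is exactly the property that makes this filter useful before template matching.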
Making Sense in the City: Dolly Parton, Early Reading and Educational Policy-Making
ERIC Educational Resources Information Center
Hall, Christine; Jones, Susan
2016-01-01
In this paper, we present a case study of a philanthropic literacy initiative, Dolly Parton's Imagination Library, a book-gifting scheme for under 5s, and consider the impact of the scheme on literacy policy in the English city where it was introduced. We bring four lenses to bear on the case study. First, we analyse the operation of the scheme in…
Evolution of the helicity and transversity Transverse-Momentum-Dependent parton distributions
Prokudin, Alexei; Bacchetta, Alessandro
2013-07-01
We examine the QCD evolution of the helicity and transversity parton distribution functions when including also their dependence on transverse momentum. Using an appropriate definition of these polarized transverse momentum distributions (TMDs), we describe their dependence on the factorization scale and rapidity cutoff, which is essential for phenomenological applications.
Evolution of the helicity and transversity Transverse-Momentum-Dependent parton distributions
Prokudin, Alexey; Bacchetta, Alessandro
2013-10-01
We examine the QCD evolution of the helicity and transversity parton distribution functions when including also their dependence on transverse momentum. Using an appropriate definition of these polarized transverse momentum distributions (TMDs), we describe their dependence on the factorization scale and rapidity cutoff, which is essential for phenomenological applications.
Drell-Yan Lepton pair production at NNLO QCD with parton showers
Hoeche, Stefan; Li, Ye; Prestel, Stefan
2015-04-13
We present a simple approach to combine NNLO QCD calculations and parton showers, based on the UNLOPS technique. We apply the method to the computation of Drell-Yan lepton-pair production at the Large Hadron Collider. We comment on possible improvements and intrinsic uncertainties.
Energy dependence of jet transport parameter and parton saturationin quark-gluon plasma
Casalderrey-Solana, Jorge; Wang, Xin-Nian
2007-06-24
We study the evolution and saturation of the gluon distribution function in the quark-gluon plasma as probed by a propagating parton and its effect on the computation of the jet quenching or transport parameter $\hat q$. For thermal partons, the saturation scale $Q_s^2$ is found to be proportional to the Debye screening mass $\mu_D^2$. For hard probes, evolution at small $x = Q_s^2/6ET$ leads to jet energy dependence of $\hat q$. We study this dependence both for a conformal gauge theory in the weak and strong coupling limits and for (pure gluon) QCD. The energy dependence can be used to extract the shear viscosity $\eta$ of the medium, since $\eta$ can be related to the transport parameter for thermal partons in a transport description. We also derive upper bounds on the transport parameter for both energetic and thermal partons. The latter leads to a lower bound on the shear viscosity-to-entropy density ratio which is consistent with the conjectured lower bound $\eta/s \geq 1/4\pi$. Implications for the study of jet quenching at RHIC and LHC and the bulk properties of the dense matter are discussed.
Multi-aspect target discrimination using hidden Markov models and neural networks.
Robinson, Marc; Azimi-Sadjadi, Mahmood R; Salazar, Jaime
2005-03-01
This paper presents a new multi-aspect pattern classification method using hidden Markov models (HMMs). Models are defined for each class, with the probability found by each model determining class membership. Each HMM is enhanced by the use of a multilayer perceptron (MLP) network to generate emission probabilities. This hybrid system uses the MLP to find the probability of a state for an unknown pattern and the HMM to model the process underlying the state transitions. A new batch gradient descent-based method is introduced for optimal estimation of the transition and emission probabilities. A prediction method in conjunction with the HMM is also presented that attempts to improve the computation of transition probabilities by using the previous states to predict the next state. This method exploits the correlation information between consecutive aspects. These algorithms are then implemented and benchmarked on a multi-aspect underwater target classification problem using a realistic sonar data set collected in different bottom conditions.
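The hybrid structure above — an HMM whose emission probabilities come from a network rather than a lookup table — can be sketched with the standard forward algorithm. The "MLP" below is a toy stand-in (a softmax over linear per-state scores); the transition matrix, weights, and observation vectors are hypothetical and only illustrate how network outputs slot into the HMM recursion.

```python
import math

def mlp_emission(state, obs, weights):
    """Toy stand-in for the MLP: softmax over per-state linear scores of
    the observation vector (illustrative weights, not the paper's network)."""
    scores = [sum(w * o for w, o in zip(ws, obs)) for ws in weights]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return exps[state] / sum(exps)

def forward(observations, trans, init, weights):
    """Forward algorithm with network-generated emission probabilities:
    returns P(observation sequence | model)."""
    n = len(init)
    alpha = [init[s] * mlp_emission(s, observations[0], weights)
             for s in range(n)]
    for obs in observations[1:]:
        alpha = [mlp_emission(s, obs, weights) *
                 sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

trans = [[0.9, 0.1], [0.2, 0.8]]           # state transition matrix
init = [0.5, 0.5]                          # initial state distribution
weights = [[1.0, -1.0], [-1.0, 1.0]]       # one score row per state
obs_seq = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
likelihood = forward(obs_seq, trans, init, weights)
```

Classification then amounts to running each class's model over the aspect sequence and picking the class with the largest likelihood.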
Design and modeling of spectral-thermal unmixing targets for airborne hyperspectral imagery
NASA Astrophysics Data System (ADS)
Clare, Phil
2006-05-01
Techniques to determine the proportions of constituent materials within a single pixel spectrum are well documented in the reflective (0.4-2.5μm) domain. The same capability is also desirable for the thermal (7-14μm) domain, but is complicated by the thermal contributions to the measured spectral radiance. Atmospheric compensation schemes for the thermal domain have been described along with methods for estimating the spectral emissivity from a spectral radiance measurement, and hence the next stage to be tackled is the unmixing of thermal spectral signatures. In order to pursue this goal it is necessary to collect data of well-calibrated targets which will expose the limits of the available techniques and enable more robust methods to be designed. This paper describes the design of a set of ground targets for an airborne hyperspectral imager, which will test the effectiveness of available methods. The set of targets includes panels to explore a number of difficult scenarios such as isothermal (different materials at identical temperature), isochromal (identical materials, but at differing temperatures), thermal adjacency and thermal point sources. Practical fabrication issues for heated targets and selection of appropriate materials are described. Mathematical modelling of the experiments has enabled prediction of at-sensor measured radiances which are used to assess the design parameters. Finally, a number of useful lessons learned during the fielding of these actual targets are presented to assist those planning future trials of thermal hyperspectral sensors.
Laser induced plasma on copper target, a non-equilibrium model
Oumeziane, Amina Ait; Liani, Bachir; Parisse, Jean-Denis
2014-02-15
The aim of this work is to present a comprehensive numerical model for the UV laser ablation of metal targets; it focuses mainly on the prediction of laser induced plasma thresholds, the effect of the laser-plasma interaction, and the importance of the electronic non-equilibrium in the laser induced plume and its expansion in the background gas. This paper describes a set of numerical models for laser-matter interaction between 193, 248, and 355 nm lasers and a copper target. Along with the thermal effects inside the material resulting from the irradiation of the latter with the pulsed laser, the laser-evaporated matter interaction and the plasma formation are thoroughly modelled. In the laser induced plume, the electronic non-equilibrium and the laser beam absorption have been investigated. Our calculations of the plasma ignition thresholds on copper targets have been validated and compared to experimental as well as theoretical results. Comparison with experimental data indicates that our results are in good agreement with those reported in the literature. Furthermore, the inclusion of electronic non-equilibrium in our work indicated that this important process must be included in models of laser ablation and plasma plume formation.
Boni, Andrea; Politi, Antonio Z.; Strnad, Petr; Xiang, Wanqing; Hossain, M. Julius
2015-01-01
Targeting of inner nuclear membrane (INM) proteins is essential for nuclear architecture and function, yet its mechanism remains poorly understood. Here, we established a new reporter that allows real-time imaging of membrane protein transport from the ER to the INM using Lamin B receptor and Lap2β as model INM proteins. These reporters allowed us to characterize the kinetics of INM targeting and establish a mathematical model of this process and enabled us to probe its molecular requirements in an RNA interference screen of 96 candidate genes. Modeling of the phenotypes of genes involved in transport of these INM proteins predicted that it critically depended on the number and permeability of nuclear pores and the availability of nuclear binding sites, but was unaffected by depletion of most transport receptors. These predictions were confirmed with targeted validation experiments on the functional requirements of nucleoporins and nuclear lamins. Collectively, our data support a diffusion retention model of INM protein transport in mammalian cells. PMID:26056140
Osman, Erkan Y.; Miller, Madeline R.; Robbins, Kate L.; Lombardi, Abby M.; Atkinson, Arleigh K.; Brehm, Amanda J.; Lorson, Christian L.
2014-01-01
Spinal muscular atrophy (SMA) is a neurodegenerative disease caused by the loss of Survival Motor Neuron-1 (SMN1). In all SMA patients, a nearly identical copy gene called SMN2 is present, which produces low levels of functional protein owing to an alternative splicing event. To prevent exon-skipping, we have targeted an intronic repressor, Element1 (E1), located upstream of SMN2 exon 7 using Morpholino-based antisense oligonucleotides (E1MO-ASOs). A single intracerebroventricular injection in the relatively severe mouse model of SMA (SMNΔ7 mouse model) elicited a robust induction of SMN protein, and mean life span was extended from an average survival of 13 to 54 days following a single dose, consistent with large weight gains and a correction of the neuronal pathology. Additionally, E1MO-ASO treatment in an intermediate SMA mouse (SMNRT mouse model) significantly extended life span by ∼700% and weight gain was comparable with the unaffected animals. While a number of experimental therapeutics have targeted the ISS-N1 element of SMN2 pre-mRNA, the development of E1 ASOs provides a new molecular target for SMA therapeutics that dramatically extends survival in two important pre-clinical models of disease. PMID:24781211
Targeted Proteomics-Driven Computational Modeling of Macrophage S1P Chemosensing.
Manes, Nathan P; Angermann, Bastian R; Koppenol-Raab, Marijke; An, Eunkyung; Sjoelund, Virginie H; Sun, Jing; Ishii, Masaru; Germain, Ronald N; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra
2015-10-01
Osteoclasts are monocyte-derived multinuclear cells that directly attach to and resorb bone. Sphingosine-1-phosphate (S1P)(1) regulates bone resorption by functioning as both a chemoattractant and chemorepellent of osteoclast precursors through two G-protein coupled receptors that antagonize each other in an S1P-concentration-dependent manner. To quantitatively explore the behavior of this chemosensing pathway, we applied targeted proteomics, transcriptomics, and rule-based pathway modeling using the Simmune toolset. RAW264.7 cells (a mouse monocyte/macrophage cell line) were used as model osteoclast precursors, RNA-seq was used to identify expressed target proteins, and selected reaction monitoring (SRM) mass spectrometry using internal peptide standards was used to perform absolute abundance measurements of pathway proteins. The resulting transcript and protein abundance values were strongly correlated. Measured protein abundance values, used as simulation input parameters, led to in silico pathway behavior matching in vitro measurements. Moreover, once model parameters were established, even simulated responses toward stimuli that were not used for parameterization were consistent with experimental findings. These findings demonstrate the feasibility and value of combining targeted mass spectrometry with pathway modeling for advancing biological insight.
Targeted Proteomics-Driven Computational Modeling of Macrophage S1P Chemosensing*
Manes, Nathan P.; Angermann, Bastian R.; Koppenol-Raab, Marijke; An, Eunkyung; Sjoelund, Virginie H.; Sun, Jing; Ishii, Masaru; Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra
2015-01-01
Osteoclasts are monocyte-derived multinuclear cells that directly attach to and resorb bone. Sphingosine-1-phosphate (S1P)1 regulates bone resorption by functioning as both a chemoattractant and chemorepellent of osteoclast precursors through two G-protein coupled receptors that antagonize each other in an S1P-concentration-dependent manner. To quantitatively explore the behavior of this chemosensing pathway, we applied targeted proteomics, transcriptomics, and rule-based pathway modeling using the Simmune toolset. RAW264.7 cells (a mouse monocyte/macrophage cell line) were used as model osteoclast precursors, RNA-seq was used to identify expressed target proteins, and selected reaction monitoring (SRM) mass spectrometry using internal peptide standards was used to perform absolute abundance measurements of pathway proteins. The resulting transcript and protein abundance values were strongly correlated. Measured protein abundance values, used as simulation input parameters, led to in silico pathway behavior matching in vitro measurements. Moreover, once model parameters were established, even simulated responses toward stimuli that were not used for parameterization were consistent with experimental findings. These findings demonstrate the feasibility and value of combining targeted mass spectrometry with pathway modeling for advancing biological insight. PMID:26199343
A target detection model predicting field observer performance in maritime scenes
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.; Wheaton, Vivienne C.
2014-10-01
The U.S. Army's target acquisition models, the ACQUIRE and Target Task Performance (TTP) models, have been employed for many years to assess the performance of thermal infrared sensors. In recent years, ACQUIRE and the TTP models have been adapted to assess the performance of visible sensors. These adaptations have been primarily focused on the performance of an observer viewing a display device. This paper describes an implementation of the TTP model to predict field observer performance in maritime scenes. Predictions of the TTP model implementation were compared to observations of a small watercraft taken in a field trial. In this field trial 11 Australian Navy observers viewed a small watercraft in an open ocean scene. Comparisons of the observed probability of detection to predictions of the TTP model implementation showed the normalised RSS metric overestimated the probability of detection. The normalised Pixel Contrast using a literature value for V50 yielded a correlation of 0.58 between the predicted and observed probability of detection. With a measured value of N50 or V50 for the small watercraft used in this investigation, this implementation of the TTP model may yield stronger correlation with observed probability of detection.
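The TTP model referenced above converts the number of resolvable TTP cycles V on target into a probability of task performance via a logistic-like "target transfer probability function". The sketch below uses the exponent form commonly published for the TTP metric (E = 1.51 + 0.24·V/V50); treat that constant pair as an assumption to be checked against the specific model version used in the trial.

```python
def ttp_probability(v, v50):
    """Target task performance probability as a function of resolvable TTP
    cycles V relative to the 50%-task value V50. The exponent
    E = 1.51 + 0.24*(V/V50) follows the commonly published TTP metric;
    verify against the model version in use."""
    ratio = v / v50
    e = 1.51 + 0.24 * ratio
    return ratio ** e / (1.0 + ratio ** e)
```

By construction the curve passes through 0.5 exactly at V = V50, which is why calibrating a task-specific V50 (as the abstract suggests for the small watercraft) anchors the whole prediction.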
Modeling tone and intonation in Mandarin and English as a process of target approximation.
Prom-on, Santitham; Xu, Yi; Thipakorn, Bundit
2009-01-01
This paper reports the development of a quantitative target approximation (qTA) model for generating F0 contours of speech. The qTA model simulates the production of tone and intonation as a process of syllable-synchronized sequential target approximation [Xu, Y. (2005). "Speech melody as articulatorily implemented communicative functions," Speech Commun. 46, 220-251]. It adopts a set of biomechanical and linguistic assumptions about the mechanisms of speech production. The communicative functions directly modeled are lexical tone in Mandarin, lexical stress in English, and focus in both languages. The qTA model is evaluated by extracting function-specific model parameters from natural speech via supervised learning (automatic analysis by synthesis) and comparing the F0 contours generated with the extracted parameters to those of natural utterances through numerical evaluation and perceptual testing. The F0 contours generated by the qTA model with the learned parameters were very close to the natural contours in terms of root mean square error, rate of human identification of tone and focus, and judgment of naturalness by human listeners. The results demonstrate that the qTA model is both an effective tool for research on tone and intonation and a potentially effective system for automatic synthesis of tone and intonation.
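In the qTA framework, each syllable's F0 contour is a third-order critically damped response approaching a linear pitch target y(t) = m·t + b from the state (F0, velocity, acceleration) inherited at the syllable boundary. The sketch below implements that standard closed-form response; the target slope, height, and approximation rate chosen here are illustrative, not learned parameters from the paper.

```python
import math

def qta_f0(t, m, b, lam, x0, v0=0.0, a0=0.0):
    """Third-order critically damped target approximation: F0 starts at x0
    (with velocity v0 and acceleration a0) and asymptotically approaches the
    linear pitch target y(t) = m*t + b at rate lam. The polynomial
    coefficients are fixed by the initial conditions."""
    c1 = x0 - b
    c2 = v0 + c1 * lam - m
    c3 = (a0 + 2.0 * c2 * lam - c1 * lam ** 2) / 2.0
    return (c1 + c2 * t + c3 * t * t) * math.exp(-lam * t) + m * t + b

# A static (level) tone target at 120 Hz approached from 150 Hz:
start = qta_f0(0.0, m=0.0, b=120.0, lam=40.0, x0=150.0)  # = 150.0 Hz
late = qta_f0(0.3, m=0.0, b=120.0, lam=40.0, x0=150.0)   # near 120 Hz
```

Chaining calls syllable by syllable, passing the final F0, velocity, and acceleration of one syllable as the initial state of the next, reproduces the "syllable-synchronized sequential" behavior the abstract describes.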
Molecular Inversion Probes for targeted resequencing in non-model organisms.
Niedzicka, M; Fijarczyk, A; Dudek, K; Stuglik, M; Babik, W
2016-01-01
Applications that require resequencing of hundreds or thousands of predefined genomic regions in numerous samples are common in studies of non-model organisms. However, few approaches are available at the scale intermediate between multiplex PCR and sequence capture methods. Here we explored the utility of Molecular Inversion Probes (MIPs) for medium-scale targeted resequencing in a non-model system. Markers targeting 112 bp of exonic sequence were designed from the transcriptome of Lissotriton newts. We assessed the performance of 248 MIP markers in a sample of 85 individuals. Among the 234 (94.4%) successfully amplified markers, 80% had median coverage within one order of magnitude, indicating relatively uniform performance; coverage uniformity across individuals was also high. In the analysis of polymorphism and segregation within a family, 77% of the 248 tested MIPs were confirmed as single-copy Mendelian markers. Genotyping concordance assessed using replicate samples exceeded 99%. MIP markers for targeted resequencing have a number of advantages: high specificity, a high multiplexing level, low sample requirements, a straightforward laboratory protocol, no need for preparation of genomic libraries, and no ascertainment bias. We conclude that MIP markers provide an effective solution for resequencing targets of tens or hundreds of kb in any organism and in a large number of samples. PMID:27046329
Kaushik, Pawan; Lal Khokra, Sukhbir; Rana, A C; Kaushik, Dhirender
2014-01-01
The present study attempts to establish a relationship between the ethnopharmacological claims for and bioactive constituents of Pinus roxburghii against all possible targets for diabetes through molecular docking, and to develop a pharmacophore model for the active target. The molecular docking process involves the study of different binding modes of a ligand within the active cavities of the target receptors protein tyrosine phosphatase 1-beta (PTP-1β), dipeptidyl peptidase-IV (DPP-IV), aldose reductase (AR), and insulin receptor (IR) with the help of the docking software Molegro Virtual Docker (MVD). From the docking score values on the different receptors for antidiabetic activity, it is observed that the constituents secoisoresinol, pinoresinol, and cedeodarin showed the best docking results on almost all the receptors, while the most significant results were observed on AR. LigandScout was then applied to develop a pharmacophore model for the active target. LigandScout revealed that 2 hydrogen bond donors pointing towards Tyr 48 and His 110 are a major requirement of the generated pharmacophore. In our molecular docking studies, the active constituent secoisoresinol also showed hydrogen bonding with the His 110 residue, which is part of the pharmacophore. The docking results give better insights into the development of better aldose reductase inhibitors for treating diabetes-related secondary complications. PMID:25114678
Shamay, Yosi; Golan, Moran; Tyomkin, Dalia; David, Ayelet
2016-05-10
Polymer-drug conjugates that can actively target the tumor vasculature have emerged as an attractive technology for improving the therapeutic efficacy of cytotoxic drugs. We have recently provided, for the first time, in vivo evidence showing the significant advantage of the E-selectin-targeted N-(2-hydroxypropyl)methacrylamide (HPMA) copolymer-doxorubicin conjugate, P-(Esbp)-DOX, in inhibiting primary tumor growth and preventing the formation and development of cancer metastases. Here, we describe the design of a vascular endothelial growth factor receptor (VEGFR)-1-targeted HPMA copolymer-DOX conjugate (P-(F56)-DOX) that can actively and simultaneously target different cell types in the tumor microenvironment, such as endothelial cells (ECs), bone marrow-derived cells and many human cancer cells of diverse tumor origin. The VEGFR-1-targeted copolymer was tested for its binding, internalization and in vitro cytotoxicity in ECs (bEnd.3 and cEND cells) and cancer cells (B16-F10, 3LL and HT29). The in vivo anti-cancer activity of P-(F56)-DOX was then tested in two tumor-bearing mouse (TBM) models (i.e., primary Lewis lung carcinoma (3LL) tumors and B16-F10 melanoma pulmonary metastases), relative to that of the E-selectin-targeted system (P-(Esbp)-DOX) that solely targets ECs. Our results indicate that the binding and internalization profiles of the VEGFR-1-targeted copolymer were superior towards ECs as compared to cancer cells and correlated well with the level of VEGFR-1 expression in cells. Accordingly, the VEGFR-1-targeted copolymer (P-(F56)-DOX) was more toxic towards bEnd.3 cells than to cancer cells, and exhibited significantly higher cytotoxicity than did the non-targeted control copolymer. P-(F56)-DOX inhibited 3LL tumor growth and significantly prolonged the survival of mice with B16-F10 pulmonary metastases. When compared to a system that actively targets only tumor vascular ECs, P-(F56)-DOX and P-(Esbp)-DOX exhibited comparable efficacy in slowing the
A mathematical model for a distributed attack on targeted resources in a computer network
NASA Astrophysics Data System (ADS)
Haldar, Kaushik; Mishra, Bimal Kumar
2014-09-01
A mathematical model has been developed to analyze the spread of a distributed attack on critical targeted resources in a network. The model provides an epidemic framework with two sub-frameworks to capture the difference between the overall behavior of the attacking hosts and that of the targeted resources. The analysis focuses on obtaining threshold conditions that determine the success or failure of such attacks. Considering the criticality of the systems involved and the strength of the defence mechanisms, a measure has been suggested that highlights the level of success achieved by the attacker. To understand the overall dynamics of the system in the long run, its equilibrium points have been obtained, their stability has been analyzed, and conditions for their stability have been outlined.
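The threshold behavior such epidemic frameworks exhibit can be illustrated with a minimal SIR-style sketch (a single-population toy, not the paper's two-sub-framework model; all rates are illustrative):

```python
def simulate_attack(beta, gamma, s0=0.99, i0=0.01, dt=0.001, steps=20000):
    """Euler integration of a minimal SIR-style attack model: S is the
    susceptible fraction of hosts, I the compromised fraction, beta the
    infection rate and gamma the cleanup/recovery rate. The threshold
    R0 = beta/gamma determines whether the attack takes off (R0 > 1)
    or dies out (R0 < 1). Returns the peak compromised fraction."""
    s, i = s0, i0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return peak

peak_super = simulate_attack(beta=0.5, gamma=0.1)   # R0 = 5: outbreak
peak_sub = simulate_attack(beta=0.05, gamma=0.1)    # R0 = 0.5: dies out
```

The equilibrium and stability analysis in the paper generalizes exactly this kind of threshold condition to the two coupled sub-frameworks.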
Advanced EMI models for survey data processing: targets detection and classification
NASA Astrophysics Data System (ADS)
Shubitidze, F.; Sigman, J. B.; Wang, Yinlin; Miller, J.; Keranen, J.; Shamatava, I.; Barrowes, B. E.; O'Neill, K.
2014-06-01
One of the most challenging aspects of survey data processing is target selection. The fundamental input for classification is dynamic data collected along survey lines. These data differ from the static data obtained in cued mode and used for target classification. Survey data are typically collected using just one transmitter loop (the Z-axis loop) and feature short data-point collection times and short decay transients. The collection intervals for each data point are typically 0.1 s, and the signal repetition rates are typically 90 or 270 Hz (in other words, the transient decay times are 2.7 ms or 0.9 ms). Reliable classification requires illumination from multiple sides and angles; i.e., to conduct reliable classification it is necessary to combine and jointly invert multiple data points. However, picking data points that provide optimal information for classifying targets is a difficult task. The traditional method plots signal amplitudes on a 2D map and picks peaks of signal level without properly accounting for the underlying physics. In this paper, the joint diagonalization (JD) technique is applied to survey data sets to improve data pre-processing and target picking. The JD technique is an EMI data analysis and target classification technique applicable to all next-generation multi-static array EMI sensors. The method extracts the eigenvalues of the multi-static response data matrix. The eigenvalues are the main characteristics of the data. Recent studies have demonstrated that the method has great potential to quickly estimate the number of potential targets and, moreover, to classify these targets at the data pre-processing stage, in real time and without the need for a forward model. Another advantage of JD is that it provides the ability to separate signal from noise, making it possible to de-noise data without distorting the signal from the targets. In this paper the JD technique is used to process dynamic data collected at South West Proving Ground and Aberdeen Proving
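The eigenvalue extraction at the heart of the JD step can be illustrated on a toy 2x2 symmetric multi-static response (MSR) matrix; real sensors produce much larger matrices and the numbers here are hypothetical:

```python
import math

def msr_eigenvalues_2x2(m11, m12, m22):
    """Closed-form eigenvalues of a symmetric 2x2 multi-static response
    matrix [[m11, m12], [m12, m22]]. In JD-style processing, the number
    of eigenvalues above the noise floor estimates the number of
    potential targets, without any forward model."""
    tr = m11 + m22
    det = m11 * m22 - m12 * m12
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + disc, tr / 2.0 - disc

# One strong scatterer plus background: one dominant eigenvalue.
lam1, lam2 = msr_eigenvalues_2x2(5.0, 2.0, 1.0)
noise_floor = 0.5
n_targets = sum(1 for lam in (lam1, lam2) if lam > noise_floor)
```

Eigenvalues below the noise floor are discarded, which is how the same decomposition supports de-noising without distorting the target signal.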
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements.
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
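The core updating computation can be sketched as a one-dimensional linear Kalman filter, a toy stand-in for the paper's dual-EKF radial-basis-function network; all numbers are illustrative:

```python
def spatial_update(x, P, eye_shift, Q=0.01):
    """Predict step: after an eye movement of eye_shift degrees, the
    gaze-centered remembered target position shifts by -eye_shift
    (remapping), and the uncertainty P grows by process noise Q."""
    return x - eye_shift, P + Q

def measure(x, P, z, R=0.05):
    """Measurement update when the target is briefly re-seen at z,
    with measurement noise R."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# A target remembered 10 deg right of gaze; a 4-deg rightward saccade
# moves the memory to 6 deg in gaze-centered coordinates, with growing
# uncertainty until the target is re-seen.
x, P = spatial_update(10.0, 0.1, eye_shift=4.0)
```

The growth of P across movements is the linear analogue of the paper's prediction that receptive fields expand when input uncertainty is incorporated.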
Comparative modeling: the state of the art and protein drug target structure prediction.
Liu, Tianyun; Tang, Grace W; Capriotti, Emidio
2011-07-01
The goal of computational protein structure prediction is to provide three-dimensional (3D) structures with resolution comparable to experimental results. Comparative modeling, which predicts the 3D structure of a protein based on its sequence similarity to homologous structures, is the most accurate computational method for structure prediction. In the last two decades, significant progress has been made on comparative modeling methods. Using the large number of protein structures deposited in the Protein Data Bank (~65,000), automatic prediction pipelines are generating a tremendous number of models (~1.9 million) for sequences whose structures have not been experimentally determined. Accurate models are suitable for a wide range of applications, such as prediction of protein binding sites, prediction of the effect of protein mutations, and structure-guided virtual screening. In particular, comparative modeling has enabled structure-based drug design against protein targets with unknown structures. In this review, we describe the theoretical basis of comparative modeling, the available automatic methods and databases, and the algorithms to evaluate the accuracy of predicted structures. Finally, we discuss relevant applications in the prediction of important drug target proteins, focusing on the G protein-coupled receptor (GPCR) and protein kinase families.
Cross-simulation between two pharmacokinetic models for the target-controlled infusion of propofol
Kim, Jong-Yeop; Kim, Dae-Hee; Lee, A-Ram; Moon, Bong-Ki
2012-01-01
Background: We investigated how one pharmacokinetic (PK) model differs in its prediction of plasma (Cp) and effect-site (Ceff) concentrations when reproducing a simulation of target-controlled infusion (TCI) performed with another PK model of propofol. Methods: Sixty female patients were randomly assigned to TCI using the Marsh PK model (Group M) or TCI using the Schnider PK model (Group S), targeting a propofol Cp of 6.0 µg/ml for induction of anesthesia, and loss of responsiveness (LOR) was evaluated. Cross-simulations were performed using the 2 hr TCI data (Marsh TCI and Schnider TCI), and we examined the predicted concentrations reproduced with the other model (MARSHSCH and SCHNIDERMAR). The correlation of the difference with covariates, and the influence of the PK parameters on the difference in prediction, were investigated. Results: Group M had a shorter time to LOR than Group S (P < 0.001), but Ceff at LOR was not different between groups. Reproduced simulations showed different time courses of Cp: MARSHSCH predicted a higher concentration during the early phase, whereas SCHNIDERMAR was maintained at a higher concentration. The volume and clearance of the central compartment were relevant to the difference in prediction. Body weight correlated well with the differences in prediction between models (Rsqr = 0.9821, P < 0.001). Conclusions: We compared the two PK models to characterize the different infusion behaviors during TCI, which resulted from the different parameter sets of each PK model. PMID:22558495
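The prediction machinery behind such TCI cross-simulations is a mammillary three-compartment model. A minimal Euler-integration sketch, with placeholder rate constants rather than the actual Marsh or Schnider parameter sets:

```python
def simulate_cp(rate_mg_min, minutes, V1=10.0, k10=0.1, k12=0.1,
                k21=0.05, k13=0.04, k31=0.003, dt=0.01):
    """Plasma concentration under a constant infusion into a generic
    three-compartment model: a1 central, a2/a3 peripheral amounts (mg).
    The rate constants k10 (elimination) and k12/k21, k13/k31
    (inter-compartment transfer) are illustrative placeholders."""
    a1 = a2 = a3 = 0.0
    for _ in range(int(minutes / dt)):
        da1 = rate_mg_min - (k10 + k12 + k13) * a1 + k21 * a2 + k31 * a3
        da2 = k12 * a1 - k21 * a2
        da3 = k13 * a1 - k31 * a3
        a1 += da1 * dt
        a2 += da2 * dt
        a3 += da3 * dt
    return a1 / V1  # Cp in mg/L (numerically equal to ug/ml)

cp_2min = simulate_cp(rate_mg_min=40.0, minutes=2.0)
cp_10min = simulate_cp(rate_mg_min=40.0, minutes=10.0)
```

Two parameter sets fed the same infusion history produce different Cp(t) curves through exactly this mechanism, which is why central-compartment volume and clearance dominate the cross-model differences the paper reports.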
Lescarbeau, Rebecca S; Lei, Liang; Bakken, Katrina K; Sims, Peter A; Sarkaria, Jann N; Canoll, Peter; White, Forest M
2016-06-01
Glioblastoma (GBM) is the most common malignant primary brain cancer. With a median survival of about a year, new approaches to treating this disease are necessary. To identify signaling molecules regulating GBM progression in a genetically engineered murine model of proneural GBM, we quantified phosphotyrosine-mediated signaling using mass spectrometry. Oncogenic signals, including phosphorylated ERK MAPK, PI3K, and PDGFR, were found to be increased in the murine tumors relative to brain. Phosphorylation of CDK1 pY15, associated with the G2 arrest checkpoint, was identified as the most differentially phosphorylated site, with a 14-fold increase in phosphorylation in the tumors. To assess the role of this checkpoint as a potential therapeutic target, syngeneic primary cell lines derived from these tumors were treated with MK-1775, an inhibitor of Wee1, the kinase responsible for CDK1 Y15 phosphorylation. MK-1775 treatment led to mitotic catastrophe, as defined by increased DNA damage and cell death by apoptosis. To assess the extensibility of targeting Wee1/CDK1 in GBM, patient-derived xenograft (PDX) cell lines were also treated with MK-1775. Although the response was more heterogeneous, on-target Wee1 inhibition led to decreased CDK1 Y15 phosphorylation and increased DNA damage and apoptosis in each line. These results were also validated in vivo, where single-agent MK-1775 demonstrated an antitumor effect on a flank PDX tumor model, increasing mouse survival by 1.74-fold. This study highlights the ability of unbiased quantitative phosphoproteomics to reveal therapeutic targets in tumor models, and the potential for Wee1 inhibition as a treatment approach in preclinical models of GBM. Mol Cancer Ther; 15(6); 1332-43. ©2016 AACR. PMID:27196784
Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.
Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W
2012-01-01
Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
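The global three-valued reinforcement signal can be caricatured in a few lines. This is a drastic simplification (a win-stay/lose-shift rule on a one-degree-per-step arm), not the spiking network or the spike-timing eligibility traces of the paper:

```python
def reach(target, start=0.0, steps=200):
    """Drive a one-joint 'arm' toward a target angle using only a global
    scalar signal: reward (+1, distance shrank) keeps the current
    movement direction, punishment (-1, distance grew) reverses it."""
    angle, direction = start, 1.0
    for _ in range(steps):
        prev_err = abs(target - angle)
        angle += direction              # move one degree per step
        err = abs(target - angle)
        if err > prev_err:              # punishment: reverse direction
            direction = -direction
        # err == prev_err corresponds to the "no learning" (0) case
        if err <= 0.5:                  # close enough: stop moving
            break
    return angle
```

Crude as it is, the rule reaches any target, and it illustrates the paper's finding that both signs of the signal matter: without the reversal on punishment the arm would overshoot indefinitely.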
Optical model analyses of galactic cosmic ray fragmentation in hydrogen targets
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.
1993-01-01
Quantum-mechanical optical model methods for calculating cross sections for the fragmentation of galactic cosmic ray nuclei by hydrogen targets are presented. The fragmentation cross sections are calculated with an abrasion-ablation collision formalism. Elemental and isotopic cross sections are estimated and compared with measured values for neon, sulfur, and calcium ions at incident energies between 400A MeV and 910A MeV. Good agreement between theory and experiment is obtained.
NASA Astrophysics Data System (ADS)
Pasquini, B.; Schweitzer, P.
2014-07-01
At leading twist the transverse momentum dependent parton distributions of the pion consist of two functions, the unpolarized distribution f_{1,π}(x, k_⊥^2) and the Boer-Mulders function h_{1,π}^⊥(x, k_⊥^2). We study both functions within a light-front constituent model of the pion, comparing the results with different pion models and the corresponding nucleon distributions from a light-front constituent model. After evolution from the model scale to the relevant experimental scales, the results for the collinear pion valence parton distribution function f_{1,π}(x) are in very good agreement with available parametrizations. Using the light-front constituent model results for the Boer-Mulders functions of the pion and nucleon, we calculate the coefficient ν in the angular distribution of Drell-Yan dileptons produced in pion-nucleus scattering, which is responsible for the violation of the Lam-Tung relation. We find a good agreement with the data, and carefully discuss the range of applicability of our approach.
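For reference, the coefficient ν referred to here enters the standard Drell-Yan dilepton angular distribution, and the Lam-Tung relation ties it to λ (standard notation, not reproduced from this abstract):

```latex
% Drell-Yan angular distribution (Collins-Soper frame)
\frac{d\sigma}{d\Omega} \propto
  1 + \lambda\cos^2\theta
    + \mu\,\sin 2\theta\,\cos\phi
    + \frac{\nu}{2}\,\sin^2\theta\,\cos 2\phi ,
\qquad
1 - \lambda = 2\nu \quad \text{(Lam-Tung relation)} .
```

In the naive parton model λ = 1 and ν = 0, consistent with the relation; a convolution of the pion and nucleon Boer-Mulders functions generates a ν that violates it, which is what the light-front model calculation above targets.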
Lappi, T; Mäntysaari, H; Venugopalan, R
2015-02-27
We argue that the proton multiplicities measured in Roman pot detectors at an electron ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multiparton Fock states in the nuclear wave functions. In particular, the saturation scale that characterizes this multiparton dynamics is significantly larger in central events relative to minimum bias events. As an application, we study the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality triggered parton fluctuations and insensitive to details of the model. PMID:25768758
Infrared transmission through cirrus clouds: a radiative model for target detection.
Liou, K N; Takano, Y; Ou, S C; Heymsfield, A; Kreiss, W
1990-05-01
An IR transmission model for thin and subvisual cirrus clouds composed of hexagonal ice crystals, with a specific use for target detection, has been developed. The present model includes parameterizations of the ice crystal size distribution and the position of cirrus clouds in terms of ambient temperature. To facilitate the scattering and absorption calculations for hexagonal column and plate crystals in connection with transmission calculations, we have developed parameterized equations for their single scattering properties by using the results computed from a geometric ray-tracing program. The successive order-of-scattering approach has been used to account for multiple scattering of ice crystals associated with a target-detector system. The direct radiance, path radiance, and radiances produced by multiple scattering and background radiation involving cirrus clouds have been computed for the 3.7- and 10-μm wavelengths. We show that the background radiance at the 3.7-μm wavelength is relatively small, so that a high contrast may be obtained using this wavelength for the detection of airborne and ground-based objects in the presence of thin cirrus clouds. Finally, using the present model, including a simple prediction scheme for the ice crystal size distribution and cloud position, the transmission of infrared radiation through cirrus clouds can be efficiently evaluated if the target-detector geometry is defined.
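The direct-radiance component of such a model reduces to Beer's law along the slant path. A minimal sketch (the optical depth is illustrative, and the path-radiance and multiply scattered terms treated in the paper are omitted):

```python
import math

def direct_transmittance(tau, zenith_deg):
    """Direct (unscattered) transmittance of a cloud layer of optical
    depth tau along a slant path at the given zenith angle:
    T = exp(-tau / cos(theta)), i.e. Beer's law with air-mass factor
    1/cos(theta)."""
    mu = math.cos(math.radians(zenith_deg))
    return math.exp(-tau / mu)

# A subvisual cirrus layer (tau ~ 0.03) barely attenuates a vertical
# path; a 60-deg slant path doubles the effective optical depth.
t_vertical = direct_transmittance(0.03, 0.0)
t_slant = direct_transmittance(0.03, 60.0)
```

The target-detector geometry enters through the slant angle, which is why the paper emphasizes that transmission can be evaluated efficiently once that geometry is defined.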
NASA Astrophysics Data System (ADS)
Azimov, D.; Bishop, R.
2012-09-01
A complete analytical integration of the kinematic and dynamic equations of motion, and applications of their integrals to targeting and guidance schemes for various dynamical models of flight vehicles, are presented. The general integral of these equations consists of six independent first integrals of motion and describes a class of non-steady flight trajectories in a maneuver plane. These first integrals represent explicit relationships for time, the components of the position and velocity vectors, and the propulsive and aerodynamic accelerations. This explicitness with respect to the problem parameters can make these relationships useful in the design of airspace trajectories and of targeting and guidance schemes. It is also shown that the equations represent a 3rd-order vector differential equation used to develop the nonlinear maneuver model of a flight vehicle, and the state estimation and prediction schemes. Similarity in the dynamical models makes the first integrals valid for re-entry vehicles and missiles. An illustrative example shows that the general integral provides a complete set of analytical solutions for nonlinear tracking, targeting, guidance, and control problems with a wide range of terminal conditions and accelerations due to propulsive thrust and aerodynamic forces.
A linear-encoding model explains the variability of the target morphology in regeneration
Lobo, Daniel; Solano, Mauricio; Bubenik, George A.; Levin, Michael
2014-01-01
A fundamental assumption of today's molecular genetics paradigm is that complex morphology emerges from the combined activity of low-level processes involving proteins and nucleic acids. An inherent characteristic of such nonlinear encodings is the difficulty of creating the genetic and epigenetic information that will produce a given self-assembling complex morphology. This ‘inverse problem’ is vital not only for understanding the evolution, development and regeneration of bodyplans, but also for synthetic biology efforts that seek to engineer biological shapes. Importantly, the regenerative mechanisms in deer antlers, planarian worms and fiddler crabs can solve an inverse problem: their target morphology can be altered specifically and stably by injuries in particular locations. Here, we discuss the class of models that use pre-specified morphological goal states and propose the existence of a linear encoding of the target morphology, making the inverse problem easy for these organisms to solve. Indeed, many model organisms such as Drosophila, hydra and Xenopus also develop according to nonlinear encodings producing linear encodings of their final morphologies. We propose the development of testable models of regeneration regulation that combine emergence with a top-down specification of shape by linear encodings of target morphology, driving transformative applications in biomedicine and synthetic bioengineering. PMID:24402915
Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition
NASA Astrophysics Data System (ADS)
Blasch, Erik; Seetharaman, Guna; Darema, Frederica
2013-05-01
The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems, which warrants an integrated perspective. This preliminary work is aimed at triggering deeper, more insightful research toward exploiting sparsely sampled, piecewise-dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenge of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.
Xing, Rui; Chen, Xue-Dong; Zhou, Yan-Feng; Zhang, Jue; Su, Yuan-Yuan; Qiu, Jian-Feng; Sima, Yang-Hu; Zhang, Ke-Qin; He, Yao; Xu, Shi-Qing
2016-01-01
The use of quantum dots (QDs) in biological imaging applications and targeted drug delivery is expected to increase. However, the efficiency of QDs in drug targeting needs to be improved. Here, we show that amino acids linked to CdTe QDs significantly increased the targeted transfer efficiency and biological safety in the invertebrate model Bombyx mori. Compared with bare QDs530, the transfer efficiency of Ala- and Gly-conjugated QDs (QDs530-Ala and QDs530-Gly) in circulatory system increased by 2.6 ± 0.3 and 1.5 ± 0.3 times, and increased by 7.8 ± 0.9 and 2.9 ± 0.2 times in target tissue silk glands, respectively, after 24 h of QDs exposure. Meanwhile, the amount of conjugated QDs decreased by (68.4 ± 4.4)% and (46.7 ± 9.1)% in the non-target tissue fat body, and the speed at which they entered non-target circulating blood cells significantly decreased. The resultant QDs530-Ala revealed a better structural integrity in tissues and a longer retention time in hemolymph than that of QDs530 after exposure via the dorsal vessel. On the other hand, QDs530-Ala significantly reduced the toxicity to hemocytes, silk gland, and fat body, and reduced the amount of reactive oxygen species (ROS) in tissues. PMID:26806642
Construction of a mouse model of factor VIII deficiency by gene targeting
Bi, L.; Lawler, A.; Gearhart, J.
1994-09-01
To develop a small animal model of hemophilia A for gene therapy experiments, we set out to construct a mouse model for factor VIII deficiency by gene targeting. First, we screened a mouse liver cDNA library using a human FVIII cDNA probe. We cloned a 2.6 Kb partial mouse factor VIII cDNA which extends from 800 base pairs of the 3′ end of exon 14 to the 5′ end of exon 26. A mouse genomic library made from strain 129 was then screened to obtain genomic fragments covering the exons desired for homologous recombination. Two genomic clones were obtained, and one covering exon 15 through 22 was used for gene targeting. To make gene targeting constructs, a 5.8 Kb genomic DNA fragment covering exons 15 to 19 of the mouse FVIII gene was subcloned, and the neo expression cassette was inserted into exons 16 and 17 separately by different strategies. These two constructs were named MFVIIIC-16 and MFVIIIC-17. The constructs were linearized and transfected into strain 129 mouse ES cells by electroporation. Factor VIII gene-knockout ES cell lines were selected by G-418 and screened by genomic Southern blots. Eight exon 16 targeted cell lines and five exon 17 targeted cell lines were obtained. Three cell lines from each construct were injected into blastocysts and surgically transferred into foster mothers. Multiple chimeric mice with 70-90% hair color derived from the ES-cell genotype were seen with both constructs. Germ line transmission of the ES-cell genotype has been obtained for the MFVIIIC-16 construct, and multiple hemophilia A carrier females have been identified. Factor VIII-deficient males will be conceived soon.
[Relevance of animal models in the development of compounds targeting multidrug resistant cancer].
Füredi, András; Tóth, Szilárd; Hámori, Lilla; Nagy, Veronika; Tóvári, József; Szakács, Gergely
2015-12-01
Anticancer compounds are typically identified in in vitro screens. Unfortunately, the in vitro drug sensitivity of cell lines does not reflect treatment efficiency in animal models, nor does it show acceptable correlation with clinical results. While cell lines and laboratory animals can be readily "cured", the treatment of malignancies remains hampered by the multidrug resistance (MDR) of tumors. Genetically engineered mouse models (GEMMs) giving rise to spontaneous tumors offer a new possibility to characterize the evolution of drug resistance mechanisms and to target multidrug resistant cancer. PMID:26665195
The last glacial termination: targets for climate modelling and proxy-based reconstructions
NASA Astrophysics Data System (ADS)
Renssen, Hans; Blockley, Simon P.; Rasmussen, Sune O.; Roche, Didier M.; Valdes, Paul J.; Nisancioglu, Kerim M.; Working Group 3 Members Of Intimate
2013-04-01
During the last glacial termination, the climate system experienced a major reorganisation, making this time interval a crucial period for our understanding of climate change. Despite a basic understanding of these changes and a reasonable level of agreement between data and model simulations, a deeper understanding of the last glacial termination remains a long-standing goal: we are still faced with the dual challenge of reconstructing the climate history from incomplete and uncertain proxy data, and accurately simulating the climate history with physics-based climate models. There are, however, significant advantages in attempting to reliably integrate palaeoclimate data with model simulations, not least because it is necessary to examine the limitations of both current models and palaeoclimate records before testing possible forcing mechanisms. For the model studies, palaeodata play a crucial role, both as a source of (1) climate forcings for the modelling experiments and (2) palaeoclimate information that is required for model evaluation. Therefore, interaction between the modelling and data communities is essential. For this purpose, and with the last termination as a target, a working group has been set up within the INTIMATE (INTegration of Ice core, MArine and TErrestrial records of the last termination) COST Action (http://cost-es0907.geoenvi.org). We report on the outcome of a workshop of this working group, discussing the state of knowledge of the forcings and various aspects of climate variability during the last termination. We focus in particular on the main uncertainties in the climate signals and the forcings. We discuss the major problems that must be solved to make further progress in our understanding. This requires a joint effort of the palaeodata, chronology, and climate modelling communities. A number of specific targets for these communities are identified.
Höche, Stefan; Schönherr, Marek
2012-11-01
We quantify uncertainties in the Monte Carlo simulation of inclusive and dijet final states, which arise from using the MC@NLO technique for matching next-to-leading order parton-level calculations and parton showers. We analyse a large variety of data from early measurements at the LHC. In regions of phase space where Sudakov logarithms dominate over high-energy effects, we observe that the main uncertainty can be ascribed to the free parameters of the parton shower. In complementary regions, the main uncertainty stems from the considerable freedom in the simulation of underlying events.
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. A test of visual target tracking is one of the effective methods for analyzing human motor functions. We have previously examined, through a simulation study, the possibility of improving hand movement on visual target tracking by additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated through experiments with four healthy adults. The proposed compensator precisely improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed using measurement data on visual target tracking for each subject. The properties of the hand movement of different subjects can thus be reflected in the structure of the compensator. Therefore, the proposed method has the potential to accommodate the individual properties of patients with various movement disorders caused by brain dysfunction.
Target-mediated drug disposition model and its approximations for antibody-drug conjugates.
Gibiansky, Leonid; Gibiansky, Ekaterina
2014-02-01
Antibody-drug conjugate (ADC) is a complex structure composed of an antibody linked to several molecules of a biologically active cytotoxic drug. The number of ADC compounds in clinical development now exceeds 30, with two of them already on the market. However, there is no rigorous mechanistic model that describes pharmacokinetic (PK) properties of these compounds. PK modeling of ADCs is even more complicated than that of other biologics as the model should describe distribution, binding, and elimination of antibodies with different toxin load, and also the deconjugation process and PK of the released toxin. This work extends the target-mediated drug disposition (TMDD) model to describe ADCs, derives the rapid binding (quasi-equilibrium), quasi-steady-state, and Michaelis-Menten approximations of the TMDD model as applied to ADCs, derives the TMDD model and its approximations for ADCs with load-independent properties, and discusses further simplifications of the system under various assumptions. The developed models are shown to describe data simulated from the available clinical population PK models of trastuzumab emtansine (T-DM1), one of the two currently approved ADCs. Identifiability of model parameters is also discussed and illustrated on the simulated T-DM1 examples.
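The basic single-target TMDD system that the paper extends can be sketched numerically. The Python fragment below is a minimal illustration of the standard TMDD equations (free drug C, free target R, complex RC), not the ADC-specific model derived in the paper; all rate constants and initial values here are hypothetical placeholders.

```python
def tmdd_step(state, p, dt):
    """One explicit Euler step of the basic TMDD system:
       dC/dt  = -kel*C - kon*C*R + koff*RC          (free drug)
       dR/dt  =  ksyn - kdeg*R - kon*C*R + koff*RC  (free target)
       dRC/dt =  kon*C*R - koff*RC - kint*RC        (drug-target complex)"""
    C, R, RC = state
    bind = p['kon'] * C * R - p['koff'] * RC  # net binding flux
    dC = -p['kel'] * C - bind
    dR = p['ksyn'] - p['kdeg'] * R - bind
    dRC = bind - p['kint'] * RC
    return C + dC * dt, R + dR * dt, RC + dRC * dt

def simulate_tmdd(C0, p, t_end, dt=1e-3):
    """Integrate from an initial drug concentration C0, with the target
    starting at its synthesis/degradation baseline ksyn/kdeg and no
    complex present."""
    state = (C0, p['ksyn'] / p['kdeg'], 0.0)
    for _ in range(int(t_end / dt)):
        state = tmdd_step(state, p, dt)
    return state
```

A quasi-equilibrium or quasi-steady-state approximation of the kind discussed in the abstract would replace the fast kon/koff terms with an algebraic binding relation, reducing the number of parameters that must be identified from data.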
Identification of Treatment Targets in a Genetic Mouse Model of Voluntary Methamphetamine Drinking.
Phillips, T J; Mootz, J R K; Reed, C
2016-01-01
Methamphetamine has powerful stimulant and euphoric effects that are experienced as rewarding and encourage use. Methamphetamine addiction is associated with debilitating illnesses, destroyed relationships, child neglect, violence, and crime; but after many years of research, broadly effective medications have not been identified. Individual differences that may impact not only risk for developing a methamphetamine use disorder but also affect treatment response have not been fully considered. Human studies have identified candidate genes that may be relevant, but lack of control over drug history, the common use or coabuse of multiple addictive drugs, and restrictions on the types of data that can be collected in humans are barriers to progress. To overcome some of these issues, a genetic animal model comprised of lines of mice selectively bred for high and low voluntary methamphetamine intake was developed to identify risk and protective alleles for methamphetamine consumption, and identify therapeutic targets. The mu opioid receptor gene was supported as a target for genes within a top-ranked transcription factor network associated with level of methamphetamine intake. In addition, mice that consume high levels of methamphetamine were found to possess a nonfunctional form of the trace amine-associated receptor 1 (TAAR1). The Taar1 gene is within a mouse chromosome 10 quantitative trait locus for methamphetamine consumption, and TAAR1 function determines sensitivity to aversive effects of methamphetamine that may curb intake. The genes, gene interaction partners, and protein products identified in this genetic mouse model represent treatment target candidates for methamphetamine addiction. PMID:27055611
Tuberculosis control in China: use of modelling to develop targets and policies.
Lin, Hsien-Ho; Wang, Lixia; Zhang, Hui; Ruan, Yunzhou; Chin, Daniel P; Dye, Christopher
2015-11-01
It is unclear if current programmes in China can achieve the post-2015 global targets for tuberculosis - 50% reduction in incidence and a 75% reduction in mortality by 2025. Chinese policy-makers need to maintain the recent decline in the prevalence of tuberculosis, while revising control policies to cope with an epidemic of drug-resistant tuberculosis and the effects of ongoing health reform. Health reforms are expected to shift patients from tuberculosis dispensaries to designated hospitals. We developed a mathematical model of tuberculosis control in China to help set appropriate targets and prioritize interventions that might be implemented in the next 10 years. This model indicates that, even under the most optimistic scenario - improved treatment in tuberculosis dispensaries, introduction of a new effective regimen for the treatment of drug-susceptible tuberculosis and optimal care of cases of multidrug-resistant tuberculosis - the current global targets for tuberculosis are unlikely to be reached. However, reductions in the incidence of multidrug-resistant tuberculosis should be feasible. We conclude that a shift of patients from tuberculosis dispensaries to designated hospitals is likely to hamper efforts at tuberculosis control if cure rates in the designated hospitals cannot be maintained at a high level. Our results can inform the planning of tuberculosis control in China.
Modeling human echolocation of near-range targets with an audible sonar.
Kuc, Roman; Kuc, Victor
2016-02-01
Blind humans echolocate nearby targets by emitting palatal clicks and perceiving echoes that the auditory system is not able to resolve temporally. The mechanism for perceiving near-range echoes is not known. This paper models the direct mouth-to-ear signal (MES) and the echo to show that the echo enhances the high-frequency components in the composite MES/echo signal with features that allow echolocation. The mouth emission beam narrows with increasing frequency and exhibits frequency-dependent transmission notches in the backward direction toward the ears as predicted by the piston-in-sphere model. The ears positioned behind the mouth detect a MES that contains predominantly the low frequencies contained in the emission. Hence the high-frequency components in the emission that are perceived by the ears are enhanced by the echoes. A pulse/echo audible sonar verifies this model by echolocating targets from 5 cm range, where the MES and echo overlap significantly, to 55 cm. The model predicts that unambiguous ranging occurs over a limited range and that there is an optimal range that produces the highest range resolution. PMID:26936542
PROPERTIES OF 42 SOLAR-TYPE KEPLER TARGETS FROM THE ASTEROSEISMIC MODELING PORTAL
Metcalfe, T. S.; Mathur, S.; Creevey, O. L.; Doğan, G.; Christensen-Dalsgaard, J.; Karoff, C.; Trampedach, R.; Xu, H.; Bedding, T. R.; Benomar, O.; Chaplin, W. J.; Campante, T. L.; Davies, G. R.; Brown, B. P.; Buzasi, D. L.; Çelik, Z.; Cunha, M. S.; Deheuvels, S.; Derekas, A.; Mauro, M. P. Di; and others
2014-10-01
Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal. We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass, and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.
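For context, the empirical scaling relations that the paper uses as a precision baseline can be written down directly. The snippet below implements the standard νmax/Δν scaling relations for stellar radius and mass; the solar reference values shown are commonly used ones, and individual studies adopt slightly different calibrations.

```python
# Commonly used solar reference values (calibrations vary by study).
NU_MAX_SUN = 3090.0    # frequency of maximum power, muHz
DELTA_NU_SUN = 135.1   # large frequency separation, muHz
TEFF_SUN = 5777.0      # effective temperature, K

def scaling_radius(nu_max, delta_nu, teff):
    """R/Rsun from the asteroseismic scaling relations."""
    return ((nu_max / NU_MAX_SUN)
            * (delta_nu / DELTA_NU_SUN) ** -2
            * (teff / TEFF_SUN) ** 0.5)

def scaling_mass(nu_max, delta_nu, teff):
    """M/Msun from the asteroseismic scaling relations."""
    return ((nu_max / NU_MAX_SUN) ** 3
            * (delta_nu / DELTA_NU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)
```

By construction both relations return 1.0 for solar input values; the paper's point is that fitting individual mode frequencies improves on the precision of these relations by roughly a factor of three.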
Use of constraint-based modeling for the prediction and validation of antimicrobial targets.
Trawick, John D; Schilling, Christophe H
2006-03-30
The overall process of antimicrobial drug discovery and development seems simple: to cure infectious disease by identifying suitable antibiotic drugs. However, this goal has been difficult to fulfill in recent years. Despite the promise of the high-throughput innovations sparked by the genomics revolution, discovery and development of new antibiotics have lagged in recent years, exacerbating the already serious problem of the evolution of antibiotic resistance. Therefore, new antimicrobials are desperately needed, as are improvements to speed up or improve nearly all steps in the process of discovering novel antibiotics and bringing them to clinical use. Another product of the genomic revolution is the modeling of metabolism using computational methodologies. Genome-scale networks of metabolic reactions based on stoichiometry, thermodynamics, and other physico-chemical constraints that emulate microbial metabolism have been developed into valuable research tools in metabolic engineering and other fields. This constraint-based modeling is predictive in identifying critical reactions, metabolites, and genes in metabolism. This is extremely useful in determining and rationalizing cellular metabolic requirements. In turn, these methods can be used to predict potential metabolic targets for antimicrobial research, especially if used to increase confidence in the prioritization of metabolic targets. The many different capacities of constraint-based modeling also enable prediction of the cellular response to specific inhibitors such as antibiotics, and this may ultimately find a role in drug discovery and development. Herein, we describe the principles of metabolic modeling and how they might initially be applied to antimicrobial research.
Sengers, Bram G.; McGinty, Sean; Nouri, Fatma Z.; Argungu, Maryam; Hawkins, Emma; Hadji, Aymen; Weber, Andrew; Taylor, Adam; Sepp, Armin
2016-01-01
We have developed a mathematical framework for describing a bispecific monoclonal antibody interaction with two independent membrane-bound targets that are expressed on the same cell surface. The bispecific antibody in solution binds either of the two targets first, and then cross-links with the second one while on the cell surface, subject to a rate-limiting lateral diffusion step within the lifetime of the monovalently engaged antibody-antigen complex. At experimental densities, only a small fraction of the free targets is expected to lie within the reach of the antibody binding sites at any time. Using ordinary differential equation and Monte Carlo simulation-based models, we validated this approach against an independently published anti-CD4/CD70 DuetMab experimental data set. As a result of dimensional reduction, the cell surface reaction is expected to be so rapid that, in agreement with the experimental data, no monovalently bound bispecific antibody binary complexes accumulate until cross-linking is complete. The dissociation of the bispecific antibody from the ternary cross-linked complex is expected to be significantly slower than that from either of the monovalently bound variants. We estimate that the effective affinity of the bivalently bound bispecific antibody is enhanced by about 4 orders of magnitude over that of the monovalently bound species. This avidity enhancement allows for the highly specific binding of anti-CD4/CD70 DuetMab to the cells that are positive for both target antigens over those that express only one or the other. We suggest that the lateral diffusion of target antigens in the cell membrane also plays a key role in the avidity effect of natural antibodies and other bivalent ligands in their interactions with their respective cell surface receptors. PMID:27097222
Hassan, Syed Shah; Tiwari, Sandeep; Guimarães, Luís Carlos; Jamal, Syed Babar; Folador, Edson; Sharma, Neha Barve; de Castro Soares, Siomar; Almeida, Síntia; Ali, Amjad; Islam, Arshad; Póvoa, Fabiana Dias; de Abreu, Vinicius Augusto Carvalho; Jain, Neha; Bhattacharya, Antaripa; Juneja, Lucky; Miyoshi, Anderson; Silva, Artur; Barh, Debmalya; Turjanski, Adrian Gustavo; Azevedo, Vasco; Ferreira, Rafaela Salgado
2014-01-01
Corynebacterium pseudotuberculosis (Cp) is a pathogenic bacterium that causes caseous lymphadenitis (CLA), ulcerative lymphangitis, mastitis, and edematous skin disease in a broad spectrum of hosts, including ruminants, thereby threatening dairy and livestock industries worldwide. Currently there is no effective drug or vaccine available against Cp. To identify new targets, we adopted a novel integrative strategy, which began with the prediction of the modelome (the three-dimensional protein structures for the proteome of an organism, generated through comparative modeling) for 15 previously sequenced C. pseudotuberculosis strains. This pan-modelomics approach identified a set of 331 conserved proteins having 95-100% intra-species sequence similarity. Next, we combined subtractive proteomics and modelomics to reveal a set of 10 Cp proteins, which may be essential for the bacteria. Of these, 4 proteins (tcsR, mtrA, nrdI, and ispH) were essential and non-host homologs (considering man, horse, cow and sheep as hosts) and satisfied all criteria of being putative targets. Additionally, we subjected these 4 proteins to virtual screening of a drug-like compound library. In all cases, molecules predicted to form favorable interactions and which showed high complementarity to the target were found among the top ranking compounds. The remaining 6 essential proteins (adk, gapA, glyA, fumC, gnd, and aspA) have homologs in the host proteomes. Their active site cavities were compared to the respective cavities in host proteins. We propose that some of these proteins can be selectively targeted using structure-based drug design approaches (SBDD). Our results facilitate the selection of C. pseudotuberculosis putative proteins for developing broad-spectrum novel drugs and vaccines. A few of the targets identified here have been validated in other microorganisms, suggesting that our modelome strategy is effective and can also be applicable to other pathogens.
microRNAs: Emerging Targets Regulating Oxidative Stress in the Models of Parkinson's Disease
Xie, Yangmei; Chen, Yinghui
2016-01-01
Parkinson's disease (PD) is the second most common neurodegenerative disorder. This chronic, progressive disease is characterized by loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc) and the presence of cytoplasmic inclusions called Lewy bodies (LBs) in surviving neurons. PD is attributed to a combination of environmental and genetic factors, but the precise underlying molecular mechanisms remain elusive. Oxidative stress is generally recognized as one of the main causes of PD, and excessive reactive oxygen species (ROS) can lead to DA neuron vulnerability and eventual death. Several studies have demonstrated that small non-coding RNAs termed microRNAs (miRNAs) can regulate oxidative stress in in vitro and in vivo models of PD. Relevant miRNAs involved in oxidative stress can prevent ROS-mediated damage to DA neurons, suggesting that specific miRNAs may be targets for novel therapeutic strategies in PD. PMID:27445669
Joensuu, Heikki; DeMatteo, Ronald P
2012-01-01
Gastrointestinal stromal tumor (GIST) has become a model for targeted therapy in cancer. The vast majority of GISTs contain an activating mutation in either the KIT or platelet-derived growth factor A (PDGFRA) gene. GIST is highly responsive to several selective tyrosine kinase inhibitors. In fact, this cancer has been converted to a chronic disease in some patients. Considerable progress has been made recently in our understanding of the natural history and molecular biology of GIST, risk stratification, and drug resistance. Despite the efficacy of targeted therapy, though, surgery remains the only curative primary treatment and cures >50% of GIST patients who present with localized disease. Adjuvant therapy with imatinib prolongs recurrence-free survival and may improve overall survival. Combined or sequential use of tyrosine kinase inhibitors with other agents following tumor molecular subtyping is an attractive next step in the management of GIST. PMID:22017446
Hydrodynamic modeling of targeted magnetic-particle delivery in a blood vessel.
Weng, Huei Chu
2013-03-01
Since the flow of a magnetic fluid could easily be influenced by an external magnetic field, its hydrodynamic modeling promises to be useful for magnetically controllable delivery systems. It is desirable to understand the flow fields and characteristics before targeted magnetic particles arrive at their destination. In this study, we perform an analysis for the effects of particles and a magnetic field on biomedical magnetic fluid flow to study the targeted magnetic-particle delivery in a blood vessel. The fully developed solutions of velocity, flow rate, and flow drag are derived analytically and presented for blood with magnetite nanoparticles at body temperature. Results reveal that in the presence of magnetic nanoparticles, a minimum magnetic field gradient (yield gradient) is required to initiate the delivery. A magnetic driving force leads to the increase in velocity and has enhancing effects on flow rate and flow drag. Such a magnetic driving effect can be magnified by increasing the particle volume fraction.
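The qualitative effect described, a magnetic driving force enhancing flow rate, can be illustrated with a deliberately simplified sketch: treat the magnetic body force as an additional axial pressure gradient in Hagen-Poiseuille pipe flow. This is not the paper's full biomagnetic-fluid solution (which accounts for particle volume fraction and a yield gradient); the function and all numbers below are hypothetical.

```python
import math

def poiseuille_flow_rate(radius, viscosity, pressure_gradient,
                         magnetic_force_density=0.0):
    """Volumetric flow rate Q = pi * R^4 * (G + F_m) / (8 * mu),
    where the magnetic body force per unit volume F_m simply augments
    the axial pressure gradient G (a strong simplification of the
    paper's model)."""
    g_eff = pressure_gradient + magnetic_force_density
    return math.pi * radius**4 * g_eff / (8.0 * viscosity)
```

Because Q is linear in the effective gradient in this picture, a magnetic force density equal to half the pressure gradient raises the flow rate by exactly 50%, consistent with the abstract's observation that the magnetic driving force enhances flow rate and drag.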
Numerical study of penetration in ceramic targets with a multiple-plane model
Espinosa, H. D.; Yuan, G.; Dwivedi, S.; Zavattieri, P. D.
1998-07-10
The penetration mechanics in different material/structure systems has been investigated by numerical simulations with the finite element code EPIC95. A multi-plane microcracking model was implemented to simulate ceramic fragmentation and comminution. Two kinds of confined structures, depth-of-penetration (DOP) and interface-defeat (ID) configurations, were examined in the simulations. The results revealed that the penetration process is less dependent on the ceramic material than usually assumed by most investigators. By contrast, the penetration process is highly dependent on the multi-layered configuration and the target structural design (geometry and boundary conditions). From a simulation standpoint, we found that the selection of the erosion parameter plays an important role in predicting the deformation history and interaction of the penetrator with the target. These findings show that meaningful lightweight armor design can only be accomplished through a combined experimental/numerical study in which relevant ballistic materials and structures are simultaneously investigated.
Velena, Astrida; Zarkovic, Neven; Gall Troselj, Koraljka; Bisenieks, Egils; Krauze, Aivars; Poikans, Janis; Duburs, Gunars
2016-01-01
Many 1,4-dihydropyridines (DHPs) possess redox properties. In this review, DHPs are surveyed as protectors against oxidative stress (OS) and related disorders, considering DHPs as a specific group of potential antioxidants with bioprotective capacities. They have several peculiarities related to antioxidant activity (AOA). Several commercially available calcium antagonist 1,4-DHP drugs, their metabolites, and calcium agonists were shown to express AOA. Synthesis, hydrogen donor properties, AOA, and the methods and approaches used to reveal biological activities of various groups of 1,4-DHPs are presented. Examples of DHP antioxidant activities and protective effects of DHPs against OS-induced damage in low density lipoproteins (LDL), mitochondria, microsomes, isolated cells, and cell cultures are highlighted. A comparison of the AOA of different DHPs and other antioxidants is also given. According to the data presented, DHPs might be considered bellwethers among synthetic compounds targeting OS and potential pharmacological model compounds relevant to medicinal chemistry. PMID:26881016
Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.; Lindemuth, I.R.
1997-12-31
Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.
First Results from the Physics-Based Forecasting-Targeted Inner Heliosphere Model Euhforia
NASA Astrophysics Data System (ADS)
Pomoell, J.
2015-12-01
In this work, we present the first results of the new physics-based forecasting-targeted inner heliosphere model Euhforia ('European heliospheric forecasting information asset') that we are developing. Euhforia consists of a coronal model and a magnetohydrodynamic (MHD) heliosphere model with CMEs. The aim of the baseline coronal model is to produce realistic plasma conditions at the interface radius r = 0.1 AU between the two models, thus providing the necessary input to the time-dependent, three-dimensional MHD heliosphere model. It uses GONG synoptic line-of-sight magnetograms as input for a potential field source surface (PFSS) extrapolation of the low-coronal magnetic field, coupled to a current sheet (CS) model of the extended coronal magnetic field. The plasma variables at the interface radius are determined by employing semi-empirical considerations based on properties of the PFSS+CS field, such as the flux tube expansion factor and the distance to the nearest coronal hole. The heliosphere model computes the time-dependent evolution of the MHD variables from the interface radius typically up to 2 AU. Coronal mass ejections (CMEs) are injected at the interface radius using a hydrodynamic cone-like model with parameters constrained by fits to coronal imaging observations. In order to account for the modification of the heliosphere due to the presence of earlier CMEs, the standard run scenario includes CMEs launched five days prior to the start of the forecast, while the duration of the forecast extends up to seven days. In addition to presenting results of the modeling, we will highlight our on-going efforts to advance beyond the baseline in the forecasting pipeline. In particular, we discuss our path towards using magnetized CMEs, the application of a time-dependent coronal model, and modeling the transport of solar energetic particles (SEPs) in the heliosphere.
Abazov, Victor Mukhamedovich; Abbott, Braden Keim; Acharya, Bannanje Sripath; Adams, Mark Raymond; Adams, Todd; Alexeev, Guennadi D.; Alkhazov, Georgiy D.; Alton, Andrew K.; Alverson, George O.; Alves, Gilvan Augusto; Ancu, Lucian Stefan; /Nijmegen U. /Serpukhov, IHEP
2011-01-01
Samples of inclusive {gamma} + 2 jet and {gamma} + 3 jet events collected by the D0 experiment with an integrated luminosity of about 1 fb{sup -1} in p{bar p} collisions at {radical}s = 1.96 TeV are used to measure cross sections as a function of the angle in the plane transverse to the beam direction between the transverse momentum (p{sub T}) of the {gamma} + leading jet system (jets are ordered in p{sub T}) and p{sub T} of the other jet for {gamma} + 2 jet, or p{sub T} sum of the two other jets for {gamma} + 3 jet events. The results are compared to different models of multiple parton interactions (MPI) in the pythia and sherpa Monte Carlo (MC) generators. The data indicate a contribution from events with double parton (DP) interactions and are well described by predictions provided by the pythia MPI models with p{sub T}-ordered showers and by sherpa with the default MPI model. The {gamma} + 2 jet data are also used to determine the fraction of events with DP interactions as a function of the azimuthal angle and as a function of the second jet p{sub T}.
The 1994 Fermilab Fixed Target Program
Conrad, J. |
1994-11-01
This paper highlights the results of the Fermilab Fixed Target Program that were announced between October, 1993 and October, 1994. These results are drawn from 18 experiments that took data in the 1985, 1987 and 1990/91 fixed target running periods. For this discussion, the Fermilab Fixed Target Program is divided into 5 major topics: hadron structure, precision electroweak measurements, heavy quark production, polarization and magnetic moments, and searches for new phenomena. However, it should be noted that most experiments span several subtopics. Also, measurements within each subtopic often affect the results in other subtopics. For example, parton distributions from hadron structure measurements are used in the studies of heavy quark production.
Iles, LaKesla R; Bartholomeusz, Geoffrey A
2016-01-01
The intrinsic limitations of 2D monolayer cell culture models have prompted the development of 3D cell culture model systems for in vitro studies. Multicellular tumor spheroid (MCTS) models closely simulate the pathophysiological milieu of solid tumors and are providing new insights into tumor biology as well as differentiation, tissue organization, and homeostasis. They are straightforward to apply in high-throughput screens, and there is a great need for reliable and robust 3D spheroid-based assays for high-throughput RNAi screening for target identification and cell-signaling studies, highlighting their potential in cancer research and treatment. In this chapter we describe a stringent standard operating procedure for the use of MCTS in high-throughput RNAi screens. PMID:27581289
The role of animal models in unravelling therapeutic targets in coeliac disease.
Costes, Léa M M; Meresse, Bertrand; Cerf-Bensussan, Nadine; Samsom, Janneke N
2015-06-01
Coeliac disease is a complex small intestinal enteropathy that develops consequently to a breach of tolerance to gliadin, a storage protein abundantly found in cereals such as wheat, rye and barley. The understanding of the mechanisms underlying the development of coeliac disease in HLA-DQ2 and HLA-DQ8 genetically susceptible individuals has greatly improved during the last decades but so far did not allow to develop curative therapeutics, leaving a long-life gluten free diet as the only treatment option for the patients. In order to bring new therapeutic targets to light and to test the safety and efficacy of putative drugs, animal models recapitulating features of the disease are needed. Here, we will review the existing animal models and the clinical features of coeliac disease they reflect and discuss their relevance for modelling immune pathways that may lead to potential therapeutic approaches.
Tailored Pig Models for Preclinical Efficacy and Safety Testing of Targeted Therapies.
Klymiuk, Nikolai; Seeliger, Frank; Bohlooly-Y, Mohammad; Blutke, Andreas; Rudmann, Daniel G; Wolf, Eckhard
2016-04-01
Despite enormous advances in translational biomedical research, there remains a growing demand for improved animal models of human disease. This is particularly true for diseases where rodent models do not reflect the human disease phenotype. Compared to rodents, pig anatomy and physiology are more similar to humans in cardiovascular, immune, respiratory, skeletal muscle, and metabolic systems. Importantly, efficient and precise techniques for genetic engineering of pigs are now available, facilitating the creation of tailored large animal models that mimic human disease mechanisms at the molecular level. In this article, the benefits of genetically engineered pigs for basic and translational research are exemplified by a novel pig model of Duchenne muscular dystrophy and by porcine models of cystic fibrosis. Particular emphasis is given to potential advantages of using these models for efficacy and safety testing of targeted therapies, such as exon skipping and gene editing, for example, using the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated system. In general, genetically tailored pig models have the potential to bridge the gap between proof-of-concept studies in rodents and clinical trials in patients, thus supporting translational medicine.
Animal models and therapeutic molecular targets of cancer: utility and limitations.
Cekanova, Maria; Rathore, Kusum
2014-01-01
Cancer is the term used to describe over 100 diseases that share several common hallmarks. Despite prevention, early detection, and novel therapies, cancer is still the second leading cause of death in the USA. Successful bench-to-bedside translation of basic scientific findings about cancer into therapeutic interventions for patients depends on the selection of appropriate animal experimental models. Cancer research uses animal and human cancer cell lines in vitro to study biochemical pathways in these cancer cells. In this review, we summarize the important animal models of cancer with focus on their advantages and limitations. Mouse cancer models are well known, and are frequently used for cancer research. Rodent models have revolutionized our ability to study gene and protein functions in vivo and to better understand their molecular pathways and mechanisms. Xenograft and chemically or genetically induced mouse cancers are the most commonly used rodent cancer models. Companion animals with spontaneous neoplasms are still an underexploited tool for making rapid advances in human and veterinary cancer therapies by testing new drugs and delivery systems that have shown promise in vitro and in vivo in mouse models. Companion animals have a relatively high incidence of cancers, with biological behavior, response to therapy, and response to cytotoxic agents similar to those in humans. Shorter overall lifespan and more rapid disease progression are factors contributing to the advantages of a companion animal model. In addition, the current focus is on discovering molecular targets for new therapeutic drugs to improve survival and quality of life in cancer patients.
Kernel Target Alignment Parameter: A New Modelability Measure for Regression Tasks.
Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2016-01-25
In this paper, we demonstrate that the kernel target alignment (KTA) parameter can efficiently be used to estimate the relevance of molecular descriptors for QSAR modeling on a given data set, i.e., as a modelability measure. The efficiency of KTA to assess modelability was demonstrated in two series of QSAR modeling studies, either varying different descriptor spaces for one same data set, or comparing various data sets within one same descriptor space. Considered data sets included 25 series of various GPCR binders with ChEMBL-reported pKi values, and a toxicity data set. Employed descriptor spaces covered more than 100 different ISIDA fragment descriptor types, and ChemAxon BCUT terms. Model performances (RMSE) were seen to anticorrelate consistently with the KTA parameter. Two other modelability measures were employed for benchmarking purposes: the Jaccard distance average over the data set (Div), and a measure related to the normalized mean absolute error (MAE) obtained in 1-nearest neighbors calculations on the training set (Sim = 1 - MAE). It has been demonstrated that both Div and Sim perform similarly to KTA. However, a consensus index combining KTA, Div and Sim provides a more robust correlation with RMSE than any of the individual modelability measures.
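As a rough illustration, the KTA parameter is the Frobenius-cosine similarity between a descriptor kernel matrix and the ideal target kernel built from the property values. The sketch below follows the standard Cristianini et al. definition; the toy descriptors and the centering choice are assumptions, not the paper's exact protocol:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Cosine similarity (Frobenius inner product) between the
    descriptor kernel K and the ideal target kernel y y^T."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                      # center the property values
    Y = np.outer(y, y)                    # ideal target kernel
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# toy check: a linear kernel on a descriptor that equals the centered
# property reproduces the target kernel exactly, so KTA = 1
y = np.array([1.0, 2.0, 3.0, 4.0])
X = (y - y.mean()).reshape(-1, 1)
K = X @ X.T
print(kernel_target_alignment(K, y))  # -> 1.0
```

In the paper's setting, K would be built from molecular (e.g., ISIDA fragment) descriptors and y from the pKi or toxicity values; higher KTA would anticipate lower model RMSE.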
Chen, Jian-Yan; Yi, Ming; Yao, Shang-Long; Zhang, Xue-Ping
2016-06-01
This study aimed to establish a new propofol target-controlled infusion (TCI) model in animals in order to study the general anesthetic mechanism at multiple levels in vivo. Twenty Japanese white rabbits were enrolled and propofol (10 mg/kg) was administered intravenously. Artery blood samples were collected at various time points after injection, and plasma concentrations of propofol were measured. Pharmacokinetic modeling was performed using WinNonlin software. Propofol TCI incorporating the acquired parameters was conducted to achieve different anesthetic depths in rabbits, monitored by Narcotrend. The pharmacodynamics was analyzed using a sigmoidal inhibitory maximal effect model for the narcotrend index (NI) versus effect-site concentration. The results showed that the pharmacokinetics of propofol in Japanese white rabbits was best described by a two-compartment model. The target plasma concentration of propofol required at a light anesthetic depth was 9.77±0.23 μg/mL, versus 12.52±0.69 μg/mL at a deep anesthetic depth. NI was 76.17±4.25 at the light anesthetic depth and 27.41±5.77 at the deep anesthetic depth. The effect-site elimination rate constant (ke0) was 0.263/min, and the propofol dose required to achieve a 50% decrease in the NI value from baseline was 11.19 μg/mL (95% CI, 10.25-13.67). Our results established a new propofol TCI animal model and showed that the model controlled anesthetic depth accurately and stably in rabbits. The study provides a powerful method for exploring general anesthetic mechanisms at different anesthetic depths in vivo.
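The sigmoidal inhibitory Emax relation for NI versus effect-site concentration can be sketched as follows. Only Ce50 = 11.19 μg/mL is taken from the abstract; the baseline NI, the floor value, and the Hill coefficient are illustrative placeholders:

```python
def narcotrend_index(ce, e0=95.0, emax=0.0, ce50=11.19, gamma=3.0):
    """Sigmoidal inhibitory Emax model: NI falls from the awake
    baseline e0 toward emax as effect-site concentration ce rises.
    ce50 (11.19 ug/mL) is reported in the abstract; e0, emax and
    gamma are illustrative assumptions."""
    frac = ce**gamma / (ce50**gamma + ce**gamma)
    return e0 - (e0 - emax) * frac

print(narcotrend_index(0.0))    # -> 95.0 (awake baseline)
print(narcotrend_index(11.19))  # -> 47.5 (halfway to emax at ce50)
```

An effect-site compartment dCe/dt = ke0 (Cp - Ce), with the reported ke0 = 0.263/min, would link the TCI plasma concentration Cp to the ce used here.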
Sanchez, Diana T; Good, Jessica J; Chavez, George
2011-01-01
The present study examined the causal role of amount of Black ancestry in targets' perceived fit with Black prototypes and perceivers' categorization of biracial targets. Greater Black ancestry increased the likelihood that perceivers categorized biracial targets as Black and perceived targets as fitting Black prototypes (e.g., experiencing racial discrimination, possessing stereotypic traits). These results persisted, controlling for perceptions of phenotype that stem from ancestry information. Perceivers' beliefs about how society would categorize the biracial targets predicted perceptions of discrimination, whereas perceivers' beliefs about the targets' self-categorization predicted trait perceptions. The results of this study support the Black ancestry prototype model of affirmative action, which reveals the downstream consequences of Black ancestry for the distribution of minority resources (e.g., affirmative action) to biracial targets. PMID:21088283
DESHPANDE,A.; VOGELSANG, W.
2007-10-08
The determination of the polarized gluon distribution is a central goal of the RHIC spin program. Recent achievements in the polarization and luminosity of the proton beams in RHIC have enabled the RHIC experiments to acquire substantial amounts of high-quality data with polarized proton beams at 200 and 62.4 GeV center-of-mass energy, allowing a first glimpse of the polarized gluon distribution at RHIC. Short test operation at 500 GeV center-of-mass energy has also been successful, indicating the absence of any fundamental roadblocks for the measurements of polarized quark and anti-quark distributions planned at that energy in a couple of years. With this background, it is now time to consider how all these data sets may be employed most effectively to determine the polarized parton distributions in the nucleon in general, and the polarized gluon distribution in particular. A global analysis of the polarized DIS data from past and present fixed target experiments, jointly with the present and anticipated RHIC spin data, is needed.
Meng, Jun; Shi, Lin; Luan, Yushi
2014-01-01
Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, the results of various methods tend to be inconsistent and usually lead to many false positives. To address these issues, we developed an integrated model for identifying plant miRNA-target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed previously existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The proposed model, constructed on Arabidopsis thaliana, was run on Oryza sativa and Vitis vinifera to demonstrate that it is effective for other plant species. Conclusions The integrated model of online predictors and a local PCA-SVM classifier yielded credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
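The PCA feature extraction plus SVM classification stage can be sketched with scikit-learn. The feature vectors and labels below are synthetic stand-ins; the paper's real inputs come from the three online prediction tools and degradome-supported interactions, which are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for per-candidate miRNA-target feature vectors
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = (X.sum(axis=1) > 0).astype(int)   # 1 = "true interaction" (toy label)

# PCA-SVM classifier: standardize, project onto principal components,
# then separate true from false interactions with an RBF SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])

preds = clf.predict(X[150:])
print(preds.shape)
```

The self-training step described in the paper would iteratively add the classifier's most confident predictions on unlabeled candidates back into the training set.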
Momentum broadening of a fast parton in a perturbative quark-gluon plasma
Majumder, Abhijit; Mueller, Berndt; Mrowczynski, Stanislaw
2009-12-15
The average transverse momentum transfer per unit path length to a fast parton scattering elastically in a perturbative quark-gluon plasma is related to the radiative energy loss of the parton. We first calculate the momentum transfer coefficient q̂ in terms of a classical Langevin problem and then define it quantum mechanically through a scattering matrix element. After treating the well-known case of a quark-gluon plasma in equilibrium, we consider an off-equilibrium unstable plasma. As a specific example, we treat the two-stream plasma with unstable modes of the longitudinal chromoelectric field. In the presence of the instabilities, q̂ is shown to grow exponentially in time.
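In the equilibrium Langevin picture, the parton's transverse momentum performs a random walk, so the accumulated ⟨p_T²⟩ grows linearly with path length with slope q̂. A toy simulation of that picture (all numerical values illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
qhat = 1.0                         # GeV^2/fm, illustrative value
L, nsteps, nsamples = 5.0, 200, 5000
dl = L / nsteps

# independent 2D Gaussian kicks with <kx^2 + ky^2> = qhat * dl per step
kicks = rng.normal(0.0, np.sqrt(qhat * dl / 2.0), size=(nsamples, nsteps, 2))
pT = kicks.sum(axis=1)             # accumulated transverse momentum
pT2_mean = (pT**2).sum(axis=1).mean()

# random-walk prediction: <p_T^2> = qhat * L
print(pT2_mean)                    # close to qhat * L = 5.0
```

In the unstable two-stream plasma the kick strength itself would grow in time, which is what drives the exponential growth of q̂ discussed in the abstract.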
COLLINEAR SPLITTING, PARTON EVOLUTION AND THE STRANGE-QUARK ASYMMETRY OF THE NUCLEON IN NNLO QCD.
Rodrigo, G.; Catani, S.; de Florian, D.; Vogelsang, W.
2004-04-25
We consider the collinear limit of QCD amplitudes at one-loop order, and their factorization properties directly in color space. These results apply to the multiple collinear limit of an arbitrary number of QCD partons, and are a basic ingredient in many higher-order computations. In particular, we discuss the triple collinear limit and its relation to flavor asymmetries in the QCD evolution of parton densities at three loops. As a phenomenological consequence of this new effect, and of the fact that the nucleon has non-vanishing quark valence densities, we study the perturbative generation of a strange-antistrange asymmetry s(x)-{bar s}(x) in the nucleon's sea.
Guzey, Vadim; Goeke, Klaus; Siddikov, Marat
2009-01-01
We generalize the leading twist theory of nuclear shadowing and calculate quark and gluon generalized parton distributions (GPDs) of spinless nuclei. We predict very large nuclear shadowing for nuclear GPDs. In the limit of the purely transverse momentum transfer, our nuclear GPDs become impact parameter dependent nuclear parton distributions (PDFs). Nuclear shadowing induces non-trivial correlations between the impact parameter $b$ and the light-cone fraction $x$. We make predictions for the deeply virtual Compton scattering (DVCS) amplitude and the DVCS cross section on $^{208}$Pb at high energies. We calculate the cross section of the Bethe-Heitler (BH) process and address the issue of the extraction of the DVCS signal from the $e A \\to e \\gamma A$ cross section. We find that the $e A \\to e \\gamma A$ differential cross section is dominated by DVCS at the momentum transfer $t$ near the minima of the nuclear form factor. We also find that nuclear shadowing leads
Off-shell single-top production at NLO matched to parton showers
NASA Astrophysics Data System (ADS)
Frederix, R.; Frixione, S.; Papanastasiou, A. S.; Prestel, S.; Torrielli, P.
2016-06-01
We study the hadroproduction of a W b pair in association with a light jet, focusing on the dominant t-channel contribution and including exactly at the matrix-element level all non-resonant and off-shell effects induced by the finite top-quark width. Our simulations are accurate to the next-to-leading order in QCD, and are matched to the Herwig6 and Pythia8 parton showers through the MC@NLO method. We present phenomenological results relevant to the 8 TeV LHC, and carry out a thorough comparison to the case of on-shell t-channel single-top production. We formulate our approach so that it can be applied to the general case of matrix elements that feature coloured intermediate resonances and are matched to parton showers.
Giotopoulos, George; van der Weyden, Louise; Osaki, Hikari; Rust, Alistair G.; Gallipoli, Paolo; Meduri, Eshwar; Horton, Sarah J.; Chan, Wai-In; Foster, Donna; Prinjha, Rab K.; Pimanda, John E.; Tenen, Daniel G.; Vassiliou, George S.; Koschmieder, Steffen; Adams, David J.
2015-01-01
The introduction of highly selective ABL-tyrosine kinase inhibitors (TKIs) has revolutionized therapy for chronic myeloid leukemia (CML). However, TKIs are only efficacious in the chronic phase of the disease and effective therapies for TKI-refractory CML, or after progression to blast crisis (BC), are lacking. Whereas the chronic phase of CML is dependent on BCR-ABL, additional mutations are required for progression to BC. However, the identity of these mutations and the pathways they affect are poorly understood, hampering our ability to identify therapeutic targets and improve outcomes. Here, we describe a novel mouse model that allows identification of mechanisms of BC progression in an unbiased and tractable manner, using transposon-based insertional mutagenesis on the background of chronic phase CML. Our BC model is the first to faithfully recapitulate the phenotype, cellular and molecular biology of human CML progression. We report a heterogeneous and unique pattern of insertions identifying known and novel candidate genes and demonstrate that these pathways drive disease progression and provide potential targets for novel therapeutic strategies. Our model greatly informs the biology of CML progression and provides a potent resource for the development of candidate therapies to improve the dismal outcomes in this highly aggressive disease. PMID:26304963
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Haefner, David P.; Fanning, Jonathan D.
2012-06-01
Using post-processing filters to enhance image detail, a process commonly referred to as boost, can significantly affect the performance of an EO/IR system. The US Army's target acquisition models currently use the Targeting Task Performance (TTP) metric to quantify sensor performance. The TTP metric accounts for each element in the system, including blur and noise introduced by the imager, any additional post-processing steps, and the effects of the Human Visual System (HVS). The current implementation of the TTP metric assumes spatial separability, which can introduce significant errors when the TTP is applied to systems using non-separable filters. To accurately apply the TTP metric to systems incorporating boost, we have implemented a two-dimensional (2D) version of the TTP metric. The accuracy of the 2D TTP metric was verified through a series of perception experiments involving various levels of boost. The 2D TTP metric has been incorporated into the Night Vision Integrated Performance Model (NV-IPM), allowing accurate system modeling of non-separable image filters.
Hadley, Wendy; Sato, Amy; Kuhl, Elizabeth; Rancourt, Diana; Oster, Danielle; Lloyd-Richardson, Elizabeth
2015-01-01
Objective Adolescent weight control interventions demonstrate variable findings, with inconsistent data regarding the appropriate role for parents. The current study examined the efficacy of a standard adolescent behavioral weight control (BWC) intervention that also targeted parent–adolescent communication and parental modeling of healthy behaviors (Standard Behavioral Treatment + Enhanced Parenting; SBT + EP) compared with a standard BWC intervention (SBT). Methods 49 obese adolescents (M age = 15.10; SD = 1.33; 76% female; 67.3% non-Hispanic White) and a caregiver were randomly assigned to SBT or SBT + EP. Adolescent and caregiver weight and height, parental modeling, and weight-related communication were obtained at baseline and end of the 16-week intervention. Results Significant decreases in adolescent weight and increases in parental self-monitoring were observed across both conditions. Analyses of covariance revealed a trend for greater reduction in weight and negative maternal commentary among SBT condition participants. Conclusions Contrary to hypotheses, targeting parent–adolescent communication and parental modeling did not lead to better outcomes in adolescent weight control. PMID:25294840
A basin-specific aquatic food web biomagnification model for estimation of mercury target levels.
Hope, Bruce
2003-10-01
In the Willamette River Basin (WRB, Oregon, USA), health advisories currently limit consumption of fish that have accumulated methylmercury (MeHg) to levels posing a potential health risk for humans. Under the Clean Water Act, these advisories create the requirement for a total maximum daily load (TMDL) for mercury in the WRB. A TMDL is a calculation of the maximum amount of a pollutant that a body of water can receive and still meet water-quality standards. Because MeHg is known to biomagnify in aquatic food webs, a basin-specific biomagnification factor can be used, given a protective fish tissue criterion, to estimate the total mercury concentrations in surface waters required to lower the advisory-level mercury concentrations currently found in WRB fish. This paper presents an aquatic food web biomagnification model that simulates inorganic mercury (Hg(II)) and MeHg accumulation in fish tissue and estimates WRB-specific biomagnification factors for resident fish species of concern to stakeholders. Probabilistic (two-dimensional Monte Carlo) techniques propagate parameter variability and uncertainty throughout the model, providing decision makers with credible range information and increased flexibility in establishing a specific mercury target level. The model predicts the probability of tissue mercury concentrations in eight fish species within the range of concentrations measured in these species over 20 years of water-quality monitoring. Estimated mean biomagnification factor values range from 1.12 x 10^6 to 7.66 x 10^6 and are within the range of U.S. Environmental Protection Agency national values. Several WRB-specific mercury target levels are generated, which vary in their probability of affording human health protection relative to the federal MeHg tissue criterion of 0.30 mg/kg. Establishing a specific numeric target level is, however, a public policy decision, and one that will require further discussion among WRB stakeholders. PMID:14552019
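The back-calculation described above, dividing a protective fish-tissue criterion by sampled basin-specific biomagnification factors (BMFs) to obtain a water-column target, can be sketched with a simple Monte Carlo. The lognormal BMF spread is an illustrative assumption; the paper uses a full two-dimensional Monte Carlo that separates variability from uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)

tissue_criterion_mg_kg = 0.30        # federal MeHg fish-tissue criterion

# BMF samples (L/kg) roughly spanning the reported species means,
# 1.12e6 to 7.66e6; the lognormal shape is an assumption
bmf = rng.lognormal(mean=np.log(3.0e6), sigma=0.5, size=100_000)

# water target (mg/L) = criterion / BMF; lower percentiles of the
# target correspond to higher-percentile (more protective) BMFs
water_target = tissue_criterion_mg_kg / bmf
protective_5th = np.percentile(water_target, 5)

print(protective_5th < np.median(water_target))  # -> True
```

Reporting a percentile of the target distribution rather than a single point estimate is what gives decision makers the "probability of affording protection" framing used in the abstract.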
A folate receptor-targeting nanoparticle minimizes drug resistance in a human cancer model.
Wang, Xu; Li, Jun; Wang, Yuxiang; Koenig, Lydia; Gjyrezi, Ada; Giannakakou, Paraskevi; Shin, Edwin H; Tighiouart, Mourad; Chen, Zhuo Georgia; Nie, Shuming; Shin, Dong M
2011-08-23
Resistance to chemotherapy is a major obstacle in cancer therapy. The main purpose of this study is to evaluate the potential of a folate receptor-targeting nanoparticle to overcome/minimize drug resistance and to explore the underlying mechanisms. This is accomplished with enhanced cellular accumulation and retention of paclitaxel (one of the most effective anticancer drugs in use today and a well-known P-glycoprotein (P-gp) substrate) in a P-gp-overexpressing cancer model. The folate receptor-targeted nanoparticle, HFT-T, consists of a heparin-folate-paclitaxel (HFT) backbone with an additional paclitaxel (T) loaded in its hydrophobic core. In vitro analyses demonstrated that the HFT-T nanoparticle was superior to free paclitaxel or nontargeted nanoparticle (HT-T) in inhibiting proliferation of P-gp-overexpressing cancer cells (KB-8-5), partially due to its enhanced uptake and prolonged intracellular retention. In a subcutaneous KB-8-5 xenograft model, HFT-T administration enhanced the specific delivery of paclitaxel into tumor tissues and remarkably prolonged retention within tumor tissues. Importantly, HFT-T treatment markedly retarded tumor growth in a xenograft model of resistant human squamous cancer. Immunohistochemical analysis further indicated that increased in vivo efficacy of HFT-T nanoparticles was associated with a higher degree of microtubule stabilization, mitotic arrest, antiangiogenic activity, and inhibition of cell proliferation. These findings suggest that when the paclitaxel was delivered as an HFT-T nanoparticle, the drug is better retained within the P-gp-overexpressing cells than the free form of paclitaxel. These results indicated that the targeted HFT-T nanoparticle may be promising in minimizing P-gp related drug resistance and enhancing therapeutic efficacy compared with the free form of paclitaxel.
Pokreisz, Peter; Schnitzer, Jan E.
2013-01-01
We describe a novel model of myocardial infarction (MI) in rats induced by percutaneous transthoracic low-energy laser-targeted photodynamic irradiation. The procedure does not require thoracotomy and represents a minimally invasive alternative to existing surgical models. The target cardiac area to be photodynamically irradiated was triangulated from thoracic X-ray scans. The acute phase of MI was histopathologically characterized by the presence of extensive vascular occlusion, hemorrhage, loss of transversal striations, neutrophilic infiltration, and necrotic changes of cardiomyocytes. Subsequently, the damaged myocardium was replaced with fibrovascular and granulation tissue. The fibrotic scar in the infarcted area was detected by computed tomography imaging. Cardiac troponin I (cTnI), a specific marker of myocardial injury, was significantly elevated at 6 h (41 ± 6 ng/ml, n = 4, P < 0.05 vs. baseline) and returned to baseline after 72 h. Triphenyltetrazolium chloride staining revealed transmural anterolateral infarcts targeting 25 ± 3% of the left ventricle at day 1, with a decrease to 20 ± 3% at day 40 (n = 6 for each group, P < 0.01 vs. day 1). Electrocardiography (ECG) showed significant ST-segment elevation in the acute phase, with subsequent development of a pathological Q wave and premature ventricular contractions in the chronic phase of MI. Vectorcardiogram analysis of spatiotemporal electrical signal transduction revealed changes in inscription direction, QRS loop morphology, and redistribution in quadrant areas. The photodynamically induced MI in n = 51 rats was associated with 12% total mortality. Histological findings, ECG abnormalities, and elevated cTnI levels confirmed the photosensitizer-dependent induction of MI after laser irradiation. This novel rodent model of MI might provide a platform to evaluate new diagnostic or therapeutic interventions. PMID:24213611
Alwall, J.; Hoche, S.; Krauss, F.; Lavesson, N.; Lonnblad, L.; Maltoni, F.; Mangano, M.L.; Moretti, M.; Papadopoulos, C.G.; Piccinini, F.; Schumann, S.; Treccani, M.; Winter, J.; Worek, M.; /SLAC /Durham U., IPPP /Lund U. /Louvain U. /CERN /Ferrara U. /INFN, Ferrara /Athens U. /INFN, Pavia /Dresden, Tech. U. /Karlsruhe U., TP /Silesia U.
2007-06-27
We compare different procedures for combining fixed-order tree-level matrix-element generators with parton showers. We use the case of W-boson production at the Tevatron and the LHC to compare different implementations of the so-called CKKW and MLM schemes using different matrix-element generators and different parton cascades. We find that, although similar results are obtained in all cases, there are important differences.