Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
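To make the scaling discussion concrete, the following is a minimal sketch of the per-track bookkeeping that multi-Bernoulli-type filters operate on: an existence probability plus a Gaussian state per track. It is not the authors' labeled multi-Bernoulli filter, which additionally maintains track labels, a clutter model, and ranked-assignment data association; the motion model and all parameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative multi-Bernoulli bookkeeping: each track is a dict holding an
# existence probability r and a Gaussian state (x, P). Constant-velocity
# model in 2D; all parameters are placeholders, not the paper's values.
F = np.kron(np.array([[1.0, 1.0], [0.0, 1.0]]), np.eye(2))  # state transition
H = np.hstack([np.eye(2), np.zeros((2, 2))])                # observe position only
Q, R = 0.01 * np.eye(4), 0.1 * np.eye(2)                    # noise covariances
P_S, P_D = 0.99, 0.90                                       # survival, detection prob.

def predict(track):
    track["r"] *= P_S
    track["x"] = F @ track["x"]
    track["P"] = F @ track["P"] @ F.T + Q

def update_detected(track, z):
    # Standard Kalman update for a track associated with measurement z.
    S = H @ track["P"] @ H.T + R
    K = track["P"] @ H.T @ np.linalg.inv(S)
    track["x"] = track["x"] + K @ (z - H @ track["x"])
    track["P"] = (np.eye(4) - K @ H) @ track["P"]

def update_missed(track):
    # No detection this scan: the existence probability decays.
    track["r"] = track["r"] * (1 - P_D) / (1 - track["r"] * P_D)

def prune(tracks, r_min=0.01):
    # Dropping low-existence tracks is one reason the approach stays tractable.
    return [t for t in tracks if t["r"] > r_min]
```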
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large-scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for non-porous targets. Scaling estimates for large-scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence on impact velocity is recognized, even at constant impactor energy. Second, the present energy exponent for low-porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.
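For readers unfamiliar with the formalism, the sketch below illustrates gravity-regime point-source (pi-group) scaling of the kind the abstract alludes to, in which crater size depends on a coupling exponent mu rather than on impactor energy alone. The functional form is the standard pi-group relation; the constants K1 and mu are illustrative placeholders, not the paper's fitted values.

```python
def crater_radius(m_imp, a_imp, U, g, rho_t, K1=1.0, mu=0.55):
    """Gravity-regime point-source scaling sketch.

    pi2 = g * a_imp / U**2              (gravity-scaled size)
    piR = R * (rho_t / m_imp)**(1/3)    (scaled crater radius)
    piR = K1 * pi2**(-mu / (2 + mu))

    K1 and mu are placeholders; mu near 0.55 is typical of non-porous
    targets and lower values apply to porous ones.
    """
    pi2 = g * a_imp / U**2
    piR = K1 * pi2 ** (-mu / (2.0 + mu))
    return piR * (m_imp / rho_t) ** (1.0 / 3.0)

# Because the exponent involves mu, two impactors of equal kinetic energy but
# different velocities produce different crater radii; this is the distinct
# velocity dependence the centrifuge experiments resolved.
```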
TARGET Publication Guidelines | Office of Cancer Genomics
Like other NCI large-scale genomics initiatives, TARGET is a community resource project and data are made available rapidly after validation for use by other researchers. To act in accord with the Fort Lauderdale principles and support the continued prompt public release of large-scale genomic data prior to publication, researchers who plan to prepare manuscripts containing descriptions of TARGET pediatric cancer data that would be of comparable scope to an initial TARGET disease-specific comprehensive, global analysis publication, and journal editors who receive such manuscripts, are…
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.
Three-dimensional hydrodynamic simulations of OMEGA implosions
NASA Astrophysics Data System (ADS)
Igumenshchev, I. V.; Michel, D. T.; Shah, R. C.; Campbell, E. M.; Epstein, R.; Forrest, C. J.; Glebov, V. Yu.; Goncharov, V. N.; Knauer, J. P.; Marshall, F. J.; McCrory, R. L.; Regan, S. P.; Sangster, T. C.; Stoeckl, C.; Schmitt, A. J.; Obenschain, S.
2017-05-01
The effects of large-scale (with Legendre modes ≲ 10) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming), target offset, and variation in target-layer thickness were investigated using the low-noise, three-dimensional Eulerian hydrodynamic code ASTER. Simulations indicate that these asymmetries can significantly degrade the implosion performance. The most important sources of the asymmetries are the target offsets (~10 to 20 μm), beam-power imbalance (σrms ~ 10%), and variations (~5%) in target-layer thickness. Large-scale asymmetries distort implosion cores, resulting in a reduced hot-spot confinement and an increased residual kinetic energy of implosion targets. The ion temperature inferred from the width of simulated neutron spectra is influenced by bulk fuel motion in the distorted hot spot and can result in up to a ~1-keV increase in apparent temperature. Similar temperature variations along different lines of sight are observed. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires a reduction in large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs, which mitigate cross-beam energy transfer and suppress short-wavelength Rayleigh-Taylor growth.
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry important strategic information, especially for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting targets in a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates in tiny-object detection in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model utilizing salience detection and a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation, without region proposals, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, this approach processes the entire image (region) during training and at test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show the proposed method quickly identifies airplanes in large-scale images.
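The attention stage can be illustrated with the classic spectral-residual saliency algorithm (Hou & Zhang, 2007), shown here as a stand-in for the paper's improved visual attention model, which also incorporates a line segment detector (omitted). SciPy is assumed available; all smoothing parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for a 2D grayscale image."""
    f = np.fft.fft2(img.astype(float))
    log_amp, phase = np.log(np.abs(f) + 1e-8), np.angle(f)
    # The spectral residual is the log-amplitude minus its local average.
    residual = log_amp - convolve(log_amp, np.ones((3, 3)) / 9.0, mode="wrap")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Thresholding the map yields suspected regions; each region is then passed to
# a single-evaluation (YOLO-style) detector to confirm small airplane objects.
```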
Cumulative Damage in Strength-Dominated Collisions of Rocky Asteroids: Rubble Piles and Brick Piles
NASA Technical Reports Server (NTRS)
Housen, Kevin
2009-01-01
Laboratory impact experiments were performed to investigate the conditions that produce large-scale damage in rock targets. Aluminum cylinders (6.3 mm diameter) impacted basalt cylinders (69 mm diameter) at speeds ranging from 0.7 to 2.0 km/s. Diagnostics included measurements of the largest fragment mass, velocities of the largest remnant and large fragments ejected from the periphery of the target, and X-ray computed tomography imaging to inspect some of the impacted targets for internal damage. Significant damage to the target occurred when the kinetic energy per unit target mass exceeded roughly 1/4 of the energy required for catastrophic shattering (where the target is reduced to one-half its original mass). Scaling laws based on a rate-dependent strength were developed that provide a basis for extrapolating the results to larger strength-dominated collisions. The threshold specific energy for widespread damage was found to scale with event size in the same manner as that for catastrophic shattering. Therefore, the factor of four difference between the two thresholds observed in the lab also applies to larger collisions. The scaling laws showed that for a sequence of collisions that are similar in that they produce the same ratio of largest fragment mass to original target mass, the fragment velocities decrease with increasing event size. As a result, rocky asteroids a couple hundred meters in diameter should retain their large ejecta fragments in a jumbled rubble-pile state. For somewhat larger bodies, the ejection velocities are sufficiently low that large fragments are essentially retained in place, possibly forming ordered "brick-pile" structures.
Impact of US and Canadian precursor regulation on methamphetamine purity in the United States.
Cunningham, James K; Liu, Lon-Mu; Callaghan, Russell
2009-03-01
Aims: Reducing drug purity is a major, but largely unstudied, goal of drug suppression. This study examines whether US methamphetamine purity was impacted by the suppression policy of US and Canadian precursor chemical regulation. Design: Autoregressive integrated moving average (ARIMA)-intervention time-series analysis. Setting: Continental United States and Hawaii (1985-May 2005). Interventions: US federal regulations targeting the precursors ephedrine and pseudoephedrine, in forms used by large-scale producers, were implemented in November 1989, August 1995 and October 1997. US regulations targeting precursors in forms used by small-scale producers (e.g. over-the-counter medications) were implemented in October 1996 and October 2001. Canada implemented federal precursor regulations in January 2003 and July 2003 and an essential chemical (e.g. acetone) regulation in January 2004. Measurements: Monthly median methamphetamine purity series. Findings: US regulations targeting large-scale producers were associated with purity declines of 16-67 points; those targeting small-scale producers had little or no impact. Canada's precursor regulations were associated with purity increases of 13-15 points, while its essential chemical regulation was associated with a 13-point decrease. Hawaii's purity was consistently high, and appeared to vary little with the 1990s/2000s regulations. Conclusions: US precursor regulations targeting large-scale producers were associated with substantial decreases in continental US methamphetamine purity, while regulations targeting over-the-counter medications had little or no impact. Canada's essential chemical regulation was also associated with a decrease in continental US purity. However, Canada's precursor regulations were associated with purity increases: these regulations may have impacted primarily producers of lower-quality methamphetamine, leaving higher-purity methamphetamine on the market by default. Hawaii's well-known preference for 'ice' (high-purity methamphetamine) may have helped to constrain purity there to a high, attenuated range, possibly limiting its sensitivity to precursor regulation.
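As an illustration of the study design, here is a hedged sketch of an ARIMA-intervention analysis in Python: each regulation enters as a step regressor whose fitted coefficient estimates the associated level shift in purity. The series, the subset of dates shown, and the model order are placeholders, not the study's data or specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.period_range("1985-01", "2005-05", freq="M")
rng = np.random.default_rng(0)
purity = pd.Series(rng.normal(60.0, 5.0, len(idx)), index=idx)  # placeholder data

# Step regressors: 0 before each regulation, 1 from its start month onward
# (the study's remaining interventions would be added the same way).
steps = pd.DataFrame(index=idx)
for name, start in [("reg_1989_11", "1989-11"),
                    ("reg_1995_08", "1995-08"),
                    ("reg_1997_10", "1997-10")]:
    steps[name] = (idx >= pd.Period(start, freq="M")).astype(float)

fit = ARIMA(purity, exog=steps, order=(1, 0, 1)).fit()
print(fit.params[steps.columns])  # fitted level shift attributed to each regulation
```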
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag
2015-01-01
Microbial enzyme diversity is key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we applied a captured metagenomics technique for functional genes in soil microorganisms as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation: at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigation of central ecosystem processes by studying functional gene abundances. PMID:26490729
Chatterjee, Gourab; Singh, Prashant Kumar; Robinson, A P L; Blackman, D; Booth, N; Culfa, O; Dance, R J; Gizzi, L A; Gray, R J; Green, J S; Koester, P; Kumar, G Ravindra; Labate, L; Lad, Amit D; Lancaster, K L; Pasley, J; Woolsey, N C; Rajeev, P P
2017-08-21
The transport of hot, relativistic electrons produced by the interaction of an intense petawatt laser pulse with a solid has garnered interest due to its potential application in the development of innovative x-ray sources and ion-acceleration schemes. We report on spatially and temporally resolved measurements of megagauss magnetic fields at the rear of a 50-μm thick plastic target, irradiated by a multi-picosecond petawatt laser pulse at an incident intensity of ~10²⁰ W/cm². The pump-probe polarimetric measurements with micron-scale spatial resolution reveal the dynamics of the magnetic fields generated by the hot electron distribution at the target rear. An annular magnetic field profile was observed ~5 ps after the interaction, indicating a relatively smooth hot electron distribution at the rear-side of the plastic target. This is contrary to previous time-integrated measurements, which infer that such targets will produce highly structured hot electron transport. We measured large-scale filamentation of the hot electron distribution at the target rear only at later time-scales of ~10 ps, resulting in a commensurate large-scale filamentation of the magnetic field profile. Three-dimensional hybrid simulations corroborate our experimental observations and demonstrate a beam-like hot electron transport at initial time-scales that may be attributed to the local resistivity profile at the target rear.
Experimental Simulations of Large-Scale Collisions
NASA Technical Reports Server (NTRS)
Housen, Kevin R.
2002-01-01
This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.
Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.
2015-01-01
Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis and have limited capability for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomic, genetic, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
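A minimal sketch of the chemotype idea underlying CSNAP: compounds become nodes in a similarity network, edges connect pairs above a Tanimoto cutoff, and connected components approximate chemotypes whose member annotations can then be pooled for target prediction. The fingerprint choice, the cutoff, and the use of connected components are illustrative assumptions; RDKit and networkx are assumed available.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
import networkx as nx

def chemotype_network(smiles, threshold=0.7):
    """Group compounds into chemotype-like clusters via a similarity network."""
    mols = [Chem.MolFromSmiles(s) for s in smiles]
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]
    g = nx.Graph()
    g.add_nodes_from(range(len(mols)))
    for i in range(len(fps)):
        for j in range(i + 1, len(fps)):
            if DataStructs.TanimotoSimilarity(fps[i], fps[j]) >= threshold:
                g.add_edge(i, j)
    # Each connected component approximates one chemotype; annotating it with
    # the known targets of its members (e.g., majority vote) yields predictions
    # for the unannotated hits in the same component.
    return list(nx.connected_components(g))
```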
Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei
2016-10-01
Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and side effect prediction. The software and benchmark are available at https://github.com/hansaimlim/REMAP.
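To make the algorithmic idea concrete, here is a hedged sketch of weighted one-class collaborative filtering by alternating least squares, in the spirit of REMAP's formulation but without its dual regularization by chemical and protein similarity networks. All hyperparameters are illustrative.

```python
import numpy as np

def weighted_als(R, rank=50, w_pos=1.0, w_neg=0.1, lam=0.5, iters=10):
    """One-class matrix factorization sketch for a chemical-protein matrix R.

    Observed interactions (R > 0) get weight w_pos; unobserved pairs get the
    small weight w_neg, reflecting that absence of evidence is weak evidence.
    """
    n, m = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(n, rank)) * 0.01   # chemical factors
    V = rng.normal(size=(m, rank)) * 0.01   # protein factors
    W = np.where(R > 0, w_pos, w_neg)
    I = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(n):  # each row is a small weighted ridge problem
            A = V.T @ (W[i][:, None] * V) + I
            U[i] = np.linalg.solve(A, V.T @ (W[i] * R[i]))
        for j in range(m):
            A = U.T @ (W[:, j][:, None] * U) + I
            V[j] = np.linalg.solve(A, U.T @ (W[:, j] * R[:, j]))
    # High scores on unobserved pairs are candidate off-target interactions.
    return U @ V.T
```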
Telecommunications technology and rural education in the United States
NASA Technical Reports Server (NTRS)
Perrine, J. R.
1975-01-01
The rural sector of the US is examined from the point of view of whether telecommunications technology can augment the development of rural education. Migratory farm workers and American Indians were the target groups which were examined as examples of groups with special needs in rural areas. The general rural population and the target groups were examined to identify problems and to ascertain specific educational needs. Educational projects utilizing telecommunications technology in target group settings were discussed. Large scale regional ATS-6 satellite-based experimental educational telecommunications projects were described. Costs and organizational factors were also examined for large scale rural telecommunications projects.
Value-focused framework for defining landscape-scale conservation targets
Romañach, Stephanie; Benscoter, Allison M.; Brandt, Laura A.
2016-01-01
Conservation of natural resources can be challenging in a rapidly changing world and requires collaborative efforts for success. Conservation planning is the process of deciding how to protect, conserve, and enhance or minimize loss of natural and cultural resources. Establishing conservation targets (also called indicators or endpoints), the measurable expressions of desired resource conditions, can help with conservation planning from the site-specific up to the landscape scale. Using conservation targets and tracking them through time can deliver benefits such as insight into ecosystem health and early warnings about undesirable trends. We describe an approach using value-focused thinking to develop statewide conservation targets for Florida. This approach allowed us to first identify stakeholder objectives and then define conservation targets to meet those objectives. Stakeholders were able to see how their shared efforts fit into the broader conservation context and also anticipate the benefits of multi-agency and -organization collaboration. We developed an iterative process for large-scale conservation planning that included defining a shared framework for the process and the conservation targets themselves, as well as developing management and monitoring strategies to evaluate their effectiveness. The process we describe is applicable to other geographies where multiple parties are seeking to implement collaborative, large-scale biological planning.
The influence of cognitive load on spatial search performance.
Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D
2014-01-01
During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei
2017-10-01
In high-precision and large-scale coordinate measurement, one commonly used approach to determine the coordinates of a target point is to utilize the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light-receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed with a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.
Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.
Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M
2017-02-02
Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
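A simplified stand-in for the proposed idea, shown for orientation only: estimate per-metabolite batch effects from the quality control (QC) samples and remove them. The published mixnorm method fits a mixture model that explicitly handles batch-specific truncation; the sketch below merely median-centers each batch on its QC samples and ignores missing (NaN) values, and it assumes QC samples are present in every batch.

```python
import numpy as np

def qc_batch_normalize(X, batch, is_qc):
    """Remove per-metabolite batch effects estimated from QC samples.

    X      : (samples x metabolites) array, NaN where below detectability
    batch  : (samples,) batch labels
    is_qc  : (samples,) boolean mask of quality control samples
    """
    X = X.copy()
    grand = np.nanmedian(X[is_qc], axis=0)           # QC median over all batches
    for b in np.unique(batch):
        rows = batch == b
        qc_med = np.nanmedian(X[rows & is_qc], axis=0)  # per-batch QC median
        X[rows] += grand - qc_med                    # shift batch onto grand median
    return X
```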
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. Measurement of geometric parameters such as size, attitude, and position therefore requires a measurement system with high precision, multiple functions, portability, and other characteristics. However, existing measuring instruments, such as the laser tracker, total station, and photogrammetry system, mostly suffer from single functionality, the need to move stations, and other shortcomings. A laser tracker must work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. A total station is mainly used for outdoor surveying and mapping and hardly achieves the accuracy demanded in industrial measurement. A photogrammetry system can achieve wide-range multi-point measurement, but its measuring range is limited and stations must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work not only by scanning the measurement path but also by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures high-accuracy measurement, and the two-dimensional angle measurement module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure quality and performance throughout the manufacturing process and improve the manufacturing capability of large-scale, high-end equipment.
Sugahara, Daisuke; Kaji, Hiroyuki; Sugihara, Kazushi; Asano, Masahide; Narimatsu, Hisashi
2012-01-01
Model organisms containing a deletion or mutation in a glycosyltransferase gene exhibit various physiological abnormalities, suggesting that specific glycan motifs on certain proteins play important roles in vivo. Identification of the target proteins of glycosyltransferase isozymes is the key to understanding the roles of glycans. Here, we demonstrate the proteome-scale identification of the target proteins specific to a glycosyltransferase isozyme, β1,4-galactosyltransferase-I (β4GalT-I). Although β4GalT-I is the most characterized glycosyltransferase, its distinctive contribution to β1,4-galactosylation has hardly been described so far. We identified a large number of candidate target proteins specific to β4GalT-I by comparative analysis of β4GalT-I-deleted and wild-type mice using an LC/MS-based technique with isotope-coded glycosylation site-specific tagging (IGOT) of lectin-captured N-glycopeptides. Our proteome-scale approach to identifying target proteins reveals common features and trends in the target proteins, which facilitates understanding of the mechanism that controls assembly of a particular glycan motif on specific proteins. PMID:23002422
Highly multiplexed targeted proteomics using precise control of peptide retention time.
Gallien, Sebastien; Peterman, Scott; Kiyonami, Reiko; Souady, Jamal; Duriez, Elodie; Schoen, Alan; Domon, Bruno
2012-04-01
Large-scale proteomics applications using SRM analysis on triple quadrupole mass spectrometers present new challenges to LC-MS/MS experimental design. Despite the automation of building large-scale LC-SRM methods, the increased number of targeted peptides can compromise the balance between sensitivity and selectivity. To accommodate large target numbers, time-scheduled SRM transition acquisition is performed. Previously published results demonstrated that incorporation of a well-characterized set of synthetic peptides enables chromatographic characterization of the elution profile for most endogenous peptides. We have extended this application of peptide trainer kits not only to build SRM methods but to facilitate real-time elution profile characterization that enables automated adjustment of the scheduled detection windows. Incorporation of dynamic retention time adjustment better facilitates targeted assays lasting several days without the need for constant supervision. This paper provides an overview of how the dynamic retention correction approach identifies and corrects for commonly observed LC variations. This adjustment dramatically improves robustness in targeted discovery experiments as well as routine quantification experiments.
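The retention-time correction concept can be sketched in a few lines: fit a mapping from the library retention times of the reference ("trainer") peptides to their observed times, then recenter every scheduled window. A real implementation refits continuously during the run; the linear model below is an illustrative assumption, as are the variable names.

```python
import numpy as np

def rt_correction(library_rt, observed_rt):
    """Fit a linear map from library to observed retention times (minutes)
    using the spiked-in reference peptides, and return a correction function."""
    slope, intercept = np.polyfit(library_rt, observed_rt, deg=1)
    return lambda rt: slope * rt + intercept

# Usage sketch (all arrays are placeholders):
# adjust = rt_correction(trainer_library_rt, trainer_measured_rt)
# new_window_center = adjust(target_peptide_library_rt)
```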
Causal Inferences with Large Scale Assessment Data: Using a Validity Framework
ERIC Educational Resources Information Center
Rutkowski, David; Delandshere, Ginette
2016-01-01
To answer the calls for stronger evidence by the policy community, educational researchers and their associated organizations increasingly demand more studies that can yield causal inferences. International large-scale assessments (ILSAs) have been targeted as rich data sources for causal research. It is in this context that we take up a…
Reflections on the Increasing Relevance of Large-Scale Professional Development
ERIC Educational Resources Information Center
Krainer, Konrad
2015-01-01
This paper focuses on commonalities and differences of three approaches to large-scale professional development (PD) in mathematics education, based on two studies from Germany and one from the United States of America. All three initiatives break new ground in improving PD targeted at educating "multipliers", and in all three cases…
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. This article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extent areas, we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieving large-scale domain adaptation with binary classifiers based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
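As one example of the submodules described above, here is a hedged sketch of a hashing-style redundancy filter: sign random projections of CNN feature vectors give each sample a short binary code, and only one sample per code is kept. The paper's actual hashing module is not specified here; this is an illustrative stand-in, and n_bits is a placeholder.

```python
import numpy as np

def dedup_by_hash(features, n_bits=64, seed=0):
    """Keep one sample per locality-sensitive hash code.

    features : (N, D) array of CNN feature vectors (assumed given)
    Returns indices of retained, non-redundant samples.
    """
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(features.shape[1], n_bits))
    codes = features @ planes > 0          # (N, n_bits) binary codes
    _, keep = np.unique(codes, axis=0, return_index=True)
    return np.sort(keep)
```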
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
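A stripped-down sketch of two-level adjoint checkpointing, for orientation: disk checkpoints partition the time horizon into segments, and the reverse sweep recomputes one segment at a time into memory. The paper's scheme additionally uses binomial (revolve-style) checkpointing within segments and asynchronous I/O, both omitted here; `step` and `adj_step` are user-supplied placeholders.

```python
import os
import tempfile
import numpy as np

def adjoint_two_level(step, adj_step, x0, n_steps, disk_every=100, snap_dir=None):
    """Two-level checkpointed adjoint sweep (simplified sketch).

    step(x)          : advances the state one time step
    adj_step(x, xb)  : applies the transposed Jacobian of step at state x to xb
    """
    snap_dir = snap_dir or tempfile.mkdtemp()
    x = x0.copy()
    for i in range(n_steps):                       # forward sweep
        if i % disk_every == 0:                    # level 1: disk checkpoint
            np.save(os.path.join(snap_dir, f"chk{i}.npy"), x)
        x = step(x)
    xbar = np.ones_like(x)                         # adjoint seed
    last = ((n_steps - 1) // disk_every) * disk_every
    for base in range(last, -1, -disk_every):      # reverse sweep, per segment
        hi = min(base + disk_every, n_steps)
        seg = [np.load(os.path.join(snap_dir, f"chk{base}.npy"))]
        for _ in range(base, hi - 1):              # level 2: recompute into memory
            seg.append(step(seg[-1]))
        for i in range(hi - 1, base - 1, -1):      # walk the segment backwards
            xbar = adj_step(seg[i - base], xbar)
    return xbar
```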
Automatic three-dimensional measurement of large-scale structure based on vision metrology.
Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng
2014-01-01
All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path among all possible ones, based on a novel voting strategy. Experiments on a fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
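The detection stage can be sketched with OpenCV's stock blob detector, since retroreflective discs image as bright, near-circular blobs. This is an illustrative stand-in for the paper's own detection and recognition algorithms; decoding of the code pattern around each disc is omitted, and the thresholds and file path are placeholders.

```python
import cv2
import numpy as np

# Configure the detector for bright, circular, reasonably sized blobs.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor, params.blobColor = True, 255        # bright blobs only
params.filterByCircularity, params.minCircularity = True, 0.8
params.filterByArea, params.minArea = True, 20.0
detector = cv2.SimpleBlobDetector_create(params)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # placeholder path
keypoints = detector.detect(img)
centers = np.array([k.pt for k in keypoints])             # candidate target centers
# Sub-pixel center refinement and code-ring decoding would follow here.
```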
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980 and Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws which were among the first to include the gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to provide bounds on crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recent work by Holsapple and Schmidt (1987) includes results for non-porous materials and provides a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented. These include results for various target media and include the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
ERIC Educational Resources Information Center
Kieffer, Michael J.; Lesaux, Nonie K.; Rivera, Mabel; Francis, David J.
2009-01-01
Including English language learners (ELLs) in large-scale assessments raises questions about the validity of inferences based on their scores. Test accommodations for ELLs are intended to reduce the impact of limited English proficiency on the assessment of the target construct, most often mathematics or science proficiency. This meta-analysis…
The role of large—scale BECCS in the pursuit of the 1.5°C target: an Earth system model perspective
NASA Astrophysics Data System (ADS)
Muri, Helene
2018-04-01
The increasing awareness of the many damaging aspects of climate change has prompted research into ways of reducing and reversing the anthropogenic increase in carbon concentrations in the atmosphere. Most emission scenarios stabilizing climate at low levels, such as the 1.5 °C target outlined by the Paris Agreement, require large-scale deployment of Bio-Energy with Carbon Capture and Storage (BECCS). Here, the potential of large-scale BECCS deployment to contribute towards the 1.5 °C global warming target is evaluated using an Earth system model, along with the associated climate responses and carbon cycle feedbacks. The geographical location of the bioenergy feedstock is shown to be key to the success of such measures in the context of temperature targets. Although net negative emissions were reached sooner, by ∼6 years, and scaled up, land use change emissions and reductions in forest carbon sinks outweigh these effects in one scenario. Re-cultivating mid-latitudes, on the other hand, was found to be beneficial, contributing in the right direction towards the 1.5 °C target, though only by −0.1 °C and −54 Gt C in avoided emissions. Obstacles remain related to competition for land from nature preservation and food security, as well as the technological availability of CCS.
Skin Friction Reduction Through Large-Scale Forcing
NASA Astrophysics Data System (ADS)
Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer
2017-11-01
Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independently of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling to the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near-wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.
Large Scale Spectral Line Mapping of Galactic Regions with CCAT-Prime
NASA Astrophysics Data System (ADS)
Simon, Robert
2018-01-01
CCAT-prime is a 6-m submillimeter telescope that is being built on the top of Cerro Chajnantor (5600 m altitude) overlooking the ALMA plateau in the Atacama Desert. Its novel Crossed-Dragone design enables a large field of view without blockage and is thus particularly well suited for large scale surveys in the continuum and spectral lines targeting important questions ranging from star formation in the Milky Way to cosmology. On this poster, we focus on the large scale mapping opportunities in important spectral cooling lines of the interstellar medium opened up by CCAT-prime and the Cologne heterodyne instrument CHAI.
Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.
Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin
2018-05-23
Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated through violating the octet rule of chemistry by adding small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of FDR calculation was examined by false datasets, which were simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
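The core FDR computation in a target-decoy scheme is compact enough to sketch directly: sort assignments by score and, at each threshold, estimate the FDR as the number of accepted decoys over the number of accepted targets. The decoy-generation step itself (adding small odd numbers of hydrogen atoms to violate the octet rule) is the paper's contribution and is not reproduced here.

```python
def target_decoy_fdr(scores):
    """Estimate FDR along a ranked list of metabolite assignments.

    scores : iterable of (score, is_decoy) pairs, higher score = better match
    Returns a list of (score_threshold, estimated_fdr) pairs.
    """
    fdrs, n_targets, n_decoys = [], 0, 0
    for score, is_decoy in sorted(scores, key=lambda p: -p[0]):
        n_decoys += bool(is_decoy)
        n_targets += not is_decoy
        # FDR estimate at this threshold: accepted decoys / accepted targets.
        fdrs.append((score, n_decoys / max(n_targets, 1)))
    return fdrs

# In practice one accepts all assignments above the lowest score whose
# estimated FDR stays below a chosen level, e.g. 0.01.
```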
News Release: May 25, 2016 — Building on data from The Cancer Genome Atlas (TCGA) project, a multi-institutional team of scientists has completed the first large-scale “proteogenomic” study of breast cancer, linking DNA mutations to protein signaling and helping pinpoint the genes that drive cancer.
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
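A hedged sketch of the constraint checks such a framework must apply, paired with a deliberately naïve greedy builder for contrast: the brute-force enumeration below is only practical for short barcodes, which is precisely the computational cost the reported framework is designed to avoid at the million-barcode scale. The thresholds and the blacklisted subsequence are illustrative.

```python
import itertools
import re

def valid_barcode(seq, gc_range=(0.4, 0.6), max_homopolymer=2,
                  blacklist=("GAATTC",)):
    """Check GC content, homopolymer length, and blacklisted subsequences."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not gc_range[0] <= gc <= gc_range[1]:
        return False
    # Reject any run of one base longer than max_homopolymer.
    if re.search(r"(.)\1{%d}" % max_homopolymer, seq):
        return False
    return not any(b in seq for b in blacklist)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_library(length=8, min_dist=3):
    """Naive greedy construction: keep each candidate that passes the
    constraint checks and sits at least min_dist from everything kept so far."""
    lib = []
    for cand in ("".join(p) for p in itertools.product("ACGT", repeat=length)):
        if valid_barcode(cand) and all(hamming(cand, b) >= min_dist for b in lib):
            lib.append(cand)
    return lib
```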
Validation of the RAGE Hydrocode for Impacts into Volatile-Rich Targets
NASA Astrophysics Data System (ADS)
Plesko, C. S.; Asphaug, E.; Coker, R. F.; Wohletz, K. H.; Korycansky, D. G.; Gisler, G. R.
2007-12-01
In preparation for a detailed study of large-scale impacts into the Martian surface, we have validated the RAGE hydrocode (Gittings et al., in press, CSD) against a suite of experiments and statistical models. We present comparisons of hydrocode models to centimeter-scale gas gun impacts (Nakazawa et al. 2002), an underground nuclear test (Perret, 1971), and crater scaling laws (Holsapple 1993, O'Keefe and Ahrens 1993). We have also conducted model convergence and uncertainty analyses which will be presented. Results to date are encouraging for our current model goals, and indicate areas where the hydrocode may be extended in the future. This validation work is focused on questions related to the specific problem of large impacts into volatile-rich targets. The overall goal of this effort is to be able to realistically model large-scale Noachian, and possibly post- Noachian, impacts on Mars not so much to model the crater morphology as to understand the evolution of target volatiles in the post-impact regime, to explore how large craters might set the stage for post-impact hydro- geologic evolution both locally (in the crater subsurface) and globally, due to the redistribution of volatiles from the surface and subsurface into the atmosphere. This work is performed under the auspices of IGPP and the DOE at LANL under contracts W-7405-ENG-36 and DE-AC52-06NA25396. Effort by DK and EA is sponsored by NASA's Mars Fundamental Research Program.
NASA Astrophysics Data System (ADS)
Torrisi, L.
2018-02-01
A large-scale study of ion acceleration in laser-generated plasma, extended to intensities from 10¹⁰ W/cm² up to 10¹⁹ W/cm², is presented. Aluminium thick and thin foils were irradiated in high vacuum using different infrared lasers and pulse durations from the ns down to the fs scale. The plasma was monitored mainly using SiC detectors employed in time-of-flight configuration. Protons and aluminium ions, at different energies and yields, were measured as a function of the laser intensity. The discontinuity region between particle acceleration in the backward plasma acceleration (BPA) regime from thick targets and the forward target normal sheath acceleration (TNSA) regime in thin foils was investigated.
Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin
2016-04-19
Recent advances in mass spectrometers, which have yielded higher resolution and faster scanning speeds, have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performance of the PRM assay on the Q-Exactive was compared with that of MS1-based quantification on the same instrument and of an MRM assay on a QTRAP 6500. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification, and greater flexibility in post-acquisition assay refinement than the MRM assay on the QTRAP 6500. We present a workflow for convenient PRM data processing using the free Skyline software. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics studies.
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics using large-scale multiple reaction monitoring (MRM). This strategy has become increasingly popular for the simultaneous analysis of up to several hundred metabolites with high sensitivity, selectivity, and quantitative capability. However, the traditional practice of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but often subjective and ad hoc. Our program overcomes these problems by detecting and identifying metabolites automatically, separating isomeric metabolites, and removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. The program also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. As a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data from any instrument or experimental condition.
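As a rough illustration of the scoring idea (not the MRMPROBS implementation), the odds score of a candidate peak can be computed from a fitted logistic regression as p/(1-p). The features and training data below are invented placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are candidate MRM peaks, columns are
# features such as retention-time error, peak-shape similarity, and a
# co-elution score; y marks manually verified true metabolite peaks.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train.sum(axis=1) + rng.normal(size=200)) > 0

model = LogisticRegression().fit(X_train, y_train)

def odds_score(features):
    """Posterior odds p/(1-p) that a candidate peak is a true target."""
    p = model.predict_proba(np.atleast_2d(features))[0, 1]
    return p / (1.0 - p)

print(odds_score([0.5, 0.2, 1.0]))
```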
Cui, Chenchen; Song, Yujie; Liu, Jun; Ge, Hengtao; Li, Qian; Huang, Hui; Hu, Linyong; Zhu, Hongmei; Jin, Yaping; Zhang, Yong
2015-01-01
β-Lactoglobulin (BLG) is a major goat's milk allergen that is absent in human milk. Engineered endonucleases, including transcription activator-like effector nucleases (TALENs) and zinc-finger nucleases, enable targeted genetic modification in livestock. In this study, TALEN-mediated gene knockout followed by gene knock-in was used to generate BLG knockout goats as mammary gland bioreactors for large-scale production of human lactoferrin (hLF). We introduced precise genetic modifications into the goat genome at frequencies of approximately 13.6% and 6.09% for the first and second sequential targeting, respectively, using targeting vectors that underwent TALEN-induced homologous recombination (HR). Analysis of milk from the cloned goats revealed large-scale hLF expression and/or decreased BLG levels in milk from heterozygous goats, as well as the absence of BLG in milk from homozygous goats. Furthermore, TALEN-mediated targeting events in somatic cells can be transmitted through the germline after SCNT. Our results suggest that gene targeting via TALEN-induced HR may expedite the production of genetically engineered livestock for agriculture and biomedicine. PMID:25994151
Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent
2009-05-01
Despite continuous efforts by the international community to reduce the impact of malaria on developing countries, little progress has been made in recent years, and the discovery of new drugs is needed more than ever. Of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations such as docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially virtual high-throughput screening (vHTS), is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of results in MySQL databases, and molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro assays are underway for all targets against which screening was performed. The current paper describes large-scale rational drug discovery, specifically molecular docking with the FlexX software on computational grids, to find hits against three targets implicated in malaria: PfGST, PfDHFR, and PvDHFR (wild-type and mutant forms). The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.
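The docking engine and grid middleware are external to this sketch, but the embarrassingly parallel dispatch pattern the abstract relies on can be outlined as follows. Here dock_compound is a hypothetical stand-in for one FlexX run, the compound IDs are invented, and the placeholder score is deterministic only for illustration; in the grid deployment each task would be a separate grid job rather than a local process.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def dock_compound(args):
    """Hypothetical stand-in for a single docking run (one grid job)."""
    target, compound = args
    # Deterministic placeholder score; a real run would invoke the
    # docking engine and parse its best pose energy.
    score = -(zlib.crc32(f"{target}:{compound}".encode()) % 1000) / 100.0
    return target, compound, score

if __name__ == "__main__":
    targets = ["PfGST", "PfDHFR", "PvDHFR"]
    compounds = [f"CMPD{i:08d}" for i in range(1000)]  # invented IDs
    tasks = [(t, c) for t in targets for c in compounds]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(dock_compound, tasks, chunksize=100))
    # Keep the best-scoring hits per run for MM-PBSA/MM-GBSA rescoring.
    results.sort(key=lambda r: r[2])
    print(results[:5])
```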
Large-scale protein/antibody patterning with limiting unspecific adsorption
NASA Astrophysics Data System (ADS)
Fedorenko, Viktoriia; Bechelany, Mikhael; Janot, Jean-Marc; Smyntyna, Valentyn; Balme, Sebastien
2017-10-01
A simple synthetic route based on nanosphere lithography has been developed to design a large-scale nanoarray for specific control of protein anchoring. The technique, based on two-dimensional (2D) colloidal crystals composed of polystyrene (PS) spheres, allows the easy and inexpensive fabrication of large arrays (up to several centimeters). A silicon wafer coated with a thin adhesion layer of chromium (15 nm) and a layer of gold (50 nm) was used as a substrate. PS spheres were deposited on the gold surface using the floating-transfer technique and then functionalized with PEG-biotin, while the defects were passivated with a self-assembled monolayer (SAM) of PEG to prevent nonspecific adsorption. Using epifluorescence microscopy, we show that after immersion of the sample in a solution of target protein (avidin or anti-avidin), the protein localizes specifically on the polystyrene spheres. These results are thus meaningful for the exploration of devices based on large-scale nanoarrays of PS spheres and can be used for the detection of target proteins or simply to pattern a surface with specific proteins.
Effects of Pre-Existing Target Structure on the Formation of Large Craters
NASA Technical Reports Server (NTRS)
Barnouin-Jha, O. S.; Cintala, M. J.; Crawford, D. A.
2003-01-01
The shapes of large-scale craters and the mechanics responsible for melt generation are influenced by broad and small-scale structures present in a target prior to impact. For example, well-developed systems of fractures often create craters that appear square in outline, good examples being Meteor Crater, AZ, and the square craters of 433 Eros. Pre-broken target material also affects melt generation. Kieffer has shown how the shock wave generated in the Coconino sandstone at Meteor Crater created reverberations that, in combination with the natural target heterogeneity present, created peaks and troughs in pressure and compressed density as individual grains collided, producing a range of shock mineralogies and melts within neighboring samples. In this study, we further explore how pre-existing target structure influences various aspects of the cratering process. We combine experimental and numerical techniques to explore the connection between the scales of the impact-generated shock wave and the pre-existing target structure. We focus on the propagation of shock waves in coarse, granular media, emphasizing the consequences for excavation, crater growth, ejecta production, cratering efficiency, melt generation, and crater shape. As a baseline, we present a first series of results for idealized targets in which the particles are all identical in size and possess the same shock impedance. We also present a few results in which we increase the complexity of the target properties by varying the grain size, strength, impedance, and frictional properties. In addition, we investigate the origin and implications of reverberations created by the presence of physical and chemical heterogeneity in a target.
The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey
NASA Astrophysics Data System (ADS)
Squires, Gordon K.; Lubin, L. M.; Gal, R. R.
2007-05-01
We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and Suprime-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies, which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to successfully identify the Cl 1604 supercluster at z = 0.9, a large scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.
Fitting a Point Cloud to a 3d Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem usually involves the ambiguous task of comparing two data sets specified in two different coordinate systems. This paper studies the fitting of a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem solution with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of the point set to the target surface. The numerical experiment described concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting of a point cloud to a target surface converges in several linear steps. The method is applicable to the contactless remote measurement of the geometry of large-scale objects.
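A minimal sketch of two of the ingredients named above, using simple NumPy stand-ins for the real measurement data: PCA supplies a coarse, linear alignment of each data set to its principal axes, and the squared distance to the nearest surface sample serves as the convergence criterion. The SGM refinement step is not reproduced here.

```python
import numpy as np

def pca_align(points):
    """Center a point cloud and rotate it onto its principal axes."""
    centered = points - points.mean(axis=0)
    # Columns of vecs are eigenvectors of the 3x3 covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return centered @ vecs

def squared_distance(cloud, surface_pts):
    """Sum of squared distances from each point to its nearest surface
    sample; this is the SD convergence criterion."""
    d2 = ((cloud[:, None, :] - surface_pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

# Hypothetical data: a measured cloud vs. a sampled target surface.
rng = np.random.default_rng(1)
cloud = pca_align(rng.normal(size=(100, 3)))
surface = pca_align(rng.normal(size=(200, 3)))
print(squared_distance(cloud, surface))
```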
Allometry indicates giant eyes of giant squid are not exceptional.
Schmitz, Lars; Motani, Ryosuke; Oufiero, Christopher E; Martin, Christopher H; McGee, Matthew D; Gamarra, Ashlee R; Lee, Johanna J; Wainwright, Peter C
2013-02-18
The eyes of giant and colossal squid are among the largest eyes in the history of life. It was recently proposed that sperm whale predation is the main driver of eye size evolution in giant squid, on the basis of an optical model that suggested optimal performance in detecting large luminous visual targets such as whales in the deep sea. However, it is poorly understood how the eye size of giant and colossal squid compares to that of other aquatic organisms when scaling effects are considered. We performed a large-scale comparative study that included 87 squid species and 237 species of acanthomorph fish. While squid have larger eyes than most acanthomorphs, a comparison of relative eye size among squid suggests that giant and colossal squid do not have unusually large eyes. After revising constants used in a previous model we found that large eyes perform equally well in detecting point targets and large luminous targets in the deep sea. The eyes of giant and colossal squid do not appear exceptionally large when allometric effects are considered. It is probable that the giant eyes of giant squid result from a phylogenetically conserved developmental pattern manifested in very large animals. Whatever the cause of large eyes, they appear to have several advantages for vision in the reduced light of the deep mesopelagic zone.
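The allometric argument reduces to a log-log regression: if eye size scales as a power law of body size, it is linear on log-log axes, and a species is exceptional only if it sits far off the fitted line, not merely if its eye is large in absolute terms. A toy sketch with invented numbers:

```python
import numpy as np

# Hypothetical data: allometry eye = a * body^b is linear in log space.
rng = np.random.default_rng(2)
log_body = rng.uniform(0, 6, size=50)                # log10 body mass
log_eye = 0.33 * log_body + 0.5 + rng.normal(0, 0.1, 50)

b, a = np.polyfit(log_body, log_eye, 1)              # slope, intercept
giant_squid = (5.5, 0.33 * 5.5 + 0.55)               # invented data point
residual = giant_squid[1] - (b * giant_squid[0] + a)
print(f"slope={b:.2f}, residual={residual:.2f} (small => not exceptional)")
```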
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage and benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm exploits the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
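The efficiency claim rests on Hamming distance over packed binary codes, which is cheap to compute with a bitwise XOR followed by a bit count. A small sketch (random stand-in codes, not FSB output):

```python
import numpy as np

def hamming_search(query, database, k=5):
    """Return indices and distances of the k database codes closest to
    the query in Hamming distance; codes are packed uint8 arrays."""
    # XOR exposes differing bits; unpackbits counts them per code.
    diff = np.bitwise_xor(database, query)
    dists = np.unpackbits(diff, axis=1).sum(axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

rng = np.random.default_rng(3)
db = rng.integers(0, 256, size=(100000, 32), dtype=np.uint8)  # 256-bit codes
q = db[42]  # query with a known exact match in the database
idx, d = hamming_search(q, db)
print(idx, d)  # index 42 comes back with distance 0
```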
diCenzo, George C; Finan, Turlough M
2018-01-01
The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.
Designing large-scale conservation corridors for pattern and process.
Rouget, Mathieu; Cowling, Richard M; Lombard, Amanda T; Knight, Andrew T; Kerley, Graham I H
2006-04-01
A major challenge for conservation assessments is to identify priority areas that incorporate biological patterns and processes. Because large-scale processes are mostly oriented along environmental gradients, we propose to accommodate them by designing regional-scale corridors to capture these gradients. Based on systematic conservation planning principles such as representation and persistence, we identified large tracts of untransformed land (i.e., conservation corridors) for conservation that would achieve biodiversity targets for pattern and process in the Subtropical Thicket Biome of South Africa. We combined least-cost path analysis with a target-driven algorithm to identify the best option for capturing key environmental gradients while considering biodiversity targets and conservation opportunities and constraints. We identified seven conservation corridors on the basis of subtropical thicket representation, habitat transformation and degradation, wildlife suitability, irreplaceability of vegetation types, protected area networks, and future land-use pressures. These conservation corridors covered 21.1% of the planning region (ranging from 600 to 5200 km^2) and successfully achieved targets for biological processes and to a lesser extent for vegetation types. The corridors we identified are intended to promote the persistence of ecological processes (gradients and fixed processes) and fulfill half of the biodiversity pattern target. We compared the conservation corridors with a simplified corridor design consisting of a fixed-width buffer along major rivers. Conservation corridors outperformed river buffers in seven out of eight criteria. Our corridor design can provide a tool for quantifying trade-offs between various criteria (biodiversity pattern and process, implementation constraints and opportunities). A land-use management model was developed to facilitate implementation of conservation actions within these corridors.
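Least-cost path analysis, one half of the corridor-design method, can be sketched on a toy cost raster as follows; the graph library and the cost values are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

# Hypothetical 2D cost raster: low values = untransformed land, high
# values = transformed/degraded land the corridor should avoid.
cost = [
    [1, 1, 5, 9, 1],
    [9, 1, 5, 1, 1],
    [9, 1, 1, 1, 9],
    [1, 9, 9, 1, 1],
]
rows, cols = len(cost), len(cost[0])
G = nx.grid_2d_graph(rows, cols)
for u, v in G.edges:
    # Edge weight approximates the cost of crossing between two cells.
    G[u][v]["weight"] = (cost[u[0]][u[1]] + cost[v[0]][v[1]]) / 2

# Corridor spine between two conservation anchors along the gradient.
path = nx.shortest_path(G, (0, 0), (3, 4), weight="weight")
print(path)
```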
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. In cluttered images, objects usually have large ambiguity with their backgrounds, and effective algorithms for large-scale weakly supervised localization in cluttered backgrounds are lacking. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and the background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method that requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve annotation precision by 10% over previous methods. More importantly, we achieve detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
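The first step, latent semantic analysis over a visual-word representation, can be sketched with a truncated SVD. The bag-of-visual-words matrix below is randomly generated for illustration, and the category-selection step is only indicated in a comment.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Hypothetical bag-of-visual-words matrix: rows are image regions,
# columns are visual words; counts would come from clustered features.
rng = np.random.default_rng(4)
counts = rng.poisson(1.0, size=(500, 300)).astype(float)

# LSA: each latent dimension groups co-occurring visual words, i.e. a
# candidate "latent category" (object, part, or background like sky).
lsa = TruncatedSVD(n_components=10, random_state=0)
region_topics = lsa.fit_transform(counts)

# A category-selection step would score each latent dimension by how
# well it discriminates positive from negative training images.
dominant = region_topics.argmax(axis=1)
print(np.bincount(dominant, minlength=10))
```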
Quantifying design trade-offs of beryllium targets on NIF
NASA Astrophysics Data System (ADS)
Yi, S. A.; Zylstra, A. B.; Kline, J. L.; Loomis, E. N.; Kyrala, G. A.; Shah, R. C.; Perry, T. S.; Kanzleiter, R. J.; Batha, S. H.; MacLaren, S. A.; Ralph, J. E.; Masse, L. P.; Salmonson, J. D.; Tipton, R. E.; Callahan, D. A.; Hurricane, O. A.
2017-10-01
An important determinant of target performance is implosion kinetic energy, which scales with the capsule size. The maximum achievable performance for a given laser is thus related to the largest capsule that can be imploded symmetrically, constrained by drive uniformity. A limiting factor for symmetric radiation drive is the ratio of hohlraum to capsule radii, or case-to-capsule ratio (CCR). For a fixed laser energy, a larger hohlraum allows for driving bigger capsules symmetrically at the cost of reduced peak radiation temperature (Tr). Beryllium ablators may thus allow for unique target design trade-offs due to their higher ablation efficiency at lower Tr. By utilizing larger hohlraum sizes than most modern NIF designs, beryllium capsules have the potential to operate in unique regions of the target design parameter space. We present design simulations of beryllium targets with large CCRs of 4.3 and 3.7. These are scaled surrogates of large-hohlraum, low-Tr beryllium targets, with the goal of quantifying symmetry tunability as a function of CCR. This work was performed under the auspices of the U.S. DOE by LANL under contract DE-AC52-06NA25396, and by LLNL under contract DE-AC52-07NA27344.
Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M
2013-06-01
Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other targets critical for process technology and implementation at process scale, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large-scale operation. The identified process conditions are then translated into microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions, and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse, and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels.
Fast Open-World Person Re-Identification.
Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi
2018-05-01
Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and breached in real-world applications, since the probe population in target people search can be extremely vast in practice due to the ambiguity of the probe search space boundary. It is therefore unrealistic to assume that every probe person is a target person, and large-scale search over person images is inherently demanded. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge probe set and an open person population in search, and thus closer to practical deployments. Under LSOW, the under-studied problem of person re-id efficiency is essential in addition to the commonly studied re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks have been conducted to validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.
Test Information Targeting Strategies for Adaptive Multistage Testing Designs.
ERIC Educational Resources Information Center
Luecht, Richard M.; Burgin, William
Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…
NASA Astrophysics Data System (ADS)
Chan, YinThai
2016-03-01
Colloidal semiconductor nanocrystals are ideal fluorophores for clinical diagnostics, therapeutics, and highly sensitive biochip applications due to their high photostability, size-tunable color of emission, and flexible surface chemistry. The relatively recent development of core-seeded semiconductor nanorods showed that the presence of a rod-like shell can confer even more advantageous physicochemical properties than their spherical counterparts, such as large multi-photon absorption cross-sections and facet-specific chemistry that can be exploited to deposit secondary nanoparticles. It may be envisaged that these highly fluorescent nanorods can be integrated with large-scale integrated (LSI) microfluidic systems, which allow the miniaturization and integration of multiple biochemical processes in a single device at the nanoliter scale, resulting in a highly sensitive and automated detection platform. In this talk, I will describe an LSI microfluidic device that integrates RNA extraction, reverse transcription to cDNA, amplification, and target pull-down to detect the histidine decarboxylase (HDC) gene directly from human white blood cell samples. When anisotropic colloidal semiconductor nanorods (NRs) were used as the fluorescent readout, the detection limit was found to be 0.4 ng of total RNA, much lower than that obtained using spherical quantum dots (QDs) or organic dyes. This was attributed to the large action cross-section of NRs and their high probability of target capture in a pull-down detection scheme. The combination of large-scale integrated microfluidics with highly fluorescent semiconductor NRs may find widespread utility in point-of-care devices and multi-target diagnostics.
Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector
NASA Astrophysics Data System (ADS)
Kumar, P.; Mishra, T.; Banerjee, R.
2017-12-01
India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels), along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022), in the INDCs submitted under the Paris Agreement. But large-scale integration of renewable energy is a complex process that faces a number of challenges, including capital intensiveness, matching intermittent generation to load with minimal storage capacity, and reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and to analyze the implications for power sector operations. The study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely, with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios: a base case (no RE addition), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low-RE scenario (50 GW solar, 30 GW wind), created to analyze the implications of large-scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the investment decisions involved. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
NASA Astrophysics Data System (ADS)
Rasera, L. G.; Mariethoz, G.; Lane, S. N.
2017-12-01
Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.
Ai, Haixin; Wu, Xuewei; Qi, Mengyuan; Zhang, Li; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng
2018-06-01
In recent years, new strains of influenza virus such as H7N9, H10N8, H5N6, and H5N8 have continued to emerge. There is an urgent need for the discovery of new anti-influenza drugs, as well as for accurate and efficient large-scale inhibitor screening methods. In this study, we focused on six influenza virus proteins that could serve as anti-influenza drug targets: neuraminidase (NA), hemagglutinin (HA), matrix protein 1 (M1), the M2 proton channel (M2), nucleoprotein (NP), and non-structural protein 1 (NS1). Structure-based molecular docking was used to identify potential inhibitors of these drug targets from 13144 compounds in the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform. The results showed that 56 compounds could inhibit more than two drug targets simultaneously. We then used reverse docking to study the interaction of these compounds with host targets, finding that 22 of the compounds could bind stably to host targets with high binding free energies. The results show that Chinese herbal medicines have a multi-target effect, inhibiting influenza virus both directly through viral protein targets and indirectly through human protein targets. This method is of great value for large-scale virtual screening of new anti-influenza compounds.
2017-01-01
Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167576
Target charging in short-pulse-laser-plasma experiments.
Dubois, J-L; Lubrano-Lavaderci, F; Raffestin, D; Ribolzi, J; Gazave, J; Compant La Fontaine, A; d'Humières, E; Hulin, S; Nicolaï, Ph; Poyé, A; Tikhonchuk, V T
2014-01-01
Interaction of high-intensity laser pulses with solid targets results in the generation of large quantities of energetic electrons, which are the origin of various effects such as intense x-ray emission, ion acceleration, and so on. Some of these electrons escape the target, leaving behind a significant positive electric charge and creating a strong electromagnetic pulse long after the end of the laser pulse. We propose here a detailed model of the target electric polarization induced by a short and intense laser pulse and an escaping electron bunch. A specially designed experiment provides direct measurements of the target polarization and the discharge current as functions of the laser energy, pulse duration, and target size. Large-scale numerical simulations describe the energetic electron generation and emission from the target. The model, experiment, and numerical simulations demonstrate that hot-electron ejection may continue long after the laser pulse ends, significantly enhancing the polarization charge.
Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; ...
2015-11-17
The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.
Molecular inversion probe assay.
Absalan, Farnaz; Ronaghi, Mostafa
2007-01-01
We describe molecular inversion probe (MIP) technologies for large-scale genetic analyses. The technique provides a comprehensive and powerful tool for the analysis of genetic variation and enables affordable, large-scale studies that will help uncover the genetic basis of complex disease and explain individual variation in response to therapeutics. Major applications of MIP technologies include targeted genotyping, from focused regions to whole-genome studies, and allele quantification of genomic rearrangements. The MIP technology (used in the HapMap project) provides an efficient, scalable, and affordable way to score polymorphisms in case/control populations for genetic studies. It offers the highest commercially available multiplexing levels and assay conversion rates for targeted genotyping, enabling more informative, genome-wide studies with either the functional (direct detection) approach or the indirect detection approach.
ERIC Educational Resources Information Center
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica
2016-01-01
Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…
NASA Astrophysics Data System (ADS)
van der Bogert, C. H.; Hiesinger, H.; Dundas, C. M.; Krüger, T.; McEwen, A. S.; Zanetti, M.; Robinson, M. S.
2017-12-01
Recent work on dating Copernican-aged craters, using Lunar Reconnaissance Orbiter (LRO) Camera data, re-encountered a curious discrepancy in crater size-frequency distribution (CSFD) measurements that was observed, but not understood, during the Apollo era. For example, at Tycho, Copernicus, and Aristarchus craters, CSFDs of impact melt deposits give significantly younger relative and absolute model ages (AMAs) than impact ejecta blankets, although these two units formed during one impact event, and would ideally yield coeval ages at the resolution of the CSFD technique. We investigated the effects of contrasting target properties on CSFDs and their resultant relative and absolute model ages for coeval lunar impact melt and ejecta units. We counted craters with diameters through the transition from strength- to gravity-scaling on two large impact melt deposits at Tycho and King craters, and we used pi-group scaling calculations to model the effects of differing target properties on final crater diameters for five different theoretical lunar targets. The new CSFD for the large King Crater melt pond bridges the gap between the discrepant CSFDs within a single geologic unit. Thus, the observed trends in the impact melt CSFDs support the occurrence of target property effects, rather than self-secondary and/or field secondary contamination. The CSFDs generated from the pi-group scaling calculations show that targets with higher density and effective strength yield smaller crater diameters than weaker targets, such that the relative ages of the former are lower relative to the latter. Consequently, coeval impact melt and ejecta units will have discrepant apparent ages. Target property differences also affect the resulting slope of the CSFD, with stronger targets exhibiting shallower slopes, so that the final crater diameters may differ more greatly at smaller diameters. Besides their application to age dating, the CSFDs may provide additional information about the characteristics of the target. For example, the transition diameter from strength- to gravity-scaling could provide a tool for investigating the relative strengths of different geologic units. The magnitude of the offset between the impact melt and ejecta isochrons may also provide information about the relative target properties and/or exposure/degradation ages of the two units. Robotic or human sampling of coeval units on the Moon could provide a direct test of the importance and magnitude of target property effects on CSFDs.
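For readers who want the flavor of the pi-group calculation, here is a gravity-regime sketch in Python. The coupling constants K and beta are placeholders (real values depend on target porosity and friction, and the strength regime is omitted entirely), but the density dependence illustrates the qualitative effect described above: a denser target yields a smaller final crater.

```python
import numpy as np

def crater_radius(a, U, g=1.62, rho_t=2500.0, delta=3000.0,
                  K=1.1, beta=0.22):
    """Gravity-regime pi-group scaling sketch (Holsapple-style):
    pi_R = K * pi_2**(-beta), with pi_2 = g*a/U**2 (gravity-scaled
    size) and pi_R = R * (rho_t/m)**(1/3). K and beta are placeholder
    values, not fitted constants; the strength term is omitted."""
    m = delta * (4.0 / 3.0) * np.pi * a**3      # impactor mass, kg
    pi2 = g * a / U**2
    pi_r = K * pi2**(-beta)
    return pi_r * (m / rho_t) ** (1.0 / 3.0)    # crater radius, m

# Same impactor (radius a, speed U) into two targets: the denser
# target yields a smaller crater, shifting the apparent CSFD age.
for rho_t in (1500.0, 3000.0):
    print(rho_t, crater_radius(a=50.0, U=15000.0, rho_t=rho_t))
```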
Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina
2010-01-01
mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.
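mirSVR itself trains on miRanda-derived sequence and context features, which are not reproduced here; the following sketch only illustrates the general pattern of ranking candidate target sites with a support vector regression on invented features.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training set: rows are predicted target sites, columns
# are sequence/context features (e.g. seed pairing, site accessibility,
# local AU content); y is a measured log-fold downregulation.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = X @ np.array([-0.5, -0.3, 0.2, 0.1]) + rng.normal(0, 0.1, 300)

model = SVR(kernel="rbf", C=1.0).fit(X, y)

# Rank new candidate sites: the most negative predicted score marks
# the strongest expected downregulation.
candidates = rng.normal(size=(10, 4))
order = np.argsort(model.predict(candidates))
print(order)
```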
Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis
2015-01-01
Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.
Large scale tracking algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
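One standard alternative to hypothesis-tree methods is global-nearest-neighbour association, which solves a single optimal assignment per frame and so sidesteps the combinatorial explosion (at the cost of keeping only one hypothesis). A minimal sketch with made-up positions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=5.0):
    """Global-nearest-neighbour data association: one optimal
    assignment per frame instead of a growing hypothesis tree."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :],
                          axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Gating rejects implausible pairings in dense scenes.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
dets = np.array([[0.5, -0.2], [19.5, 5.5], [10.2, 9.8], [40.0, 40.0]])
print(associate(tracks, dets))  # the far detection would spawn a new track
```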
Zhang, Bo; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Wu, Ziyin; Zhang, Wenjuan; Yang, Xiaoyan; Gong, Fukai; Li, Yuerong; Chen, Xiaoyu; Gao, Shuo; Chen, Xuetong; Li, Yan; Lu, Aiping; Wang, Yonghua
2016-02-25
The development of modern omics technology has not significantly improved the efficiency of drug development; precise and targeted drug discovery remains unsolved. Here, a large-scale cross-species molecular network association (CSMNA) approach for targeted drug screening from natural sources is presented. The algorithm integrates molecular network omics data from humans and 267 plants and microbes, establishing the biological relationships between them and extracting evolutionarily convergent chemicals. This technique allows the researcher to assess targeted drugs for specific human diseases based on specific plant or microbe pathways. In a prospective validation, connections between the plant Halliwell-Asada (HA) cycle and the human Nrf2-ARE pathway were verified, and the manner by which the HA cycle molecules act on the human Nrf2-ARE pathway as antioxidants was determined. This shows the potential applicability of this approach in drug discovery. The current method integrates disparate evolutionary species into chemico-biologically coherent circuits, suggesting a new cross-species omics analysis strategy for rational drug development.
Progress in long scale length laser plasma interactions
NASA Astrophysics Data System (ADS)
Glenzer, S. H.; Arnold, P.; Bardsley, G.; Berger, R. L.; Bonanno, G.; Borger, T.; Bower, D. E.; Bowers, M.; Bryant, R.; Buckman, S.; Burkhart, S. C.; Campbell, K.; Chrisp, M. P.; Cohen, B. I.; Constantin, C.; Cooper, F.; Cox, J.; Dewald, E.; Divol, L.; Dixit, S.; Duncan, J.; Eder, D.; Edwards, J.; Erbert, G.; Felker, B.; Fornes, J.; Frieders, G.; Froula, D. H.; Gardner, S. D.; Gates, C.; Gonzalez, M.; Grace, S.; Gregori, G.; Greenwood, A.; Griffith, R.; Hall, T.; Hammel, B. A.; Haynam, C.; Heestand, G.; Henesian, M.; Hermes, G.; Hinkel, D.; Holder, J.; Holdner, F.; Holtmeier, G.; Hsing, W.; Huber, S.; James, T.; Johnson, S.; Jones, O. S.; Kalantar, D.; Kamperschroer, J. H.; Kauffman, R.; Kelleher, T.; Knight, J.; Kirkwood, R. K.; Kruer, W. L.; Labiak, W.; Landen, O. L.; Langdon, A. B.; Langer, S.; Latray, D.; Lee, A.; Lee, F. D.; Lund, D.; MacGowan, B.; Marshall, S.; McBride, J.; McCarville, T.; McGrew, L.; Mackinnon, A. J.; Mahavandi, S.; Manes, K.; Marshall, C.; Menapace, J.; Mertens, E.; Meezan, N.; Miller, G.; Montelongo, S.; Moody, J. D.; Moses, E.; Munro, D.; Murray, J.; Neumann, J.; Newton, M.; Ng, E.; Niemann, C.; Nikitin, A.; Opsahl, P.; Padilla, E.; Parham, T.; Parrish, G.; Petty, C.; Polk, M.; Powell, C.; Reinbachs, I.; Rekow, V.; Rinnert, R.; Riordan, B.; Rhodes, M.; Roberts, V.; Robey, H.; Ross, G.; Sailors, S.; Saunders, R.; Schmitt, M.; Schneider, M. B.; Shiromizu, S.; Spaeth, M.; Stephens, A.; Still, B.; Suter, L. J.; Tietbohl, G.; Tobin, M.; Tuck, J.; Van Wonterghem, B. M.; Vidal, R.; Voloshin, D.; Wallace, R.; Wegner, P.; Whitman, P.; Williams, E. A.; Williams, K.; Winward, K.; Work, K.; Young, B.; Young, P. E.; Zapata, P.; Bahr, R. E.; Seka, W.; Fernandez, J.; Montgomery, D.; Rose, H.
2004-12-01
The first experiments on the National Ignition Facility (NIF) have employed the first four beams to measure propagation and laser backscattering losses in large ignition-size plasmas. Gas-filled targets between 2 and 7 mm length have been heated from one side by overlapping the focal spots of the four beams from one quad operated at 351 nm (3ω) with a total intensity of 2 × 10^15 W cm^-2. The targets were filled with 1 atm of CO2, producing up to 7 mm long homogeneously heated plasmas with densities of ne = 6 × 10^20 cm^-3 and temperatures of Te = 2 keV. The high energy in an NIF quad of beams of 16 kJ, illuminating the target from one direction, creates unique conditions for the study of laser-plasma interactions at scale lengths not previously accessible. The propagation through the large-scale plasma was measured with a gated x-ray imager that was filtered for 3.5 keV x-rays. These data indicate that the beams interact with the full length of this ignition-scale plasma during the last ~1 ns of the experiment. During that time, the full aperture measurements of the stimulated Brillouin scattering and stimulated Raman scattering show scattering into the four focusing lenses of 3% for the smallest length (~2 mm), increasing to 10-12% for ~7 mm. These results demonstrate the NIF experimental capabilities and further provide a benchmark for three-dimensional modelling of the laser-plasma interactions at ignition-size scale lengths.
Vulnerability Analyst’s Guide to Geometric Target Description
1992-09-01
NASA Astrophysics Data System (ADS)
Flippo, Kirk; Hegelich, B. Manuel; Cort Gautier, D.; Johnson, J. Randy; Kline, John L.; Shimada, Tsutomu; Fernández, Juan C.; Gaillard, Sandrine; Rassuchine, Jennifer; Le Galloudec, Nathalie; Cowan, Thomas E.; Malekos, Steve; Korgan, Grant
2006-10-01
Ion-driven Fast Ignition (IFI) has certain advantages over electron-driven FI due to a possible large reduction in the amount of energy required. Recent experiments at the Los Alamos National Laboratory's Trident facility have yielded ion energies and efficiencies many times in excess of recently published scaling laws, leading to even more potential advantages of IFI. Proton energies in excess of 35 MeV have been observed from targets produced by the University of Nevada, Reno - dubbed ``Pizza-top Cone'' targets - at intensities of only 1x10^19 W/cm^2 with 20 joules in 600 fs. Energies in excess of 24 MeV were observed from simple flat-foil targets as well. The observed energies, above any published scaling laws, are attributed to target production, preparation, and shot-to-shot monitoring of many laser parameters, especially the laser ASE prepulse level and laser pulse duration. The laser parameters are monitored in real time to keep the laser in optimal condition throughout the run, providing high-quality, reproducible shots.
Bellamy, Chloe; Altringham, John
2015-01-01
Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape-scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. The Pipistrellus species were positively related to built environments at small scales and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine-resolution, large-extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. Habitat suitability modelling has the potential to become a standard tool for supporting landscape-scale decision-making as relevant data and open-source, user-friendly, and peer-reviewed software become widely available.
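A minimal sketch of the multi-scale workflow described above, under stated assumptions: the habitat raster, cell size, and roost coordinates are hypothetical, scipy's uniform_filter provides the moving-window (focal) means per scale, and a regularized logistic model on presence versus target-group pseudo-absences stands in for MaxEnt (the two are closely related, but this is not the authors' implementation).

```python
# Sketch: multi-scale predictors + presence/pseudo-absence classifier (MaxEnt stand-in).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
woodland = rng.random((200, 200))        # hypothetical habitat raster (fraction cover)
cell_size_m = 50
scales_m = [500, 1000, 3000, 6000]       # candidate radii within the study's 500 m - 6 km range

# Focal (moving-window) means give one predictor layer per spatial scale.
layers = {s: uniform_filter(woodland, size=max(1, int(s / cell_size_m))) for s in scales_m}

presence = rng.integers(0, 200, size=(40, 2))   # hypothetical roost records (row, col)
pseudo = rng.integers(0, 200, size=(400, 2))    # target-group pseudo-absences

def extract(points):
    return np.column_stack([layers[s][points[:, 0], points[:, 1]] for s in scales_m])

X = np.vstack([extract(presence), extract(pseudo)])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(pseudo))])

model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
suitability = model.predict_proba(X)[:, 1]
print("coefficient per scale:", dict(zip(scales_m, model.coef_[0].round(3))))
```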
Water limited agriculture in Africa: Climate change sensitivity of large scale land investments
NASA Astrophysics Data System (ADS)
Rulli, M. C.; D'Odorico, P.; Chiarelli, D. D.; Davis, K. F.
2015-12-01
The past few decades have seen unprecedented changes in the global agricultural system, with a dramatic increase in the rates of food production fueled by an escalating demand for food calories as a result of demographic growth, dietary changes, and - more recently - new bioenergy policies. Food prices have become consistently higher and increasingly volatile, with dramatic spikes in 2007-08 and 2010-11. The confluence of these factors has heightened demand for land and brought a wave of land investment to the developing world: some of the more affluent countries are trying to secure land rights in areas suitable for agriculture. According to some estimates, to date roughly 38 million hectares have been acquired worldwide by large scale investors, 16 million of which are in Africa. More than 85% of large scale land acquisitions in Africa are by foreign investors. Many land deals are motivated not only by the need for fertile land but also by the water resources required for crop production. Despite some recent assessments of the water appropriation associated with large scale land investments, their impact on the water resources of the target countries under present conditions and climate change scenarios remains poorly understood. Here we investigate the irrigation water requirements of various crops planted in the acquired land as an indicator of the pressure likely placed by land investors on the ("blue") water resources of target regions in Africa, and evaluate the sensitivity to climate change scenarios.
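The blue-water pressure in question can be illustrated with a standard FAO-56-style calculation: crop water demand is the reference evapotranspiration scaled by a crop coefficient, and the irrigation (blue-water) requirement is the part of that demand unmet by effective rainfall. A minimal sketch with entirely hypothetical monthly values, not the study's data:

```python
# Sketch of a seasonal irrigation-requirement estimate in the spirit of FAO-56.
# All coefficients and monthly values are hypothetical placeholders.
kc = [0.4, 0.4, 0.7, 1.0, 1.15, 1.15, 1.1, 0.9, 0.6, 0.0, 0.0, 0.0]  # crop coefficient
et0_mm = [60, 70, 110, 140, 170, 180, 175, 160, 120, 90, 65, 55]      # reference ET (mm)
precip_mm = [80, 60, 50, 30, 10, 5, 5, 10, 25, 50, 70, 85]            # monthly rainfall (mm)

def effective_precip(p_mm):
    # Simple USDA-SCS approximation; real assessments use site-specific methods.
    return p_mm * (125 - 0.2 * p_mm) / 125 if p_mm < 250 else 125 + 0.1 * p_mm

# Blue water = crop evapotranspiration (Kc * ET0) unmet by effective rainfall.
irrigation_mm = sum(
    max(0.0, kc[m] * et0_mm[m] - effective_precip(precip_mm[m])) for m in range(12)
)
print(f"seasonal blue-water irrigation requirement: {irrigation_mm:.0f} mm")
```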
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
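A minimal sketch of a greedy flavor of ensemble selection in the spirit of the approximations described above (not the authors' code): grow the ensemble one receptor conformation at a time, score each ligand by its best docking score across the ensemble, and keep additions that raise the ROC AUC. Docking scores and activity labels are simulated.

```python
# Sketch: greedy AUC-maximizing selection of receptor conformations.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_conf, n_lig = 12, 300
labels = rng.integers(0, 2, n_lig)                      # 1 = active, 0 = inactive
# Hypothetical docking scores (higher = better); actives score better on some conformations.
scores = rng.normal(0, 1, (n_conf, n_lig)) + 0.8 * labels * (rng.random((n_conf, 1)) > 0.5)

def ensemble_auc(members):
    pooled = scores[list(members)].max(axis=0)          # best score across the ensemble
    return roc_auc_score(labels, pooled)

ensemble, remaining, best_auc = [], set(range(n_conf)), 0.0
while remaining:
    cand, auc = max(((c, ensemble_auc(ensemble + [c])) for c in remaining),
                    key=lambda t: t[1])
    if auc <= best_auc:                                 # stop when no conformation helps
        break
    ensemble.append(cand)
    remaining.remove(cand)
    best_auc = auc
print(f"selected conformations {ensemble} with AUC {best_auc:.3f}")
```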
Targeted enrichment strategies for next-generation plant biology
Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua Udall
2012-01-01
The dramatic advances offered by modern DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale oceansat remote sensing image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on the 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
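A minimal sketch of this block-wise pipeline on a synthetic scene: tile the image, take a windowed 2-D FFT per tile (a 2-D STFT with one window position per tile), summarize the spectrum, fit a simple Gaussian background model, and flag outlier tiles. The scalar spectrum summary and the threshold are illustrative simplifications of the paper's per-frequency statistical model.

```python
# Sketch: block-wise 2-D STFT with statistical background modeling for detection.
import numpy as np

rng = np.random.default_rng(2)
scene = rng.normal(0, 1, (512, 512))             # hypothetical sea-surface image
scene[300:304, 200:204] += 8.0                   # hypothetical bright ship target

B = 32                                           # sub-block size
win = np.outer(np.hanning(B), np.hanning(B))     # 2-D window

def block_spectrum(tile):
    return np.abs(np.fft.fftshift(np.fft.fft2(tile * win)))

feats, coords = [], []
for i in range(0, scene.shape[0], B):
    for j in range(0, scene.shape[1], B):
        feats.append(block_spectrum(scene[i:i + B, j:j + B]).mean())  # crude summary
        coords.append((i, j))
feats = np.array(feats)

# Model the sea background as approximately Gaussian in this feature; flag outliers.
mu, sigma = feats.mean(), feats.std()
detections = [c for c, f in zip(coords, feats) if f > mu + 4 * sigma]
print("candidate ship blocks:", detections)
```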
Remote Imaging Applied to Schistosomiasis Control: The Anning River Project
NASA Technical Reports Server (NTRS)
Seto, Edmund Y. W.; Maszle, Don R.; Spear, Robert C.; Gong, Peng
1997-01-01
The use of satellite imaging to remotely detect areas of high risk for transmission of infectious disease is an appealing prospect for large-scale monitoring of these diseases. The detection of large-scale environmental determinants of disease risk, often called landscape epidemiology, has been motivated by several authors (Pavlovsky 1966; Meade et al. 1988). The basic notion is that large-scale factors such as population density, air temperature, hydrological conditions, soil type, and vegetation can determine in a coarse fashion the local conditions contributing to disease vector abundance and human contact with disease agents. These large-scale factors can often be remotely detected by sensors or cameras mounted on satellite or aircraft platforms and can thus be used in a predictive model to mark high-risk areas of transmission and to target control or monitoring efforts. A review of satellite technologies for this purpose was recently presented by Washino and Wood (1994), Hay (1997), and Hay et al. (1997).
Artificial intelligence issues related to automated computing operations
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1989-01-01
Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
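The core of a Boolean viewshed, and of the angle-above-local-horizon measure mentioned above, reduces to a running-maximum test of elevation angles along a ray from the observer. A minimal single-profile sketch on synthetic terrain (real analyses sweep many rays over a 2-D surface model; all parameters are hypothetical):

```python
# Sketch: line-of-sight viewshed along one terrain profile, plus angle above horizon.
import numpy as np

rng = np.random.default_rng(3)
dem = np.cumsum(rng.normal(0, 0.5, 200)) + 100.0    # hypothetical terrain profile (m)
cell = 1.0                                           # cell size (m)
obs_z = dem[0] + 1.7                                 # observer eye height (m)

angles = np.arctan2(dem[1:] - obs_z, cell * np.arange(1, len(dem)))

# A cell is visible when its elevation angle reaches the running maximum
# (the local horizon formed by all closer terrain).
horizon = np.maximum.accumulate(angles)
visible = angles >= horizon
# Angle of each cell above the horizon set by closer terrain (negative = hidden).
angle_above_horizon = np.degrees(angles - np.concatenate(([-np.pi / 2], horizon[:-1])))

print(f"{visible.sum()} of {len(visible)} cells visible along the profile")
```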
Gray matter alterations in chronic pain: A network-oriented meta-analytic approach
Cauda, Franco; Palermo, Sara; Costa, Tommaso; Torta, Riccardo; Duca, Sergio; Vercelli, Ugo; Geminiani, Giuliano; Torta, Diana M.E.
2014-01-01
Several studies have attempted to characterize morphological brain changes due to chronic pain. Although it has repeatedly been suggested that longstanding pain induces gray matter modifications, there is still some controversy surrounding the direction of the change (increase or decrease in gray matter) and the role of psychological and psychiatric comorbidities. In this study, we propose a novel, network-oriented, meta-analytic approach to characterize morphological changes in chronic pain. We used network decomposition to investigate whether different kinds of chronic pain are associated with a common or specific set of altered networks. Representational similarity techniques, network decomposition and model-based clustering were employed: i) to verify the presence of a core set of brain areas commonly modified by chronic pain; ii) to investigate the involvement of these areas in a large-scale network perspective; iii) to study the relationships between the altered networks; and iv) to find out whether chronic pain targets clusters of areas. Our results showed that chronic pain causes both core and pathology-specific gray matter alterations in large-scale networks. Common alterations were observed in the prefrontal regions, in the anterior insula, cingulate cortex, basal ganglia, thalamus, periaqueductal gray, post- and pre-central gyri and inferior parietal lobule. We observed that the salience and attentional networks were targeted in a very similar way by different chronic pain pathologies. Conversely, alterations in the sensorimotor and attention circuits were differentially targeted by chronic pain pathologies. Moreover, model-based clustering revealed that chronic pain, in line with some neurodegenerative diseases, selectively targets some large-scale brain networks. Altogether these findings indicate that chronic pain can be better conceived and studied in a network perspective. PMID:24936419
FDTD method for laser absorption in metals for large scale problems.
Deng, Chun; Ki, Hyungson
2013-10-21
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid cells. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
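For orientation, a minimal 1-D Yee-scheme FDTD loop with a lossy half-space, illustrating the kind of update such a method builds on; the wavelength-enlargement idea would rescale the source wavelength while retuning material constants so that reflectivity is preserved. All parameters here are illustrative, not the authors'.

```python
# Sketch: 1-D FDTD with a conductive ("metal-like") half-space and a 1.06 um source.
import numpy as np

nz, nt = 400, 1000
c0, dz = 3e8, 1e-8
dt = dz / (2 * c0)                          # satisfies the 1-D Courant condition
mu0, eps0 = 4e-7 * np.pi, 8.854e-12
ez, hy = np.zeros(nz), np.zeros(nz)

sigma = np.zeros(nz)
sigma[nz // 2:] = 5e3                        # illustrative conductivity on the right
loss = sigma * dt / (2 * eps0)
ca = (1 - loss) / (1 + loss)
cb = (dt / (eps0 * dz)) / (1 + loss)

for n in range(nt):
    hy[:-1] += (ez[1:] - ez[:-1]) * dt / (mu0 * dz)          # H-field update
    ez[1:] = ca[1:] * ez[1:] + cb[1:] * (hy[1:] - hy[:-1])   # lossy E-field update
    ez[50] += np.sin(2 * np.pi * (c0 / 1.06e-6) * n * dt)    # soft source

print("peak field in the metal region:", np.abs(ez[nz // 2:]).max())
```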
A Magnetic Bead-Integrated Chip for the Large Scale Manufacture of Normalized esiRNAs
Wang, Zhao; Huang, Huang; Zhang, Hanshuo; Sun, Changhong; Hao, Yang; Yang, Junyu; Fan, Yu; Xi, Jianzhong Jeff
2012-01-01
The chemically-synthesized siRNA duplex has become a powerful and widely used tool for RNAi loss-of-function studies, but suffers from a high off-target effect problem. Recently, endoribonuclease-prepared siRNA (esiRNA) has been shown to be an attractive alternative due to its lower off-target effect and cost effectiveness. However, the current manufacturing method for esiRNA is complicated, mainly with regard to purification and normalization at large scale. In this study, we present a magnetic bead-integrated chip that can immobilize amplification or transcription products on beads and accomplish transcription, digestion, normalization and purification in a robust and convenient manner. This chip is equipped to manufacture ready-to-use esiRNAs at large scale. Silencing specificity and efficiency of these esiRNAs were validated at the transcriptional, translational and functional levels. Manufacture of several normalized esiRNAs in a single well, including those silencing PARP1 and BRCA1, was successfully achieved, and the esiRNAs were subsequently utilized to effectively investigate their synergistic effect on cell viability. A small esiRNA library targeting 68 tyrosine kinase genes was constructed for a loss-of-function study, and four genes were identified as regulating the migration capability of HeLa cells. We believe that this approach provides a more robust and cost-effective choice for manufacturing esiRNAs than current approaches, and therefore these heterogeneous RNA strands may have utility in most intensive and extensive applications. PMID:22761791
From drug to protein: using yeast genetics for high-throughput target discovery.
Armour, Christopher D; Lum, Pek Yee
2005-02-01
The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.
Functional genomics (FG) screens, using RNAi or CRISPR technology, have become a standard tool for systematic, genome-wide loss-of-function studies for therapeutic target discovery. As in many large-scale assays, however, off-target effects, variable reagent potency and experimental noise must be accounted for to appropriately control for false positives.
Duvvuri, Subrahmanyam; McKeon, Beverley
2017-03-13
Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Academic-industrial partnerships in drug discovery in the age of genomics.
Harris, Tim; Papadopoulos, Stelios; Goldstein, David B
2015-06-01
Many US FDA-approved drugs have been developed through productive interactions between the biotechnology industry and academia. Technological breakthroughs in genomics, in particular large-scale sequencing of human genomes, are creating new opportunities to understand the biology of disease and to identify high-value targets relevant to a broad range of disorders. However, the scale of the work required to appropriately analyze large genomic and clinical data sets is challenging industry to develop a broader view of what areas of work constitute precompetitive research. Copyright © 2015 Elsevier Ltd. All rights reserved.
Targeted Capture and High-Throughput Sequencing Using Molecular Inversion Probes (MIPs).
Cantsilieris, Stuart; Stessman, Holly A; Shendure, Jay; Eichler, Evan E
2017-01-01
Molecular inversion probes (MIPs) in combination with massively parallel DNA sequencing represent a versatile, yet economical tool for targeted sequencing of genomic DNA. Several thousand genomic targets can be selectively captured using long oligonucleotides containing unique targeting arms and universal linkers. The ability to append sequencing adaptors and sample-specific barcodes allows large-scale pooling and subsequent high-throughput sequencing at relatively low cost per sample. Here, we describe a "wet bench" protocol detailing the capture and subsequent sequencing of >2000 genomic targets from 192 samples, representative of a single lane on the Illumina HiSeq 2000 platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aleksandrova, I. V.; Koresheva, E. R., E-mail: elena.koresheva@gmail.com; Krokhin, O. N.
2016-12-15
In inertial fusion energy research, considerable attention has recently been focused on low-cost fabrication of a large number of targets by developing a specialized layering module of repeatable operation. The targets must be free-standing, or unmounted. Therefore, the development of a target factory for inertial confinement fusion (ICF) is based on methods that can ensure a cost-effective target production with high repeatability. Minimization of the amount of tritium (i.e., minimization of time and space at all production stages) is a necessary condition as well. Additionally, the cryogenic hydrogen fuel inside the targets must have a structure (ultrafine layers—the grain size should be scaled back to the nanometer range) that supports the fuel layer survivability under target injection and transport through the reactor chamber. To meet the above requirements, significant progress has been made at the Lebedev Physical Institute (LPI) in the technology developed on the basis of rapid fuel layering inside moving free-standing targets (FST), also referred to as the FST layering method. Owing to the research carried out at LPI, unique experience has been gained in the development of the FST-layering module for target fabrication with an ultrafine fuel layer, including a reactor-scale target design. This experience can be used for the development of the next-generation FST-layering module for construction of a prototype of a target factory for power laser facilities and inertial fusion power plants.
Kaufmann, Markus; Schuffenhauer, Ansgar; Fruh, Isabelle; Klein, Jessica; Thiemeyer, Anke; Rigo, Pierre; Gomez-Mancilla, Baltazar; Heidinger-Millot, Valerie; Bouwmeester, Tewis; Schopfer, Ulrich; Mueller, Matthias; Fodor, Barna D; Cobos-Correa, Amanda
2015-10-01
Fragile X syndrome (FXS) is the most common form of inherited mental retardation, and it is caused in most cases by epigenetic silencing of the Fmr1 gene. Today, no specific therapy exists for FXS, and current treatments are only directed at improving behavioral symptoms. Neuronal progenitors derived from FXS patient induced pluripotent stem cells (iPSCs) represent a unique model to study the disease and develop assays for large-scale drug discovery screens since they conserve the Fmr1 gene silenced within the disease context. We have established a high-content imaging assay to run a large-scale phenotypic screen aimed at identifying compounds that reactivate the silenced Fmr1 gene. A set of 50,000 compounds was tested, including modulators of several epigenetic targets. We describe an integrated drug discovery model comprising iPSC generation, culture scale-up, and quality control and screening with a very sensitive high-content imaging assay assisted by single-cell image analysis and multiparametric data analysis based on machine learning algorithms. The screening identified several compounds that induced a weak expression of fragile X mental retardation protein (FMRP) and thus sets the basis for further large-scale screens to find candidate drugs or targets tackling the underlying mechanism of FXS with potential for therapeutic intervention. © 2015 Society for Laboratory Automation and Screening.
Choi, Seunghee; Coon, Joshua J.; Goggans, Matthew Scott; Kreisman, Thomas F.; Silver, Daniel M.; Nesson, Michael H.
2016-01-01
Many of the materials that are challenging for large animals to cut or puncture are also cut and punctured by much smaller organisms that are limited to much smaller forces. Small organisms can overcome their force limitations by using sharper tools, but one drawback may be an increased susceptibility to fracture. We use simple contact mechanics models to estimate how much smaller the diameter of the tips or edges of tools such as teeth, claws and cutting blades must be in smaller organisms in order for them to puncture or cut the same materials as larger organisms. In order to produce the same maximum stress when maximum force scales as the square of body length, the diameter of the tool region that is in contact with the target material must scale isometrically for punch-like tools (e.g. scorpion stings) on thick targets, and for crushing tools (e.g. molars). For punch-like tools on thin targets, and for cutting blades on thick targets, the tip or edge diameters must be even smaller than expected from isometry in smaller animals. The diameters of a small sample of unworn punch-like tools from a large range of animal sizes are consistent with the model, scaling isometrically or more steeply (positively allometric). In addition, we find that the force required to puncture a thin target using real biological tools scales linearly with tip diameter, as predicted by the model. We argue that, for smaller tools, the minimum energy to fracture the tool will be a greater fraction of the minimum energy required to puncture the target, making fracture more likely. Finally, energy stored in tool bending, relative to the energy to fracture the tool, increases rapidly with the aspect ratio (length/width), and we expect that smaller organisms often have to employ higher aspect ratio tools in order to puncture or cut to the required depth with available force. The extra stored energy in higher aspect ratio tools is likely to increase the probability of fracture. We discuss some of the implications of the suggested scaling rules and possible adaptations to compensate for fracture sensitivity in smaller organisms. PMID:27274804
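The punch-on-thick-target case of the scaling argument can be made explicit. A sketch of the reasoning in the abstract's own terms (flat-ended punch of tip diameter d loaded by force F; the notation is assumed, not the authors'):

```latex
% Contact stress under a flat-ended punch of tip diameter d:
\sigma \;\approx\; \frac{F}{\pi\,(d/2)^{2}} \;=\; \frac{4F}{\pi d^{2}},
\qquad \text{puncture requires } \sigma \ge \sigma^{*}.
% With maximum force scaling as F \propto L^{2} (L = body length) and the
% target failure stress \sigma^{*} a material constant:
d \;\ge\; \sqrt{\frac{4F}{\pi\sigma^{*}}} \;\propto\; F^{1/2} \;\propto\; L.
```

That is, tip diameter scales isometrically with body size, matching the stated prediction for punch-like tools on thick targets; thin targets and cutting blades yield steeper (negatively allometric in diameter) requirements by the same style of argument.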
Expediting SRM assay development for large-scale targeted proteomics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chaochao; Shi, Tujin; Brown, Joseph N.
2014-08-22
Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to CID in triple quadrupole (QQQ) instrumentation, and by selection of the top six y fragment ions from HCD spectra, >86% of top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for +3 precursors, and a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.
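CE prediction equations of this kind are commonly linear in precursor m/z with charge-specific coefficients, which is why a charge-blind equation can underperform for +3 precursors. A minimal sketch of fitting such an equation per charge state; all values below are synthetic placeholders, not the study's data or coefficients:

```python
# Sketch: per-charge linear fit CE = a*(m/z) + b from CE-optimization results.
import numpy as np

rng = np.random.default_rng(7)
n = 215                                            # mirrors the peptide-set size above
charge = rng.choice([2, 3], size=n)                # precursor charge states
mz = rng.uniform(400, 1200, size=n)                # precursor m/z
true = {2: (0.034, 3.3), 3: (0.044, 0.5)}          # synthetic slopes/intercepts
ce_opt = np.array([true[z][0] * m + true[z][1] for z, m in zip(charge, mz)])
ce_opt += rng.normal(0, 1.0, size=n)               # experimental scatter

for z in (2, 3):
    sel = charge == z
    a, b = np.polyfit(mz[sel], ce_opt[sel], 1)     # least-squares linear fit
    print(f"charge {z}+: CE = {a:.4f} * (m/z) + {b:.2f}")
```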
International law poses problems for negative emissions research
NASA Astrophysics Data System (ADS)
Brent, Kerryn; McGee, Jeffrey; McDonald, Jan; Rohling, Eelco J.
2018-06-01
New international governance arrangements that manage environmental risk and potential conflicts of interests are needed to facilitate negative emissions research that is essential to achieving the large-scale CO2 removal implied by the Paris Agreement targets.
[Genome editing of industrial microorganism].
Zhu, Linjiang; Li, Qi
2015-03-01
Genome editing is defined as highly effective and precise modification of a cellular genome on a large scale. In recent years, such genome-editing methods have developed rapidly in the field of industrial strain improvement. These quickly updating methods have thoroughly changed the old mode of inefficient genetic modification, which was "one modification, one selection marker, and one target site". Highly effective modification modes have been developed in genome editing, including simultaneous modification of multiple genes; highly effective insertion, replacement, and deletion of target genes at the genome scale; and cut-and-paste of large DNA fragments. These new tools for microbial genome editing will certainly be applied widely, increase the efficiency of industrial strain improvement, and promote the transformation of the traditional fermentation industry and the rapid development of novel industrial biotechnology, such as the production of biofuels and biomaterials. The technological principles of these genome-editing methods and their applications are summarized in this review, which can benefit the engineering and construction of industrial microorganisms.
Inferring personal economic status from social network location
NASA Astrophysics Data System (ADS)
Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A.
2017-05-01
It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.
A new way to protect privacy in large-scale genome-wide association studies.
Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak
2013-04-01
Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.
Collins, Jeffrey M; Hunter, Mary; Gordon, Wanda; Kempker, Russell R; Blumberg, Henry M; Ray, Susan M
2018-06-01
Following large declines in tuberculosis transmission in the United States, large-scale screening programs targeting low-risk healthcare workers are increasingly a source of false-positive results. We report a large cluster of presumed false-positive tuberculin skin test results in healthcare workers following a change to 50-dose vials of Tubersol tuberculin. Infect Control Hosp Epidemiol 2018;39:750-752.
Rodríguez-Gómez, Francisco; Romero-Gil, Verónica; Arroyo-López, Francisco N; Roldán-Reyes, Juan C; Torres-Gallardo, Rosa; Bautista-Gallego, Joaquín; García-García, Pedro; Garrido-Fernández, Antonio
2017-01-01
This work studies the inoculation conditions for allowing the survival/predominance of a potential probiotic strain ( Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent wild initial microbiota growth) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70-90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. Results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase.
NASA Astrophysics Data System (ADS)
Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model that is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated into predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is exemplarily applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, a certain skill of the model could be detected for the monsoonal precipitation amounts in the South Asian target areas. The skill of the model for the dry summer season in Central Asia and the transition seasons over South Asia is found to be low. A sensitivity analysis by means of well-known climate indices reveals the major large-scale controlling mechanisms for the seasonal precipitation climate of each target area. For the Central Asian target areas, both the El Nino Southern Oscillation and the North Atlantic Oscillation are identified as important controlling factors for precipitation totals during the moist spring season. Drought conditions are found to be triggered by a warm ENSO phase in combination with a positive phase of the NAO. For the monsoonal summer precipitation amounts over South Asia, the model suggests a distinct negative response to El Nino events.
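A minimal sketch of the three-stage pipeline described above on synthetic data: screen gridded SST cells by correlation with the target precipitation series, aggregate significant cells into a few predictor regions, and fit a random-forest model. Clustering here uses cell coordinates for simplicity, whereas the paper clusters by variability; everything below is illustrative, not the authors' code.

```python
# Sketch: correlation screening -> predictor regions -> random-forest forecast.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_years, grid = 40, (18, 36)                       # hypothetical annual SST fields
sst = rng.normal(0, 1, (n_years, *grid))
precip = 0.6 * sst[:, 5, 10] + rng.normal(0, 1, n_years)   # synthetic teleconnection

# 1) Cell-wise correlation screening (a single lead time shown for brevity).
flat = sst.reshape(n_years, -1)
pvals = np.array([pearsonr(flat[:, k], precip)[1] for k in range(flat.shape[1])])
sig = np.where(pvals < 0.05)[0]

# 2) Cluster significant cells into predictor regions; average SST per region.
ij = np.column_stack(np.unravel_index(sig, grid))
k = min(4, len(sig))                               # assumes some cells pass screening
regions = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ij)
X = np.column_stack([flat[:, sig[regions == r]].mean(axis=1) for r in range(k)])

# 3) Random-forest forecast model, evaluated out of sample (last 10 years).
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:-10], precip[:-10])
r = np.corrcoef(rf.predict(X[-10:]), precip[-10:])[0, 1]
print(f"hold-out correlation: {r:.2f}")
```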
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
Manufacturing Process Developments for Regeneratively-Cooled Channel Wall Rocket Nozzles
NASA Technical Reports Server (NTRS)
Gradl, Paul; Brandsmeier, Will
2016-01-01
Regeneratively cooled channel wall nozzles incorporate a series of integral coolant channels to contain the coolant to maintain adequate wall temperatures and expand hot gas providing engine thrust and specific impulse. NASA has been evaluating manufacturing techniques targeting large scale channel wall nozzles to support affordability of current and future liquid rocket engine nozzles and thrust chamber assemblies. The development of these large scale manufacturing techniques focus on the liner formation, channel slotting with advanced abrasive water-jet milling techniques and closeout of the coolant channels to replace or augment other cost reduction techniques being evaluated for nozzles. NASA is developing a series of channel closeout techniques including large scale additive manufacturing laser deposition and explosively bonded closeouts. A series of subscale nozzles were completed evaluating these processes. Fabrication of mechanical test and metallography samples, in addition to subscale hardware has focused on Inconel 625, 300 series stainless, aluminum alloys as well as other candidate materials. Evaluations of these techniques are demonstrating potential for significant cost reductions for large scale nozzles and chambers. Hot fire testing is planned using these techniques in the future.
K. Bruce Jones; Anne C. Neale; Timothy G. Wade; James D. Wickham; Chad L. Cross; Curtis M. Edmonds; Thomas R. Loveland; Maliha S. Nash; Kurt H. Riitters; Elizabeth R. Smith
2001-01-01
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioitizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the development of methods to conduct such broad-scale assessments. Field-based methods have...
NASA Astrophysics Data System (ADS)
Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.
2018-01-01
We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.
Ovchinnikov, Victor; Karplus, Martin
2012-07-26
The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.
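A minimal sketch of the restraint formulation of TMD discussed above (not the authors' implementation): the target structure is superposed onto the instantaneous coordinates by a global best-fit (Kabsch) rotation, and a harmonic penalty drives the RMSD toward a prescribed value rho(t). Coordinates and the force constant are hypothetical, and the gradient uses the standard simplification that the best-fit alignment is stationary.

```python
# Sketch: TMD-style harmonic RMSD restraint with global best-fit superposition.
import numpy as np

def kabsch_align(P, Q):
    """Rotate/translate Q onto P (both (N,3)); returns the aligned copy of Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    V, S, Wt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(V @ Wt))            # proper rotation (no reflection)
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    return Qc @ R + P.mean(0)

def tmd_energy_and_forces(x, target, rho, k=100.0):
    """U = 0.5*k*(RMSD(x, aligned target) - rho)^2 and its negative gradient."""
    t = kabsch_align(x, target)                   # global best-fit rotation of the target
    diff = x - t
    rmsd = np.sqrt((diff ** 2).sum() / len(x))
    grad = k * (rmsd - rho) * diff / (len(x) * max(rmsd, 1e-12))  # dRMSD/dx = diff/(N*rmsd)
    return 0.5 * k * (rmsd - rho) ** 2, -grad

rng = np.random.default_rng(5)
x_now = rng.normal(0, 1, (50, 3))                 # instantaneous coordinates
x_target = x_now + rng.normal(0, 0.3, (50, 3))    # target conformation
U, F = tmd_energy_and_forces(x_now, x_target, rho=0.2)
print(f"restraint energy {U:.3f}, max |force| {np.abs(F).max():.3f}")
```

In a TMD run, rho(t) is decreased steadily from the initial RMSD toward zero, which is precisely what pulls the large-scale (low-frequency) motions in first when the global best-fit rotation is used.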
Analysis of calibration accuracy of cameras with different target sizes for large field of view
NASA Astrophysics Data System (ADS)
Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan
2018-03-01
Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing, and camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view that still meets the calibration precision requirement for a wide field of view needs to be studied. In this paper, cameras are calibrated with a series of checkerboard and round calibration targets of different dimensions. The ratios of target size to camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain camera parameters for each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points are analyzed for the different ratios. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio exceeds 45%.
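A minimal OpenCV sketch of the checkerboard calibration and per-view reprojection-error computation that such an analysis rests on; the image directory and board geometry are hypothetical placeholders, and at least one detected board is assumed.

```python
# Sketch: checkerboard calibration and mean per-view reprojection error (OpenCV).
import glob
import cv2
import numpy as np

board = (9, 6)                   # inner corners per row/column (hypothetical)
square_mm = 20.0
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts = [], []
for path in glob.glob("calib_images/*.png"):      # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Mean reprojection error per view: the metric compared across size ratios above.
errs = []
for op, ip, rv, tv in zip(obj_pts, img_pts, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
    errs.append(cv2.norm(ip, proj, cv2.NORM_L2) / len(proj))
print("RMS:", rms, "mean per-view reprojection error:", np.mean(errs))
```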
NASA Astrophysics Data System (ADS)
Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.
2017-05-01
Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr-1 in the first half of the study period, to >7900 kha yr-1 in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.
A Mapping of Drug Space from the Viewpoint of Small Molecule Metabolism
Basuino, Li; Chambers, Henry F.; Lee, Deok-Sun; Wiest, Olaf G.; Babbitt, Patricia C.
2009-01-01
Small molecule drugs target many core metabolic enzymes in humans and pathogens, often mimicking endogenous ligands. The effects may be therapeutic or toxic, but are frequently unexpected. A large-scale mapping of the intersection between drugs and metabolism is needed to better guide drug discovery. To map the intersection between drugs and metabolism, we have grouped drugs and metabolites by their associated targets and enzymes using ligand-based set signatures created to quantify their degree of similarity in chemical space. The results reveal the chemical space that has been explored for metabolic targets, where successful drugs have been found, and what novel territory remains. To aid other researchers in their drug discovery efforts, we have created an online resource of interactive maps linking drugs to metabolism. These maps predict the “effect space” comprising likely target enzymes for each of the 246 MDDR drug classes in humans. The online resource also provides species-specific interactive drug-metabolism maps for each of the 385 model organisms and pathogens in the BioCyc database collection. Chemical similarity links between drugs and metabolites predict potential toxicity, suggest routes of metabolism, and reveal drug polypharmacology. The metabolic maps enable interactive navigation of the vast biological data on potential metabolic drug targets and the drug chemistry currently available to prosecute those targets. Thus, this work provides a large-scale approach to ligand-based prediction of drug action in small molecule metabolism. PMID:19701464
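Chemical-space similarity of the kind used above to link drugs and metabolites is commonly quantified with fingerprint Tanimoto coefficients. A minimal RDKit sketch, an illustrative stand-in for the paper's set signatures rather than its actual method, using aspirin and its genuine metabolite salicylic acid:

```python
# Sketch: Morgan-fingerprint Tanimoto similarity between a drug and a metabolite.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

pairs = {
    ("aspirin", "CC(=O)Oc1ccccc1C(=O)O"): ("salicylic acid", "Oc1ccccc1C(=O)O"),
}

def fp(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

for (dname, dsmi), (mname, msmi) in pairs.items():
    sim = DataStructs.TanimotoSimilarity(fp(dsmi), fp(msmi))
    print(f"{dname} vs {mname}: Tanimoto = {sim:.2f}")
```

High similarity between a drug and an endogenous metabolite is exactly the signal such maps exploit to suggest mimicry of natural ligands, likely metabolic routes, and potential off-target effects.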
Large-Scale Analysis of Network Bistability for Human Cancers
Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki
2010-01-01
Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618
Motor scaling by viewing distance of early visual motion signals during smooth pursuit
NASA Technical Reports Server (NTRS)
Zhou, Hui-Hui; Wei, Min; Angelaki, Dora E.
2002-01-01
The geometry of gaze stabilization during head translation requires eye movements to scale proportionally to the inverse of target distance. Such a scaling has indeed been demonstrated to exist for the translational vestibuloocular reflex (TVOR), as well as optic flow-selective translational visuomotor reflexes (e.g., ocular following, OFR). The similarities in this scaling by a neural estimate of target distance for both the TVOR and the OFR have been interpreted to suggest that the two reflexes share common premotor processing. Because the neural substrates of OFR are partly shared by those for the generation of pursuit eye movements, we wanted to know if the site of gain modulation for TVOR and OFR is also part of a major pathway for pursuit. Thus, in the present studies, we investigated in rhesus monkeys whether initial eye velocity and acceleration during the open-loop portion of step ramp pursuit scales with target distance. Specifically, with visual motion identical on the retina during tracking at different distances (12, 24, and 60 cm), we compared the first 80 ms of horizontal pursuit. We report that initial eye velocity and acceleration exhibits either no or a very small dependence on vergence angle that is at least an order of magnitude less than the corresponding dependence of the TVOR and OFR. The results suggest that the neural substrates for motor scaling by target distance remain largely distinct from the main pathway for pursuit.
de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas
2018-01-23
High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
Lam, Max; Trampush, Joey W; Yu, Jin; Knowles, Emma; Davies, Gail; Liewald, David C; Starr, John M; Djurovic, Srdjan; Melle, Ingrid; Sundet, Kjetil; Christoforou, Andrea; Reinvang, Ivar; DeRosse, Pamela; Lundervold, Astri J; Steen, Vidar M; Espeseth, Thomas; Räikkönen, Katri; Widen, Elisabeth; Palotie, Aarno; Eriksson, Johan G; Giegling, Ina; Konte, Bettina; Roussos, Panos; Giakoumaki, Stella; Burdick, Katherine E; Payton, Antony; Ollier, William; Chiba-Falek, Ornit; Attix, Deborah K; Need, Anna C; Cirulli, Elizabeth T; Voineskos, Aristotle N; Stefanis, Nikos C; Avramopoulos, Dimitrios; Hatzimanolis, Alex; Arking, Dan E; Smyrnis, Nikolaos; Bilder, Robert M; Freimer, Nelson A; Cannon, Tyrone D; London, Edythe; Poldrack, Russell A; Sabb, Fred W; Congdon, Eliza; Conley, Emily Drabant; Scult, Matthew A; Dickinson, Dwight; Straub, Richard E; Donohoe, Gary; Morris, Derek; Corvin, Aiden; Gill, Michael; Hariri, Ahmad R; Weinberger, Daniel R; Pendleton, Neil; Bitsios, Panos; Rujescu, Dan; Lahti, Jari; Le Hellard, Stephanie; Keller, Matthew C; Andreassen, Ole A; Deary, Ian J; Glahn, David C; Malhotra, Anil K; Lencz, Todd
2017-11-28
Here, we present a large (n = 107,207) genome-wide association study (GWAS) of general cognitive ability ("g"), further enhanced by combining results with a large-scale GWAS of educational attainment. We identified 70 independent genomic loci associated with general cognitive ability. Results showed significant enrichment for genes causing Mendelian disorders with an intellectual disability phenotype. Competitive pathway analysis implicated the biological processes of neurogenesis and synaptic regulation, as well as the gene targets of two pharmacologic agents: cinnarizine, a T-type calcium channel blocker, and LY97241, a potassium channel inhibitor. Transcriptome-wide and epigenome-wide analysis revealed that the implicated loci were enriched for genes expressed across all brain regions (most strongly in the cerebellum). Enrichment was exclusive to genes expressed in neurons but not oligodendrocytes or astrocytes. Finally, we report genetic correlations between cognitive ability and disparate phenotypes including psychiatric disorders, several autoimmune disorders, longevity, and maternal age at first birth. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Stable isotope probing to study functional components of complex microbial ecosystems.
Mazard, Sophie; Schäfer, Hendrik
2014-01-01
This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure culture-independent technique circumvents limitations of traditional microbial isolation techniques or data mining from large-scale whole-community metagenomic studies to tease out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, from fatty-acid analyses, community structure and ecology studies, and targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and time-consuming bioinformatics analyses of large datasets.
Quasi-Experimental Evaluation of the Effectiveness of a Large-Scale Readmission Reduction Program.
Jenq, Grace Y; Doyle, Margaret M; Belton, Beverly M; Herrin, Jeph; Horwitz, Leora I
2016-05-01
Feasibility, effectiveness, and sustainability of large-scale readmission reduction efforts are uncertain. The Greater New Haven Coalition for Safe Transitions and Readmission Reductions was funded by the Center for Medicare & Medicaid Services (CMS) to reduce readmissions among all discharged Medicare fee-for-service (FFS) patients. To evaluate whether overall Medicare FFS readmissions were reduced through an intervention applied to high-risk discharge patients. This quasi-experimental evaluation took place at an urban academic medical center. Target discharge patients were older than 64 years with Medicare FFS insurance, residing in nearby zip codes, and discharged alive to home or facility and not against medical advice or to hospice; control discharge patients were older than 54 years with the same zip codes and discharge disposition but without Medicare FFS insurance if older than 64 years. High-risk target discharge patients were selectively enrolled in the program. Personalized transitional care, including education, medication reconciliation, follow-up telephone calls, and linkage to community resources. We measured the 30-day unplanned same-hospital readmission rates in the baseline period (May 1, 2011, through April 30, 2012) and intervention period (October 1, 2012, through May 31, 2014). We enrolled 10 621 (58.3%) of 18 223 target discharge patients (73.9% of discharge patients screened as high risk) and included all target discharge patients in the analysis. The mean (SD) age of the target discharge patients was 79.7 (8.8) years. The adjusted readmission rate decreased from 21.5% to 19.5% in the target population and from 21.1% to 21.0% in the control population, a relative reduction of 9.3%. The number needed to treat to avoid 1 readmission was 50. In a difference-in-differences analysis using a logistic regression model, the odds of readmission in the target population decreased significantly more than that of the control population in the intervention period (odds ratio, 0.90; 95% CI, 0.83-0.99; P = .03). In a comparative interrupted time series analysis of the difference in monthly adjusted admission rates, the target population decreased an absolute -3.09 (95% CI, -6.47 to 0.29; P = .07) relative to the control population, a similar but nonsignificant effect. This large-scale readmission reduction program reduced readmissions by 9.3% among the full population targeted by the CMS despite being delivered only to high-risk patients. However, it did not achieve the goal reduction set by the CMS.
River Food Web Response to Large-Scale Riparian Zone Manipulations
Wootton, J. Timothy
2012-01-01
Conservation programs often focus on select species, leading to management plans based on the autecology of the focal species, but multiple ecosystem components can be affected both by the environmental factors impacting, and the management targeting, focal species. These broader effects can have indirect impacts on target species through the web of interactions within ecosystems. For example, human activity can strongly alter riparian vegetation, potentially impacting both economically-important salmonids and their associated river food web. In an Olympic Peninsula river, Washington state, USA, replicated large-scale riparian vegetation manipulations implemented with the long-term (>40 yr) goal of improving salmon habitat did not affect water temperature, nutrient limitation or habitat characteristics, but reduced canopy cover, causing reduced energy input via leaf litter, increased incident solar radiation (UV and PAR) and increased algal production compared to controls. In response, benthic algae, most insect taxa, and juvenile salmonids increased in manipulated areas. Stable isotope analysis revealed a predominant contribution of algal-derived energy to salmonid diets in manipulated reaches. The experiment demonstrates that riparian management targeting salmonids strongly affects river food webs via changes in the energy base, illustrates how species-based management strategies can have unanticipated indirect effects on the target species via the associated food web, and supports ecosystem-based management approaches for restoring depleted salmonid stocks. PMID:23284786
Kuijpers, Niels GA; Chroumpi, Soultana; Vos, Tim; Solis-Escalante, Daniel; Bosman, Lizanne; Pronk, Jack T; Daran, Jean-Marc; Daran-Lapujade, Pascale
2013-01-01
In vivo assembly of overlapping fragments by homologous recombination in Saccharomyces cerevisiae is a powerful method to engineer large DNA constructs. Whereas most in vivo assembly methods reported to date result in circular vectors, stable integrated constructs are often preferred for metabolic engineering as they are required for large-scale industrial application. The present study explores the potential of combining in vivo assembly of large, multigene expression constructs with their targeted chromosomal integration in S. cerevisiae. Combined assembly and targeted integration of a ten-fragment 22-kb construct to a single chromosomal locus was successfully achieved in a single transformation process, but with low efficiency (5% of the analyzed transformants contained the correctly assembled construct). The meganuclease I-SceI was therefore used to introduce a double-strand break at the targeted chromosomal locus, thus to facilitate integration of the assembled construct. I-SceI-assisted integration dramatically increased the efficiency of assembly and integration of the same construct to 95%. This study paves the way for the fast, efficient, and stable integration of large DNA constructs in S. cerevisiae chromosomes. PMID:24028550
2012-01-01
Background A central goal in Huntington's disease (HD) research is to identify and prioritize candidate targets for neuroprotective intervention, which requires genome-scale information on the modifiers of early-stage neuron injury in HD. Results Here, we performed a large-scale RNA interference screen in C. elegans strains that express N-terminal huntingtin (htt) in touch receptor neurons. These neurons control the response to light touch. Their function is strongly impaired by expanded polyglutamines (128Q) as shown by the nearly complete loss of touch response in adult animals, providing an in vivo model in which to manipulate the early phases of expanded-polyQ neurotoxicity. In total, 6034 genes were examined, revealing 662 gene inactivations that either reduce or aggravate defective touch response in 128Q animals. Several genes were previously implicated in HD or neurodegenerative disease, suggesting that this screen has effectively identified candidate targets for HD. Network-based analysis emphasized a subset of high-confidence modifier genes in pathways of interest in HD including metabolic, neurodevelopmental and pro-survival pathways. Finally, 49 modifiers of 128Q-neuron dysfunction that are dysregulated in the striatum of either R/2 or CHL2 HD mice, or both, were identified. Conclusions Collectively, these results highlight the relevance to HD pathogenesis, providing novel information on the potential therapeutic targets for neuroprotection in HD. PMID:22413862
Lejeune, François-Xavier; Mesrob, Lilia; Parmentier, Frédéric; Bicep, Cedric; Vazquez-Manrique, Rafael P; Parker, J Alex; Vert, Jean-Philippe; Tourette, Cendrine; Neri, Christian
2012-03-13
A central goal in Huntington's disease (HD) research is to identify and prioritize candidate targets for neuroprotective intervention, which requires genome-scale information on the modifiers of early-stage neuron injury in HD. Here, we performed a large-scale RNA interference screen in C. elegans strains that express N-terminal huntingtin (htt) in touch receptor neurons. These neurons control the response to light touch. Their function is strongly impaired by expanded polyglutamines (128Q) as shown by the nearly complete loss of touch response in adult animals, providing an in vivo model in which to manipulate the early phases of expanded-polyQ neurotoxicity. In total, 6034 genes were examined, revealing 662 gene inactivations that either reduce or aggravate defective touch response in 128Q animals. Several genes were previously implicated in HD or neurodegenerative disease, suggesting that this screen has effectively identified candidate targets for HD. Network-based analysis emphasized a subset of high-confidence modifier genes in pathways of interest in HD including metabolic, neurodevelopmental and pro-survival pathways. Finally, 49 modifiers of 128Q-neuron dysfunction that are dysregulated in the striatum of either R/2 or CHL2 HD mice, or both, were identified. Collectively, these results highlight the relevance to HD pathogenesis, providing novel information on the potential therapeutic targets for neuroprotection in HD. © 2012 Lejeune et al; licensee BioMed Central Ltd.
Neurogenomics and the role of a large mutational target on rapid behavioral change.
Stanley, Craig E; Kulathinal, Rob J
2016-11-08
Behavior, while complex and dynamic, is among the most diverse, derived, and rapidly evolving traits in animals. The highly labile nature of heritable behavioral change is observed in such evolutionary phenomena as the emergence of converged behaviors in domesticated animals, the rapid evolution of preferences, and the routine development of ethological isolation between diverging populations and species. In fact, it is believed that nervous system development and its potential to evolve a seemingly infinite array of behavioral innovations played a major role in the successful diversification of metazoans, including our own human lineage. However, unlike other rapidly evolving functional systems such as sperm-egg interactions and immune defense, the genetic basis of rapid behavioral change remains elusive. Here we propose that the rapid divergence and widespread novelty of innate and adaptive behavior is primarily a function of its genomic architecture. Specifically, we hypothesize that the broad diversity of behavioral phenotypes present at micro- and macroevolutionary scales is promoted by a disproportionately large mutational target of neurogenic genes. We present evidence that these large neuro-behavioral targets are significant and ubiquitous in animal genomes and suggest that behavior's novelty and rapid emergence are driven by a number of factors including more selection on a larger pool of variants, a greater role of phenotypic plasticity, and/or unique molecular features present in large genes. We briefly discuss the origins of these large neurogenic genes, as they relate to the remarkable diversity of metazoan behaviors, and highlight key consequences on both behavioral traits and neurogenic disease across, respectively, evolutionary and ontogenetic time scales. Current approaches to studying the genetic mechanisms underlying rapid phenotypic change primarily focus on identifying signatures of Darwinian selection in protein-coding regions. In contrast, the large mutational target hypothesis places genomic architecture and a larger allelic pool at the forefront of rapid evolutionary change, particularly in genetic systems that are polygenic and regulatory in nature. Genomic data from brain and neural tissues in mammals as well as a preliminary survey of neurogenic genes from comparative genomic data support this hypothesis while rejecting both positive and relaxed selection on proteins or higher mutation rates. In mammals and invertebrates, neurogenic genes harbor larger protein-coding regions and possess a richer regulatory repertoire of miRNA targets and transcription factor binding sites. Overall, neurogenic genes cover a disproportionately large genomic fraction, providing a sizeable substrate for evolutionary, genetic, and molecular mechanisms to act upon. Readily available comparative and functional genomic data provide unexplored opportunities to test whether a distinct neurogenomic architecture can promote rapid behavioral change via several mechanisms unique to large genes, and which components of this large footprint are uniquely metazoan. The large mutational target hypothesis highlights the eminent roles of mutation and functional genomic architecture in generating rapid developmental and evolutionary change. 
It has broad implications on our understanding of the genetics of complex adaptive traits such as behavior by focusing on the importance of mutational input, from SNPs to alternative transcripts to transposable elements, on driving evolutionary rates of functional systems. Such functional divergence has important implications in promoting behavioral isolation across short- and long-term timescales. Due to genome-scaled polygenic adaptation, the large target effect also contributes to our inability to identify adapted behavioral candidate genes. The presence of large neurogenic genes, particularly in the mammalian brain and other neural tissues, further offers emerging insight into the etiology of neurodevelopmental and neurodegenerative diseases. The well-known correlation between neurological spectrum disorders in children and paternal age may simply be a direct result of aging fathers accumulating mutations across these large neurodevelopmental genes. The large mutational target hypothesis can also explain the rapid evolution of other functional systems covering a large genomic fraction such as male fertility and its preferential association with hybrid male sterility among closely related taxa. Overall, a focus on mutational potential may increase our power in understanding the genetic basis of complex phenotypes such as behavior while filling a general gap in understanding their evolution.
Effects of neonicotinoids and fipronil on non-target invertebrates.
Pisa, L W; Amaral-Rogers, V; Belzunces, L P; Bonmatin, J M; Downs, C A; Goulson, D; Kreutzweiser, D P; Krupke, C; Liess, M; McField, M; Morrissey, C A; Noome, D A; Settele, J; Simon-Delso, N; Stark, J D; Van der Sluijs, J P; Van Dyck, H; Wiemers, M
2015-01-01
We assessed the state of knowledge regarding the effects of large-scale pollution with neonicotinoid insecticides and fipronil on non-target invertebrate species of terrestrial, freshwater and marine environments. A large section of the assessment is dedicated to the state of knowledge on sublethal effects on honeybees (Apis mellifera) because this important pollinator is the most studied non-target invertebrate species. Lepidoptera (butterflies and moths), Lumbricidae (earthworms), Apoidae sensu lato (bumblebees, solitary bees) and the section "other invertebrates" review available studies on the other terrestrial species. The sections on freshwater and marine species are rather short as little is known so far about the impact of neonicotinoid insecticides and fipronil on the diverse invertebrate fauna of these widely exposed habitats. For terrestrial and aquatic invertebrate species, the known effects of neonicotinoid pesticides and fipronil are described ranging from organismal toxicology and behavioural effects to population-level effects. For earthworms, freshwater and marine species, the relation of findings to regulatory risk assessment is described. Neonicotinoid insecticides exhibit very high toxicity to a wide range of invertebrates, particularly insects, and field-realistic exposure is likely to result in both lethal and a broad range of important sublethal impacts. There is a major knowledge gap regarding impacts on the grand majority of invertebrates, many of which perform essential roles enabling healthy ecosystem functioning. The data on the few non-target species on which field tests have been performed are limited by major flaws in the outdated test protocols. Despite large knowledge gaps and uncertainties, enough knowledge exists to conclude that existing levels of pollution with neonicotinoids and fipronil resulting from presently authorized uses frequently exceed the lowest observed adverse effect concentrations and are thus likely to have large-scale and wide ranging negative biological and ecological impacts on a wide range of non-target invertebrates in terrestrial, aquatic, marine and benthic habitats.
GWASeq: targeted re-sequencing follow up to GWAS.
Salomon, Matthew P; Li, Wai Lok Sibon; Edlund, Christopher K; Morrison, John; Fortini, Barbara K; Win, Aung Ko; Conti, David V; Thomas, Duncan C; Duggan, David; Buchanan, Daniel D; Jenkins, Mark A; Hopper, John L; Gallinger, Steven; Le Marchand, Loïc; Newcomb, Polly A; Casey, Graham; Marjoram, Paul
2016-03-03
For the last decade the conceptual framework of the Genome-Wide Association Study (GWAS) has dominated the investigation of human disease and other complex traits. While GWAS have been successful in identifying a large number of variants associated with various phenotypes, the overall amount of heritability explained by these variants remains small. This raises the question of how best to follow up on a GWAS, localize causal variants accounting for GWAS hits, and as a consequence explain more of the so-called "missing" heritability. Advances in high throughput sequencing technologies now allow for the efficient and cost-effective collection of vast amounts of fine-scale genomic data to complement GWAS. We investigate these issues using a colon cancer dataset. After QC, our data consisted of 1993 cases, 899 controls. Using marginal tests of associations, we identify 10 variants distributed among six targeted regions that are significantly associated with colorectal cancer, with eight of the variants being novel to this study. Additionally, we perform so-called 'SNP-set' tests of association and identify two sets of variants that implicate both common and rare variants in the etiology of colorectal cancer. Here we present a large-scale targeted re-sequencing resource focusing on genomic regions implicated in colorectal cancer susceptibility previously identified in several GWAS, which aims to 1) provide fine-scale targeted sequencing data for fine-mapping and 2) provide data resources to address methodological questions regarding the design of sequencing-based follow-up studies to GWAS. Additionally, we show that this strategy successfully identifies novel variants associated with colorectal cancer susceptibility and can implicate both common and rare variants.
Long-term drought sensitivity of trees in second-growth forests in a humid region
Neil Pederson; Kacie Tackett; Ryan W. McEwan; Stacy Clark; Adrienne Cooper; Glade Brosi; Ray Eaton; R. Drew Stockwell
2012-01-01
Classical field methods of reconstructing drought using tree rings in humid, temperate regions typically target old trees from drought-prone sites. This approach limits investigators to a handful of species and excludes large amounts of data that might be useful, especially for coverage gaps in large-scale networks. By sampling in more âtypicalâ forests, network...
Radar Observations of Asteroids 7 Iris, 9 Metis, 12 Victoria, 216 Kleopatra, and 654 Zelinda
NASA Technical Reports Server (NTRS)
Mitchell, David L.; Ostro, Steven J.; Rosema, Keith D.; Hudson, R. Scott; Campbell, Donald B.; Chandler, John F.; Shapiro, Irwin I.
1995-01-01
We report 13-cm wavelength radar observations of the main-belt asteroids 7 Iris, 9 Metis, 12 Victoria, 216 Kleopatra, and 654 Zelinda obtained at Arecibo between 1980 and 1989. The echoes are highly polarized yet broadly distributed in Doppler frequency, indicating that our targets are smooth on decimeter scales but very rough on some scale(s) larger than about I m. The echo spectra are generally consistent with existing size, shape, and spin information based on radiometric, lightcurve, and occultation data. All of our targets possess distinctive radar signatures that reveal large- scale topography. Reflectivity spikes within narrow ranges of rotation phase suggest large flat regions on Iris, Metis, and Zelinda, while bimodal spectra imply nonconvex, possibly bifurcated shapes for Kleopatra and Victoria. Kleopatra has the highest radar albedo yet measured for a main-belt asteroid, indicating a high metal concentration and making Kleopatra the best main-belt candidate for a core remnant of a differentiated and subsequently disrupted parent body. Upon completion of the Arecibo telescope upgrade, there will be several opportunities per year to resolve main-belt asteroids with hundreds of delay-Doppler cells, which can be inverted to provide estimates of both three-dimensional shape and radar scattering properties.
Tkachenko, S.; Baillie, N.; Kuhn, S. E.; ...
2014-04-24
In this study, much less is known about neutron structure than that of the proton due to the absence of free neutron targets. Neutron information is usually extracted from data on nuclear targets such as deuterium, requiring corrections for nuclear binding and nucleon off-shell effects. These corrections are model dependent and have significant uncertainties, especially for large values of the Bjorken scaling variable x. As a consequence, the same data can lead to different conclusions, for example, about the behavior of the d quark distribution in the proton at large x.
Esterhuizen, Johan; Njiru, Basilio; Vale, Glyn A; Lehane, Michael J; Torr, Stephen J
2011-09-01
Control of tsetse flies using insecticide-treated targets is often hampered by vegetation re-growth and encroachment which obscures a target and renders it less effective. Potentially this is of particular concern for the newly developed small targets (0.25 high × 0.5 m wide) which show promise for cost-efficient control of Palpalis group tsetse flies. Consequently the performance of a small target was investigated for Glossina fuscipes fuscipes in Kenya, when the target was obscured following the placement of vegetation to simulate various degrees of natural bush encroachment. Catches decreased significantly only when the target was obscured by more than 80%. Even if a small target is underneath a very low overhanging bush (0.5 m above ground), the numbers of G. f. fuscipes decreased by only about 30% compared to a target in the open. We show that the efficiency of the small targets, even in small (1 m diameter) clearings, is largely uncompromised by vegetation re-growth because G. f. fuscipes readily enter between and under vegetation. The essential characteristic is that there should be some openings between vegetation. This implies that for this important vector of HAT, and possibly other Palpalis group flies, a smaller initial clearance zone around targets can be made and longer interval between site maintenance visits is possible both of which will result in cost savings for large scale operations. We also investigated and discuss other site features e.g. large solid objects and position in relation to the water's edge in terms of the efficacy of the small targets.
Aviation security : long-standing problems impair airport screeners' performance
DOT National Transportation Integrated Search
2000-06-01
The threat of attacks on aircraft by terrorists or others remains a persistent and growing concern for the United States. According to the Federal Bureau of Investigation, the trend in terrorism against U.S. targets is toward large-scale incidents de...
RNAi at work: Targeting invertebrate pests and beneficial organisms' diseases
USDA-ARS?s Scientific Manuscript database
Invertebrates present two types of large scale RNAi application opportunities: pest control and beneficial insect health. The former involves the introduction of sustainable applications to keep pest populations low, and the latter represents the challenge of keeping beneficial organisms healthy. RN...
NASA Astrophysics Data System (ADS)
Liu, M.; Bi, J.; Huang, Y.; Kinney, P. L.
2016-12-01
Jiangsu, which has three national low-carbon pilot cities, is set to be a model province in China for achieving peak carbon targets before 2030. However, according to local planning of responding to climate change, carbon emissions are projected to keep going up before 2020 even the strictest measures are implemented. In other words, innovative measures must be in action after 2020. This work aimed at assessing the air quality and health co-benefits of alternative post-2020 measures to help remove barriers of policy implementation through tying it to local incentives for air quality improvement. To achieve the aim, we select 2010 as baseline year and develop Bussiness As Usual (BAU) and Traditional Carbon Reduction (TCR) scenarios before 2020. Under BAU, only existing climate and air pollution control policies are considered; under TCR, potential climate policies in local planning and existing air pollution control policies are considered. After 2020, integrated gasification combined cycle (IGCC) plant with carbon capture and storage (CCS) technology and large-scale substitution of renewable energy seem to be two promising pathways for achieving peak carbon targets. Therefore, two additional scenarios (TCR-IGCC and TCR-SRE) are set after 2020. Based on the projections of future energy balances and industrial productions, we estimate the pollutant emissions and simulate PM2.5 and ozone concentrations by 2017, 2020, 2030 and 2050 using CMAQ. Then using health impact assessment approach, the premature deaths are estimated and monetized. Results show that the carbon peak in Jiangsu will be achieved before 2030 only under TCR-IGCC and TCR-SRE scenarios. Under three policy scenarios, Jiangsu's carbon emission control targets would have substantial effects on primary air pollutant emissions far beyond those we estimate would be needed to meet the PM2.5 concentration targets in 2017. Compared with IGCC with CCS, large-scale substitutions of renewable energy bring comparable pollutant emission reductions but more health benefits because it reduces more emissions from traffic sources which are more harmful to health. However, large-scale substitution of renewable energy posed challenges on energy supply capacity, which need to be seriously considered in future policy decision.
Kongelf, Anine; Bandewar, Sunita V S; Bharat, Shalini; Collumbien, Martine
2015-01-01
In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled-up in India's national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation's Avahan programme which adopted a business approach to plan and manage implementation at scale. With the focus of evaluation efforts on measuring effectiveness and health impacts there has been little analysis thus far of the interaction of the CM interventions with the sex work industry in complex urban environments. Between March and July 2012 semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers, to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions are not only impacted by, but were shown to shape the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as 'sex workers'. This combined with urban redevelopment and gentrification of traditional red light areas, forcing dispersal and more 'hidden' ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and 'pimps' continued to restrict access to sex workers and the heterogeneous 'community' of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention not compelling. Interventions largely failed to respond to community needs as strong target-orientation skewed activities towards those most easily measured and reported. Large-scale interventions have been impacted by and contributed to an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services.
Experimental measurements of hydrodynamic instabilities on NOVA of relevance to astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budil, K S; Cherfils, C; Drake, R P
1998-09-11
Large lasers such as Nova allow the possibility of achieving regimes of high energy densities in plasmas of millimeter spatial scales and nanosecond time scales. In those plasmas where thermal conductivity and viscosity do not play a significant role, the hydrodynamic evolution is suitable for benchmarking hydrodynamics modeling in astrophysical codes. Several experiments on Nova examine hydrodynamically unstable interfaces. A typical Nova experiment uses a gold millimeter-scale hohlraum to convert the laser energy to a 200 eV blackbody source lasting about a nanosecond. The x-rays ablate a planar target, generating a series of shocks and accelerating the target. The evolvingmore » area1 density is diagnosed by time-resolved radiography, using a second x-ray source. Data from several experiments are presented and diagnostic techniques are discussed.« less
Thogmartin, Wayne E.; Crimmins, Shawn M.; Pearce, Jennie
2014-01-01
Large-scale planning for the conservation of species is often hindered by a poor understanding of factors limiting populations. In regions with declining wildlife populations, it is critical that objective metrics of conservation success are developed to ensure that conservation actions achieve desired results. Using spatially explicit estimates of bird abundance, we evaluated several management alternatives for conserving bird populations in the Prairie Hardwood Transition of the United States. We designed landscapes conserving species at 50% of their current predicted abundance as well as landscapes attempting to achieve species population targets (which often required the doubling of current abundance). Conserving species at reduced (half of current) abundance led to few conservation conflicts. However, because of extensive modification of the landscape to suit human use, strategies for achieving regional population targets for forest bird species would be difficult under even ideal circumstances, and even more so if maintenance of grassland bird populations is also desired. Our results indicated that large-scale restoration of agricultural lands to native grassland and forest habitats may be the most productive conservation action for increasing bird population sizes but the level of landscape transition required to approach target bird population sizes may be societally unacceptable.
Large-scale conservation planning in a multinational marine environment: cost matters.
Mazor, Tessa; Giakoumi, Sylvaine; Kark, Salit; Possingham, Hugh P
2014-07-01
Explicitly including cost in marine conservation planning is essential for achieving feasible and efficient conservation outcomes. Yet, spatial priorities for marine conservation are still often based solely on biodiversity hotspots, species richness, and/or cumulative threat maps. This study aims to provide an approach for including cost when planning large-scale Marine Protected Area (MPA) networks that span multiple countries. Here, we explore the incorporation of cost in the complex setting of the Mediterranean Sea. In order to include cost in conservation prioritization, we developed surrogates that account for revenue from multiple marine sectors: commercial fishing, noncommercial fishing, and aquaculture. Such revenue can translate into an opportunity cost for the implementation of an MPA network. Using the software Marxan, we set conservation targets to protect 10% of the distribution of 77 threatened marine species in the Mediterranean Sea. We compared nine scenarios of opportunity cost by calculating the area and cost required to meet our targets. We further compared our spatial priorities with those that are considered consensus areas by several proposed prioritization schemes in the Mediterranean Sea, none of which explicitly considers cost. We found that for less than 10% of the Sea's area, our conservation targets can be achieved while incurring opportunity costs of less than 1%. In marine systems, we reveal that area is a poor cost surrogate and that the most effective surrogates are those that account for multiple sectors or stakeholders. Furthermore, our results indicate that including cost can greatly influence the selection of spatial priorities for marine conservation of threatened species. Although there are known limitations in multinational large-scale planning, attempting to devise more systematic and rigorous planning methods is especially critical given that collaborative conservation action is on the rise and global financial crisis restricts conservation investments.
Rodríguez-Gómez, Francisco; Romero-Gil, Verónica; Arroyo-López, Francisco N.; Roldán-Reyes, Juan C.; Torres-Gallardo, Rosa; Bautista-Gallego, Joaquín; García-García, Pedro; Garrido-Fernández, Antonio
2017-01-01
This work studies the inoculation conditions for allowing the survival/predominance of a potential probiotic strain (Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent wild initial microbiota growth) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70–90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. Results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase. PMID:28567038
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive, however it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on message passing interface, relying on low failure rate hardware and fast network connection. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against [Formula: see text]2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel Structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then to scale to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).Graphical abstract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alan E.
Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l 2 -SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l 2 -SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l 2 -SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade .
Two-Dimensional Simulations of Electron Shock Ignition at the Megajoule Scale
NASA Astrophysics Data System (ADS)
Shang, W.; Betti, R.
2016-10-01
Shock ignition uses a late strong shock to ignite the hot spot of an inertial confinement fusion capsule. In the standard shock-ignition scheme, an ignitor shock is launched by the ablation pressure from a spike in laser intensity. Recent experiments on OMEGA have shown that focused beams with intensity up to 6 ×1015 W /cm2 can produce copious amounts of hot electrons. The hot electrons are produced by laser-plasma instabilities (LPI's) and can carry up to 15 % of the instantaneous laser power. Megajoule-scale targets will likely produce even more hot electrons because of the large plasma scale length. We show that it is possible to design ignition targets with low implosion velocities that can be shock ignited using LPI-generated hot electrons to obtain high energy gains. These designs are robust to low-mode asymmetries and they ignite even for highly distorted implosions. Electron shock ignition requires tens of kilojoules of hot electrons, which can only be produced on a large laser facility like the National Ignition Facility. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Jones, K.B.; Neale, A.C.; Wade, T.G.; Wickham, J.D.; Cross, C.L.; Edmonds, C.M.; Loveland, Thomas R.; Nash, M.S.; Riitters, K.H.; Smith, E.R.
2001-01-01
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the development of methods to conduct such broad-scale assessments. Field-based methods have proven to be too costly and too inconsistent in their application to make estimates of ecological conditions over large areas. New spatial data derived from satellite imagery and other sources, the development of statistical models relating landscape composition and pattern to ecological endpoints, and geographic information systems (GIS) make it possible to evaluate ecological conditions at multiple scales over broad geographic regions. In this study, we demonstrate the application of spatially distributed models for bird habitat quality and nitrogen yield to streams to assess the consequences of landcover change across the mid-Atlantic region between the 1970s and 1990s. Moreover, we present a way to evaluate spatial concordance between models related to different environmental endpoints. Results of this study should help environmental managers in the mid-Atlantic region target those areas in need of conservation and protection.
Three-Dimensional Hydrodynamic Simulations of OMEGA Implosions
NASA Astrophysics Data System (ADS)
Igumenshchev, I. V.
2016-10-01
The effects of large-scale (with Legendre modes less than 30) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming) and target offset, mount, and layers nonuniformities were investigated using three-dimensional (3-D) hydrodynamic simulations. Simulations indicate that the performance degradation in cryogenic implosions is caused mainly by the target offsets ( 10 to 20 μm), beampower imbalance (σrms 10 %), and initial target asymmetry ( 5% ρRvariation), which distort implosion cores, resulting in a reduced hot-spot confinement and an increased residual kinetic energy of the stagnated target. The ion temperature inferred from the width of simulated neutron spectra are influenced by bulk fuel motion in the distorted hot spot and can result in up to 2-keV apparent temperature increase. Similar temperature variations along different lines of sight are observed. Simulated x-ray images of implosion cores in the 4- to 8-keV energy range show good agreement with experiments. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires reducing large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing high-efficient mid-adiabat (α = 4) implosion designs that mitigate cross-beam energy transfer (CBET) and suppress short-wavelength Rayleigh-Taylor growth. These simulations use a new low-noise 3-D Eulerian hydrodynamic code ASTER. Existing 3-D hydrodynamic codes for direct-drive implosions currently miss CBET and noise-free ray-trace laser deposition algorithms. ASTER overcomes these limitations using a simplified 3-D laser-deposition model, which includes CBET and is capable of simulating the effects of beam-power imbalance, beam mispointing, mistiming, and target offset. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Testing Inflation with Large Scale Structure: Connecting Hopes with Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Marcello; Baldauf, T.; Bond, J. Richard
2014-12-15
The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure is, however, from where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude fmore » $$loc\\atop{NL}$$ (f$$eq\\atop{NL}$$), natural target levels of sensitivity are Δf$$loc, eq\\atop{NL}$$ ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters.We study the long-time asymptotic behavior and find that as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tailsmore » of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.« less
Enabling Large-Scale Design, Synthesis and Validation of Small Molecule Protein-Protein Antagonists
Koes, David; Khoury, Kareem; Huang, Yijun; Wang, Wei; Bista, Michal; Popowicz, Grzegorz M.; Wolf, Siglinde; Holak, Tad A.; Dömling, Alexander; Camacho, Carlos J.
2012-01-01
Although there is no shortage of potential drug targets, there are only a handful known low-molecular-weight inhibitors of protein-protein interactions (PPIs). One problem is that current efforts are dominated by low-yield high-throughput screening, whose rigid framework is not suitable for the diverse chemotypes present in PPIs. Here, we developed a novel pharmacophore-based interactive screening technology that builds on the role anchor residues, or deeply buried hot spots, have in PPIs, and redesigns these entry points with anchor-biased virtual multicomponent reactions, delivering tens of millions of readily synthesizable novel compounds. Application of this approach to the MDM2/p53 cancer target led to high hit rates, resulting in a large and diverse set of confirmed inhibitors, and co-crystal structures validate the designed compounds. Our unique open-access technology promises to expand chemical space and the exploration of the human interactome by leveraging in-house small-scale assays and user-friendly chemistry to rationally design ligands for PPIs with known structure. PMID:22427896
Ectopically tethered CP190 induces large-scale chromatin decondensation
NASA Astrophysics Data System (ADS)
Ahanger, Sajad H.; Günther, Katharina; Weth, Oliver; Bartkuhn, Marek; Bhonde, Ramesh R.; Shouche, Yogesh S.; Renkawitz, Rainer
2014-01-01
Insulator mediated alteration in higher-order chromatin and/or nucleosome organization is an important aspect of epigenetic gene regulation. Recent studies have suggested a key role for CP190 in such processes. In this study, we analysed the effects of ectopically tethered insulator factors on chromatin structure and found that CP190 induces large-scale decondensation when targeted to a condensed lacO array in mammalian and Drosophila cells. In contrast, dCTCF alone, is unable to cause such a decondensation, however, when CP190 is present, dCTCF recruits it to the lacO array and mediates chromatin unfolding. The CP190 induced opening of chromatin may not be correlated with transcriptional activation, as binding of CP190 does not enhance luciferase activity in reporter assays. We propose that CP190 may mediate histone modification and chromatin remodelling activity to induce an open chromatin state by its direct recruitment or targeting by a DNA binding factor such as dCTCF.
Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš
2016-01-01
Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may lead to a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows a potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
Large aperture diffractive space telescope
Hyde, Roderick A.
2001-01-01
A large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two space craft are controlled both to maintain a good optical focus and to point at desired targets which may be either earth bound or celestial.
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
The efficient target tracking algorithm researches have become current research focus of intelligent robots. The main problems of target tracking process in mobile robot face environmental uncertainty. They are very difficult to estimate the target states, illumination change, target shape changes, complex backgrounds, and other factors and all affect the occlusion in tracking robustness. To further improve the target tracking's accuracy and reliability, we present a novel target tracking algorithm to use visual saliency and adaptive support vector machine (ASVM). Furthermore, the paper's algorithm has been based on the mixture saliency of image features. These features include color, brightness, and sport feature. The execution process used visual saliency features and those common characteristics have been expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.
Zaehringer, Julie G; Wambugu, Grace; Kiteme, Boniface; Eckert, Sandra
2018-05-01
Africa has been heavily targeted by large-scale agricultural investments (LAIs) throughout the last decade, with scarcely known impacts on local social-ecological systems. In Kenya, a large number of LAIs were made in the region northwest of Mount Kenya. These large-scale farms produce vegetables and flowers mainly for European markets. However, land use in the region remains dominated by small-scale crop and livestock farms with less than 1 ha of land each, who produce both for their own subsistence and for the local markets. We interviewed 100 small-scale farmers living near five different LAIs to elicit their perceptions of the impacts that these LAIs have on their land use and the overall environment. Furthermore, we analyzed remotely sensed land cover and land use data to assess land use change in the vicinity of the five LAIs. While land use change did not follow a clear trend, a number of small-scale farmers did adapt their crop management to environmental changes such as a reduced river water flows and increased pests, which they attributed to the presence of LAIs. Despite the high number of open conflicts between small-scale land users and LAIs around the issue of river water abstraction, the main environmental impact, felt by almost half of the interviewed land users, was air pollution with agrochemicals sprayed on the LAIs' land. Even though only a low percentage of local land users and their household members were directly involved with LAIs, a large majority of respondents favored the presence of LAIs nearby, as they are believed to contribute to the region's overall economic development. Copyright © 2018 Elsevier Ltd. All rights reserved.
2011-09-01
and Imaging Framework First, the parallelized 3-D FDTD algorithm is applied to simulate composite scattering from targets in a rough ground...solver as pertinent to forward-looking radar sensing , the effects of surface clutter on multistatic target imaging are illustrated with large-scale...Full-wave Characterization of Rough Terrain Surface Effects for Forward-looking Radar Applications: A Scattering and Imaging Study from the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo, J.; Cantini, C.; Crivelli, P.
The Argon Dark Matter (ArDM) experiment consists of a liquid argon (LAr) time projection chamber (TPC) sensitive to nuclear recoils, resulting from scattering of hypothetical Weakly Interacting Massive Particles (WIMPs) on argon targets. With an active target mass of 850 kg ArDM represents an important milestone towards developments for large LAr Dark Matter detectors. Here we present the experimental apparatus currently installed underground at the Laboratorio Subterráneo de Canfranc (LSC), Spain. We show data on gaseous or liquid argon targets recorded in 2015 during the commissioning of ArDM in single phase at zero E-field (ArDM Run I). The data confirmsmore » the overall good and stable performance of the ArDM tonne-scale LAr detector.« less
High Quantum Efficiency OLED Lighting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiang, Joseph
The overall goal of the program was to apply improvements in light outcoupling technology to a practical large area plastic luminaire, and thus enable the product vision of an extremely thin form factor high efficiency large area light source. The target substrate was plastic and the baseline device was operating at 35 LPW at the start of the program. The target LPW of the program was a >2x improvement in the LPW efficacy and the overall amount of light to be delivered was relatively high 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution processmore » on plastic substrates, the program was able to make substantial progress. A small molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from the laboratory scale glass to large area plastic substrates. By transitioning to a small molecule based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement in some cases is even larger, ~30%, and thus there is considerable room for optimism in improving the net light coupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire based hard coat structure are fully developed. The wet coating processes were successfully scaled to large area plastic substrate and resulted in the construction of a 900 lumens luminaire device.« less
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains scale variation relative to the filtering template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. There are three domains in which the scale measurement can be performed: the spatial, spectral, and time domains. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method exploits the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image can therefore be measured by finding the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8. Scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3, and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
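The time-sequential idea can be mimicked numerically: rescale the input at a few candidate factors, correlate each version with the template, and take the factor that yields the largest correlation. A minimal sketch, assuming a normalized cross-correlation metric and SciPy's `zoom` as stand-ins for the optical correlator (all function names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import zoom

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def rescale_to(img, factor, shape):
    """Rescale img by `factor`, then crop/pad to `shape` (top-left aligned for brevity)."""
    out = np.zeros(shape)
    s = zoom(img, factor, order=1)
    h, w = min(s.shape[0], shape[0]), min(s.shape[1], shape[1])
    out[:h, :w] = s[:h, :w]
    return out

def estimate_scale(input_img, template, factors=np.arange(0.8, 1.21, 0.05)):
    """Return the candidate scale factor whose rescaled input best matches the template."""
    scores = [ncc(rescale_to(input_img, f, template.shape), template) for f in factors]
    return factors[int(np.argmax(scores))]
```

Factors beyond 1.2 would be handled as in the paper, by pre-scaling the input by 1/2, 1/3, or 1/4 before running the same search.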
Next-generation sequencing provides unprecedented access to genomic information in archival FFPE tissue samples. However, costs and technical challenges related to RNA isolation and enrichment limit use of whole-genome RNA-sequencing for large-scale studies of FFPE specimens. Rec...
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result, it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for pathologists. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
Li, Chen; Yongbo, Lv; Chi, Chen
2015-01-01
Based on data from 30 provincial regions in China, an assessment and empirical analysis was carried out on the utilization and sharing of large-scale scientific equipment, with a comprehensive assessment model established on three dimensions, namely equipment, utilization, and sharing. The assessment results were interpreted in light of relevant policies. The results showed that, on the whole, the overall development level in the provincial regions of eastern and central China is higher than that in western China. This is mostly because of the large gap among the different provincial regions with respect to equipment levels. But in terms of utilization and sharing, some of the western provincial regions, such as Ningxia, perform well, which is worthy of attention. Policy adjustments targeting this differentiation, improving the capacity of equipment management personnel, perfecting the sharing and cooperation platform, and promoting the establishment of open sharing funds are all important measures to promote the utilization and sharing of large-scale scientific equipment and to narrow the gap among regions. PMID:25937850
Chromatin Landscapes of Retroviral and Transposon Integration Profiles
Badhai, Jitendra; Rust, Alistair G.; Rad, Roland; Hilkens, John; Berns, Anton; van Lohuizen, Maarten; Wessels, Lodewyk F. A.; de Ridder, Jeroen
2014-01-01
The ability of retroviruses and transposons to insert their genetic material into host DNA makes them widely used tools in molecular biology, cancer research and gene therapy. However, these systems have biases that may strongly affect research outcomes. To address this issue, we generated very large datasets of unselected integrations in the mouse genome for the Sleeping Beauty (SB) and piggyBac (PB) transposons, and the Mouse Mammary Tumor Virus (MMTV). We analyzed (epi)genomic features to generate bias maps at both local and genome-wide scales. MMTV showed a remarkably uniform distribution of integrations across the genome. More distinct preferences were observed for the two transposons, with PB showing remarkable resemblance to the bias profiles of the Murine Leukemia Virus. Furthermore, we present a model where target site selection is directed at multiple scales. At a large scale, target site selection is similar across systems, and defined by domain-oriented features, namely expression of proximal genes, proximity to CpG islands and to genic features, chromatin compaction and replication timing. Notable differences between the systems are mainly observed at smaller scales, and are directed by a diverse range of features. To study the effect of these biases on integration sites occupied under selective pressure, we turned to insertional mutagenesis (IM) screens. In IM screens, putative cancer genes are identified by finding frequently targeted genomic regions, or Common Integration Sites (CISs). Within three recently completed IM screens, we identified 7%–33% putative false positive CISs, which are likely not the result of the oncogenic selection process. Moreover, results indicate that PB, compared to SB, is more suited to tag oncogenes. PMID:24721906
Cosmological consistency tests of gravity theory and cosmic acceleration
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2017-01-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by upcoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.
Commercial-scale biotherapeutics manufacturing facility for plant-made pharmaceuticals.
Holtz, Barry R; Berquist, Brian R; Bennett, Lindsay D; Kommineni, Vally J M; Munigunti, Ranjith K; White, Earl L; Wilkerson, Don C; Wong, Kah-Yat I; Ly, Lan H; Marcel, Sylvain
2015-10-01
Rapid, large-scale manufacture of medical countermeasures can be uniquely met by the plant-made-pharmaceutical platform technology. As a participant in the Defense Advanced Research Projects Agency (DARPA) Blue Angel project, the Caliber Biotherapeutics facility was designed, constructed, commissioned and released a therapeutic target (H1N1 influenza subunit vaccine) in <18 months from groundbreaking. As of 2015, this facility was one of the world's largest plant-based manufacturing facilities, with the capacity to process over 3500 kg of plant biomass per week in an automated multilevel growing environment using proprietary LED lighting. The facility can commission additional plant grow rooms that are already built to double this capacity. In addition to the commercial-scale manufacturing facility, a pilot production facility was designed based on the large-scale manufacturing specifications as a way to integrate product development and technology transfer. The primary research, development and manufacturing system employs vacuum-infiltrated Nicotiana benthamiana plants grown in a fully contained, hydroponic system for transient expression of recombinant proteins. This expression platform has been linked to a downstream process system, analytical characterization, and assessment of biological activity. This integrated approach has demonstrated rapid, high-quality production of therapeutic monoclonal antibody targets, including a panel of rituximab biosimilar/biobetter molecules and antiviral antibodies against influenza and dengue fever. © 2015 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
Targeted carbon conservation at national scales with high-resolution monitoring
Asner, Gregory P.; Knapp, David E.; Martin, Roberta E.; Tupayachi, Raul; Anderson, Christopher B.; Mascaro, Joseph; Sinca, Felipe; Chadwick, K. Dana; Higgins, Mark; Farfan, William; Llactayo, William; Silman, Miles R.
2014-01-01
Terrestrial carbon conservation can provide critical environmental, social, and climate benefits. Yet, the geographically complex mosaic of threats to, and opportunities for, conserving carbon in landscapes remain largely unresolved at national scales. Using a new high-resolution carbon mapping approach applied to Perú, a megadiverse country undergoing rapid land use change, we found that at least 0.8 Pg of aboveground carbon stocks are at imminent risk of emission from land use activities. Map-based information on the natural controls over carbon density, as well as current ecosystem threats and protections, revealed three biogeographically explicit strategies that fully offset forthcoming land-use emissions. High-resolution carbon mapping affords targeted interventions to reduce greenhouse gas emissions in rapidly developing tropical nations. PMID:25385593
Targeted carbon conservation at national scales with high-resolution monitoring.
Asner, Gregory P; Knapp, David E; Martin, Roberta E; Tupayachi, Raul; Anderson, Christopher B; Mascaro, Joseph; Sinca, Felipe; Chadwick, K Dana; Higgins, Mark; Farfan, William; Llactayo, William; Silman, Miles R
2014-11-25
Terrestrial carbon conservation can provide critical environmental, social, and climate benefits. Yet, the geographically complex mosaic of threats to, and opportunities for, conserving carbon in landscapes remain largely unresolved at national scales. Using a new high-resolution carbon mapping approach applied to Perú, a megadiverse country undergoing rapid land use change, we found that at least 0.8 Pg of aboveground carbon stocks are at imminent risk of emission from land use activities. Map-based information on the natural controls over carbon density, as well as current ecosystem threats and protections, revealed three biogeographically explicit strategies that fully offset forthcoming land-use emissions. High-resolution carbon mapping affords targeted interventions to reduce greenhouse gas emissions in rapidly developing tropical nations.
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...
2018-05-14
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Reaching the global target to reduce stunting: an investment framework
Shekar, Meera; D’Alimonte, Mary R; Rogers, Hilary E; Eberwein, Julia Dayton; Akuoku, Jon Kweku; Pereira, Audrey; Soe-Lin, Shan; Hecht, Robert
2017-01-01
Abstract Childhood stunting, being short for one’s age, has life-long consequences for health, human capital and economic growth. Being stunted in early childhood is associated with slower cognitive development, reduced schooling attainment and adult incomes decreased by 5–53%. The World Health Assembly has endorsed global nutrition targets including one to reduce the number of stunted children under five by 40% by 2025. The target has been included in the Sustainable Development Goals (SDG target 2.2). This paper estimates the cost of achieving this target and develops scenarios for generating the necessary financing. We focus on a key intervention package for stunting (KIPS) with strong evidence of effectiveness. Annual scale-up costs for the period of 2016–25 were estimated for a sample of 37 high burden countries and extrapolated to all low and middle income countries. The Lives Saved Tool was used to model the impact of the scale-up on stunting prevalence. We analysed data on KIPS budget allocations and expenditure by governments, donors and households to derive a global baseline financing estimate. We modelled two financing scenarios, a ‘business as usual’, which extends the current trends in domestic and international financing for nutrition through 2025, and another that proposes increases in financing from all sources under a set of burden-sharing rules. The 10-year financial need to scale up KIPS is US$49.5 billion. Under ‘business as usual’, this financial need is not met and the global stunting target is not reached. To reach the target, current financing will have to increase from US$2.6 billion to US$7.4 billion a year on average. Reaching the stunting target is feasible but will require large coordinated investments in KIPS and a supportive enabling environment. The example of HIV scale-up over 2001–11 is instructive in identifying the factors that could drive such a global response to childhood stunting. PMID:28453717
Multiscale sagebrush rangeland habitat modeling in southwest Wyoming
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Coan, Michael J.; Bowen, Zachary H.
2009-01-01
Sagebrush-steppe ecosystems in North America have experienced dramatic elimination and degradation since European settlement. As a result, sagebrush-steppe dependent species have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, would improve the ability to maintain existing sagebrush habitats. However, current data only identify resource availability locally, with rigorous spatial tools and models that accurately model and map sagebrush habitats over large areas still unavailable. Here we report on an effort to produce a rigorous large-area sagebrush-habitat classification and inventory with statistically validated products and estimates of precision in the State of Wyoming. This research employs a combination of significant new tools, including (1) modeling sagebrush rangeland as a series of independent continuous field components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground-measured plot data on 2.4-meter imagery in the same season the satellite imagery is acquired; (3) effective modeling of ground-measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of an additional two spatial scales of imagery (30 meter and 56 meter) for optimal large-area modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution sensors; and (6) employing rigorous accuracy assessment of model predictions to enable users to understand the inherent uncertainties. First-phase results modeled eight rangeland components (four primary targets and four secondary targets) as continuous field predictions. The primary targets included percent bare ground, percent herbaceousness, percent shrub, and percent litter. The four secondary targets included percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and sagebrush height (centimeters). Results were validated by an independent accuracy assessment with root mean square error (RMSE) values ranging from 6.38 percent for bare ground to 2.99 percent for sagebrush at the QuickBird scale and RMSE values ranging from 12.07 percent for bare ground to 6.34 percent for sagebrush at the full Landsat scale. Subsequent project phases are now in progress, with plans to deliver products that improve accuracies of existing components, model new components, complete models over larger areas, track changes over time (from 1988 to 2007), and ultimately model wildlife population trends against these changes. We believe these results offer significant improvement in sagebrush rangeland quantification at multiple scales and offer users products that have been rigorously validated.
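The continuous-field component modeling above can be illustrated with a simple regression-tree workflow: train on plot-level percent-cover measurements against image-derived predictors, predict per pixel, and report RMSE on held-out plots. A minimal sketch using scikit-learn as a stand-in for the study's regression-tree software (the data and variable names are placeholders, not from the study):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: per-plot spectral predictors (bands, ratios, ancillary layers)
# y: ground-measured percent cover for one component, e.g. percent sagebrush
rng = np.random.default_rng(0)
X = rng.random((500, 12))                                   # placeholder predictors
y = np.clip(60 * X[:, 0] + 10 * rng.standard_normal(500), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeRegressor(max_depth=8, min_samples_leaf=10).fit(X_tr, y_tr)

# Independent hold-out accuracy assessment, as in the study's RMSE reporting
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"hold-out RMSE: {rmse:.2f} percent cover")
```

In the study's setup, one such continuous-field model is fitted per component (bare ground, herbaceous, shrub, litter, and the sagebrush targets), which is what lets users combine and customize the components at multiple spatial scales.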
Ruijsbroek, Annemarie; Wong, Albert; Kunst, Anton E; van den Brink, Carolien; van Oers, Hans A M; Droomers, Mariël; Stronks, Karien
2017-01-01
Large-scale regeneration programmes to improve the personal conditions and living circumstances in deprived areas may affect the health and lifestyle of residents. Previous evaluations concluded that a large-scale urban regeneration programme in the Netherlands had some positive effects within 3.5 years. The aim of the current study was to evaluate the effects over the longer run. With a quasi-experimental research design, we assessed changes in the prevalence of general health, mental health, physical activity, overweight, obesity, and smoking between the pre-intervention period (2003-04 to mid-2008) and the intervention period (mid-2008 to 2013-14) in 40 deprived target districts and comparably deprived control districts. We used the Difference-in-Differences (DiD) method to assess programme impact. Additionally, we stratified analyses by sex and by the intensity of the regeneration programme. Changes in health and health-related behaviours from the pre-intervention to the intervention period were about equally large in the target districts as in the control districts. DiD impact estimates were inconsistent and not statistically significant. Sex differences in DiD estimates were not consistent or significant. Furthermore, DiD impact estimates were not consistently larger in target districts with more intensive intervention programmes. We found no evidence that this Dutch urban regeneration programme had an impact in the longer run on self-reported health and related behaviour at the area level.
Utilizing Online Training for Child Sexual Abuse Prevention: Benefits and Limitations
ERIC Educational Resources Information Center
Paranal, Rechelle; Thomas, Kiona Washington; Derrick, Christina
2012-01-01
The prevalence of child sexual abuse demands innovative approaches to prevent further victimization. The online environment provides new opportunities to expand existing child sexual abuse prevention trainings that target adult gatekeepers and allow for large scale interventions that are fiscally viable. This article discusses the benefits and…
PROBLEM OF FORMING IN A MAN-OPERATOR A HABIT OF TRACKING A MOVING TARGET,
Cybernetics stimulated the large-scale use of the method of functional analogy, which makes it possible to compare technical and human activity systems... interesting and highly efficient human activity because of the psychological control factor involved in its operation. The human tracking system is
Monitoring ecosystem restoration at various scales in LAEs can be challenging, frustrating and rewarding. Some of the major ecosystem restoration monitoring occurring in LAEs include: seagrass expansion/contraction; dead zone sizes; oyster reefs; sea turtle nesting; toxic and nu...
"Second Chance": Some Theoretical and Empirical Remarks.
ERIC Educational Resources Information Center
Inbar, Dan E.; Sever, Rita
1986-01-01
Presents a conceptual framework of second-chance systems analyzable in terms of several basic parameters (targeted population, declared goals, processes, options for students, evaluation criteria, and implications for the regular system). Uses this framework to analyze an Israeli external high school, the subject of a large-scale study. Includes 3…
Spatially explicit identification of status and changes in ecological conditions over large, regional areas is key to targeting and prioritizing areas for potential further study and environmental protection and restoration. A critical limitation to this point has been our abili...
ERIC Educational Resources Information Center
Keep, Ewart; Westwood, Andy
The United Kingdom management population is a large and moving target. A growing number of individuals describe themselves as managers, and the widely held view is that there will be many, many more. Figures suggest that the scale of the potential market and the need for management education and training development (METD) are considerable. Levels of qualifications…
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has bee...
How to spend a dwindling greenhouse gas budget
NASA Astrophysics Data System (ADS)
Obersteiner, Michael; Bednar, Johannes; Wagner, Fabian; Gasser, Thomas; Ciais, Philippe; Forsell, Nicklas; Frank, Stefan; Havlik, Petr; Valin, Hugo; Janssens, Ivan A.; Peñuelas, Josep; Schmidt-Traub, Guido
2018-01-01
The Paris Agreement is based on emission scenarios that move from a sluggish phase-out of fossil fuels to large-scale late-century negative emissions. Alternative pathways of early deployment of negative emission technologies need to be considered to ensure that climate targets are reached safely and sustainably.
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the d...
Method for replicating an array of nucleic acid probes
Cantor, Charles R.; Przetakiewicz, Marek; Smith, Cassandra L.; Sano, Takeshi
1998-01-01
The invention relates to the replication of probe arrays and methods for replicating arrays of probes which are useful for the large scale manufacture of diagnostic aids used to screen biological samples for specific target sequences. Arrays created using PCR technology may comprise probes with 5'- and/or 3'-overhangs.
2011-02-01
• Thrombocytopenia, Grade 3 in 1 patient • Hypomagnesemia, Grade 3 in 1 patient • Hypokalemia, Grade 3 in 2 patients • Pneumonia, Grade 3 in 7 patients... urgently needed. While the molecular events involved in lung cancer pathogenesis are being unraveled by ongoing large scale genomics, proteomics, and... tumor initiation, progression and metastasis are an important first step leading to the development of new prognostic markers and targets for therapy
High Efficiency Solar Thermochemical Reactor for Hydrogen Production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Anthony H.
2017-09-30
This research and development project is focused on the advancement of a technology that produces hydrogen at a cost that is competitive with fossil-based fuels for transportation. A two-step, solar-driven water-splitting (WS) thermochemical cycle is theoretically capable of achieving a solar-to-hydrogen (STH) conversion ratio that exceeds the DOE target of 26% at a scale large enough to support an industrialized economy [1]. The challenge is to transition this technology from the laboratory to the marketplace and produce hydrogen at a cost that meets or exceeds DOE targets.
NASA Astrophysics Data System (ADS)
Wang, S.; Sobel, A. H.; Nie, J.
2015-12-01
Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to accurate simulation and prediction of it in models.
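For reference, the conventional WTG approximation used above is commonly written as a diagnostic balance in which large-scale vertical motion relaxes the simulated temperature profile toward the target profile. A schematic form following the standard WTG literature (a sketch of the general idea, not necessarily the exact formulation of this study):

```latex
% Weak temperature gradient (WTG) diagnostic for large-scale vertical motion:
% adiabatic cooling by w_WTG relaxes the horizontal-mean potential temperature
% \bar{\theta} toward the target profile \theta_target over a time scale \tau.
w_{\mathrm{WTG}}\,\frac{\partial \bar{\theta}}{\partial z}
  = \frac{\bar{\theta} - \theta_{\mathrm{target}}}{\tau}
```

The diagnosed vertical velocity then drives large-scale vertical advection of temperature and moisture in the cloud-resolving model, closing the loop between simulated convection and the parameterized large-scale circulation.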
NASA Astrophysics Data System (ADS)
Roverso, Davide
2003-08-01
Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.
Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra
2016-12-01
This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.
Evolution of scaling emergence in large-scale spatial epidemic spreading.
Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan
2011-01-01
Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
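Both scalings are easy to measure on any event stream such as a sequence of infected locations: Zipf's law is the power-law decay of frequency with rank, and Heaps' law is the sublinear growth of the number of distinct locations with the number of cases. A minimal sketch on synthetic data (the power-law generator below is illustrative, not the paper's metapopulation model):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stream of "infected location" IDs with a heavy-tailed popularity.
n_locations, n_events = 2000, 100_000
weights = 1.0 / np.arange(1, n_locations + 1) ** 1.2
events = rng.choice(n_locations, size=n_events, p=weights / weights.sum())

# Zipf: log-log slope of frequency vs. rank approximates the Zipf exponent.
freq = np.sort(np.bincount(events))[::-1]
freq = freq[freq > 0]
zipf_slope = np.polyfit(np.log(np.arange(1, len(freq) + 1)), np.log(freq), 1)[0]

# Heaps: distinct locations seen vs. number of events processed.
seen, distinct = set(), []
for e in events:
    seen.add(int(e))
    distinct.append(len(seen))
heaps_slope = np.polyfit(np.log(np.arange(1, n_events + 1)), np.log(distinct), 1)[0]

print(f"Zipf exponent ~ {-zipf_slope:.2f}, Heaps exponent ~ {heaps_slope:.2f}")
```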
Wang, Nanyi; Wang, Lirong; Xie, Xiang-Qun
2017-11-27
Molecular docking is widely applied to computer-aided drug design and has become relatively mature in the recent decades. Application of docking in modeling varies from single lead compound optimization to large-scale virtual screening. The performance of molecular docking is highly dependent on the protein structures selected. It is especially challenging for large-scale target prediction research when multiple structures are available for a single target. Therefore, we have established ProSelection, a docking preferred-protein selection algorithm, in order to generate the proper structure subset(s). By the ProSelection algorithm, protein structures of "weak selectors" are filtered out whereas structures of "strong selectors" are kept. Specifically, the structure which has a good statistical performance of distinguishing active ligands from inactive ligands is defined as a strong selector. In this study, 249 protein structures of 14 autophagy-related targets are investigated. Surflex-dock was used as the docking engine to distinguish active and inactive compounds against these protein structures. Both t test and Mann-Whitney U test were used to distinguish the strong from the weak selectors based on the normality of the docking score distribution. The suggested docking score threshold for active ligands (SDA) was generated for each strong selector structure according to the receiver operating characteristic (ROC) curve. The performance of ProSelection was further validated by predicting the potential off-targets of 43 U.S. Federal Drug Administration approved small molecule antineoplastic drugs. Overall, ProSelection will accelerate the computational work in protein structure selection and could be a useful tool for molecular docking, target prediction, and protein-chemical database establishment research.
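The selector logic described above reduces to a per-structure hypothesis test: dock known actives and inactives against each structure, test whether the two score distributions separate, and keep structures that pass. A minimal sketch with SciPy, in which the normality check and thresholds are simplified relative to the published workflow:

```python
import numpy as np
from scipy import stats

def classify_selector(active_scores, inactive_scores, alpha=0.05):
    """Label one protein structure a 'strong' or 'weak' selector from docking scores."""
    # Use the t test when both score distributions look normal, else Mann-Whitney U.
    normal = (stats.shapiro(active_scores).pvalue > alpha and
              stats.shapiro(inactive_scores).pvalue > alpha)
    if normal:
        p = stats.ttest_ind(active_scores, inactive_scores, equal_var=False).pvalue
    else:
        p = stats.mannwhitneyu(active_scores, inactive_scores,
                               alternative="two-sided").pvalue
    # Strong selector: actives score significantly better (here, higher) than inactives.
    better = np.mean(active_scores) > np.mean(inactive_scores)
    return "strong" if (p < alpha and better) else "weak"
```

For the retained strong selectors, a score cutoff analogous to the paper's SDA would then be read off a ROC curve of actives versus inactives for that structure.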
Coherent photon scattering background in sub- GeV / c 2 direct dark matter searches
Robinson, Alan E.
2017-01-18
Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.
Research on quantitative relationship between NIIRS and the probabilities of discrimination
NASA Astrophysics Data System (ADS)
Bai, Honggang
2011-08-01
There are a large number of electro-optical (EO) and infrared (IR) sensors used on military platforms, including ground vehicle, low-altitude air vehicle, high-altitude air vehicle, and satellite systems. Ground vehicle and low-altitude air vehicle (rotary and fixed-wing aircraft) sensors typically use the probabilities of discrimination (detection, recognition, and identification) as design requirements and system performance indicators. High-altitude air vehicles and satellite sensors have traditionally used the National Imagery Interpretation Rating Scale (NIIRS) performance measures for guidance in design and measures of system performance. Recently, there has been a large effort to make strategic sensor information available to tactical forces and to make target acquisition information usable by strategic systems. In this paper, the two techniques for sensor design, the probabilities of discrimination and NIIRS, are presented separately. For typical infrared remote sensor design parameters, the probability of recognition and the NIIRS scale are given as functions of the distance R for two targets of different size, the Standard NATO Target and the M1 Abrams, based on algorithms for predicting field performance and NIIRS. For four targets of different size (the Standard NATO Target, M1 Abrams, F-15, and B-52), the conversion from NIIRS to the probabilities of discrimination is derived and calculated, and the similarities and differences between NIIRS and the probabilities of discrimination are analyzed based on the results of the calculation. Comparisons with preliminary calculation results show that the conversion between NIIRS and the probabilities of discrimination is feasible, although more validation experiments are needed.
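One common bridge between the two frameworks is the target transfer probability function used with the Johnson criteria, which maps resolvable cycles across a target to a discrimination probability; NIIRS-predicted resolution can then be converted into cycles on target. A minimal sketch of that standard function (the N50 value in the example is a commonly quoted placeholder, not taken from this paper):

```python
def discrimination_probability(n_cycles, n50):
    """Target transfer probability function (Johnson criteria form).

    n_cycles: resolvable cycles across the target's critical dimension.
    n50: cycles required for a 50% probability of the given task
         (detection, recognition, and identification need increasing n50).
    """
    e = 2.7 + 0.7 * (n_cycles / n50)
    r = (n_cycles / n50) ** e
    return r / (1.0 + r)

# Example: recognition with an assumed n50 of 4 cycles
for n in (2, 4, 8):
    print(n, f"{discrimination_probability(n, 4):.2f}")
```

By construction the function returns 0.5 when the available cycles equal N50, which is what makes it a convenient anchor for converting between resolution-based and probability-based performance measures.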
Multi-Conformer Ensemble Docking to Difficult Protein Targets
Ellingson, Sally R.; Miao, Yinglong; Baudry, Jerome; ...
2014-09-08
We investigate large-scale ensemble docking using five proteins from the Directory of Useful Decoys (DUD, dud.docking.org) for which docking to crystal structures has proven difficult. Molecular dynamics trajectories are produced for each protein and an ensemble of representative conformational structures extracted from the trajectories. Docking calculations are performed on these selected simulation structures and ensemble-based enrichment factors compared with those obtained using docking in crystal structures of the same protein targets or random selection of compounds. We also found simulation-derived snapshots with improved enrichment factors that increased the chemical diversity of docking hits for four of the five selected proteins. A combination of all the docking results obtained from molecular dynamics simulation followed by selection of top-ranking compounds appears to be an effective strategy for increasing the number and diversity of hits when using docking to screen large libraries of chemicals against difficult protein targets.
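The enrichment factor used to compare protocols above has a simple definition: the fraction of true actives found in the top-scoring x% of the ranked library, divided by the fraction expected from random selection. A minimal sketch, assuming higher docking scores are better (the synthetic data is purely illustrative):

```python
import numpy as np

def enrichment_factor(scores, is_active, top_frac=0.01):
    """EF = (actives in top x% / compounds in top x%) / (actives / total)."""
    order = np.argsort(scores)[::-1]             # best scores first
    n_top = max(1, int(len(scores) * top_frac))
    top_hits = np.asarray(is_active)[order[:n_top]].sum()
    overall_rate = np.sum(is_active) / len(scores)
    return (top_hits / n_top) / overall_rate

# Toy library: 10,000 compounds, actives loosely correlated with score
scores = np.random.default_rng(0).random(10_000)
labels = scores + 0.1 * np.random.default_rng(1).random(10_000) > 0.95
print(f"EF(1%) = {enrichment_factor(scores, labels):.1f}")
```

An ensemble EF is then obtained by merging the per-snapshot rankings (for example, taking each compound's best score across snapshots) before applying the same calculation.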
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are identified, there are still some false alarms, such as microwaves and small ribbon clouds; thus simple shape and texture analysis is adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
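The chip-level classification stage can be sketched with off-the-shelf components: compute a local-binary-pattern histogram per chip and feed it to an SVM. A minimal sketch using scikit-image and scikit-learn as stand-ins for the paper's CVLBP pipeline (the LBP parameters and SVM settings are illustrative assumptions):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(chip, P=8, R=1.0):
    """Uniform-LBP histogram feature for one grayscale image chip."""
    lbp = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_ship_classifier(chips, labels):
    """chips: list of 2-D grayscale arrays; labels: 1 = contains ship, 0 = background."""
    X = np.array([lbp_histogram(c) for c in chips])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)

def contains_ship(model, chip):
    return bool(model.predict(lbp_histogram(chip)[None, :])[0])
```

In the paper's full pipeline this classifier runs only on the salient chips pre-selected by the visual attention stage, which is what keeps large-scene processing fast.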
Wicki, Andreas; Ritschard, Reto; Loesch, Uli; Deuster, Stefanie; Rochlitz, Christoph; Mamot, Christoph
2015-04-30
We describe the large-scale, GMP-compliant production process of doxorubicin-loaded and anti-EGFR-coated immunoliposomes (anti-EGFR-ILs-dox) used in a first-in-man, dose escalation clinical trial. 10 batches of this nanoparticle have been produced in clean room facilities. Stability data from the pre-GMP and the GMP batch indicate that the anti-EGFR-ILs-dox nanoparticle was stable for at least 18 months after release. Release criteria included visual inspection, sterility testing, as well as measurements of pH (pH 5.0-7.0), doxorubicin HCl concentration (0.45-0.55 mg/ml), endotoxin concentration (<1.21 IU/ml), leakage (<10%), particle size (Z-average of Caelyx ± 20 nm), and particle uptake (uptake absolute: >0.50 ng doxorubicin/μg protein; uptake relatively to PLD: >5 fold). All batches fulfilled the defined release criteria, indicating a high reproducibility as well as batch-to-batch uniformity of the main physico-chemical features of the nanoparticles in the setting of the large-scale GMP process. In the clinical trial, 29 patients were treated with this nanoparticle between 2007 and 2010. Pharmacokinetic data of anti-EGFR-ILs-dox collected during the clinical study revealed stability of the nanocarrier in vivo. Thus, reliable and GMP-compliant production of anti-EGFR-targeted nanoparticles for clinical application is feasible. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhu, Zhou; Ihle, Nathan T; Rejto, Paul A; Zarrinkar, Patrick P
2016-06-13
Genome-scale functional genomic screens across large cell line panels provide a rich resource for discovering tumor vulnerabilities that can lead to the next generation of targeted therapies. Their data analysis has typically focused on identifying genes whose knockdown enhances response in various pre-defined genetic contexts, which are limited by biological complexities as well as the incompleteness of our knowledge. We thus introduce a complementary data mining strategy to identify genes with exceptional sensitivity in subsets, or outlier groups, of cell lines, allowing an unbiased analysis without any a priori assumption about the underlying biology of dependency. Genes with outlier features are strongly and specifically enriched with those known to be associated with cancer and relevant biological processes, despite no a priori knowledge being used to drive the analysis. Identification of exceptional responders (outliers) may lead not only to new candidates for therapeutic intervention, but also to tumor indications and response biomarkers for companion precision medicine strategies. Several tumor suppressors have an outlier sensitivity pattern, supporting and generalizing the notion that tumor suppressors can play context-dependent oncogenic roles. The novel application of outlier analysis described here demonstrates a systematic and data-driven analytical strategy to decipher large-scale functional genomic data for oncology target and precision medicine discoveries.
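The outlier-group idea can be sketched as a per-gene scan over a sensitivity matrix: flag genes whose knockdown effect in a small subset of cell lines sits far outside the bulk of the distribution, using robust statistics so the outliers themselves do not inflate the estimated spread. A minimal sketch with illustrative cutoffs (not those used in the study):

```python
import numpy as np

def outlier_lines(sensitivity, z_cut=4.0, max_frac=0.1):
    """Return indices of cell lines with exceptional sensitivity for one gene.

    sensitivity: 1-D array of knockdown effects across cell lines
                 (more negative = stronger dependency).
    """
    med = np.median(sensitivity)
    mad = np.median(np.abs(sensitivity - med)) + 1e-12
    robust_z = 0.6745 * (sensitivity - med) / mad   # ~N(0,1) under normality
    idx = np.where(robust_z < -z_cut)[0]            # exceptionally sensitive lines
    # Only call it an outlier group if it is a small subset of the panel.
    return idx if 0 < len(idx) <= max_frac * len(sensitivity) else np.array([], int)
```

Running such a scan over every gene requires no genetic-context labels at all, which is the sense in which the analysis is unbiased.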
PIRATE: pediatric imaging response assessment and targeting environment
NASA Astrophysics Data System (ADS)
Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho
2010-02-01
By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
Fong, Baley A; Wood, David W
2010-10-19
Elastin-like polypeptides (ELPs) are useful tools that can be used to non-chromatographically purify proteins. When paired with self-cleaving inteins, they can be used as economical self-cleaving purification tags. However, ELPs and ELP-tagged target proteins have traditionally been expressed using highly enriched media in shake flask cultures, which are generally not amenable to scale-up. In this work, we describe the high cell-density expression of self-cleaving ELP-tagged targets in a supplemented minimal medium at a 2.5 liter fermentation scale, with increased yields and purity compared to traditional shake flask cultures. This demonstration of ELP expression in supplemented minimal media is juxtaposed to previous expression of ELP tags in extract-based rich media. We also describe several sets of fed-batch conditions and their impact on ELP expression and growth medium cost. By using fed-batch E. coli fermentation at high cell density, ELP-intein-tagged proteins can be expressed and purified at high yield with low cost. Further, the choice of media components and fermentation design can significantly impact the overall process cost, particularly at large scale. This work thus demonstrates an important advance in the scale-up of self-cleaving ELP tag-mediated processes.
2010-01-01
Background Elastin-like polypeptides (ELPs) are useful tools that can be used to non-chromatographically purify proteins. When paired with self-cleaving inteins, they can be used as economical self-cleaving purification tags. However, ELPs and ELP-tagged target proteins have traditionally been expressed using highly enriched media in shake flask cultures, which are generally not amenable to scale-up. Results In this work, we describe the high cell-density expression of self-cleaving ELP-tagged targets in a supplemented minimal medium at a 2.5 liter fermentation scale, with increased yields and purity compared to traditional shake flask cultures. This demonstration of ELP expression in supplemented minimal media is juxtaposed to previous expression of ELP tags in extract-based rich media. We also describe several sets of fed-batch conditions and their impact on ELP expression and growth medium cost. Conclusions By using fed-batch E. coli fermentation at high cell density, ELP-intein-tagged proteins can be expressed and purified at high yield with low cost. Further, the choice of media components and fermentation design can significantly impact the overall process cost, particularly at large scale. This work thus demonstrates an important advance in the scale-up of self-cleaving ELP tag-mediated processes. PMID:20959011
NASA Astrophysics Data System (ADS)
Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy
2017-10-01
The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (≥50% of energy in a 150 µm spot). This allows ARC to achieve proton and light ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is not yet known how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc., through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.
Echo scintillation index affected by cat-eye target's caliber with Cassegrain lens
NASA Astrophysics Data System (ADS)
Shan, Cong-miao; Sun, Hua-yan; Zhao, Yan-zhong; Zheng, Yong-hui
2015-10-01
The optical aperture of a cat-eye target has an aperture-averaging effect on the active detection laser of an active laser detection system, which can be used to identify optical targets. The echo scintillation characteristics of transmission-type lens targets have been studied in previous work; discussing how the echo scintillation characteristics of a Cassegrain lens target differ from those of a transmission-type lens target can be helpful for target classification. In this paper, the echo scintillation characteristics of a cat-eye target with a Cassegrain lens are discussed. Using the scintillation theory of spherical waves in weak atmospheric turbulence, the annular aperture filter function, and the Kolmogorov power spectrum, an analytic expression is given for the scintillation index of the cat-eye target echo after two-way transmission along a horizontal path at normal incidence. The impact of the turbulence inner and outer scales on the echo scintillation index, and the analytic expression of the echo scintillation index at the receiving aperture, are then presented using the modified Hill spectrum and the modified von Karman spectrum. The echo scintillation index tends to decrease as the target aperture increases, and different ratios of inner to outer aperture diameter produce different echo scintillation index curves. This conclusion has a certain significance for target recognition in active laser detection systems, which can largely determine the target type from the echo scintillation index of the cat-eye target.
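For orientation, derivations of this kind start from the standard weak-turbulence Rytov scintillation indices; the paper's two-way, annular-aperture expressions build on these. A minimal sketch of the textbook plane- and spherical-wave forms for a Kolmogorov spectrum and a point receiver (not the paper's final annular-aperture result):

```python
import numpy as np

def rytov_variance(cn2, wavelength, path_length, spherical=True):
    """Weak-turbulence scintillation index for a point receiver.

    Plane wave:     sigma_R^2 = 1.23 Cn^2 k^(7/6) L^(11/6)
    Spherical wave: beta_0^2  = 0.50 Cn^2 k^(7/6) L^(11/6)
    """
    k = 2 * np.pi / wavelength
    coeff = 0.50 if spherical else 1.23
    return coeff * cn2 * k ** (7 / 6) * path_length ** (11 / 6)

# Example: moderate turbulence (Cn^2 = 1e-14 m^(-2/3)), 1.06 um laser, 2 km path
print(f"{rytov_variance(1e-14, 1.06e-6, 2000):.3f}")
```

Aperture averaging then reduces the point-receiver value, which is exactly the effect that lets the echo scintillation index discriminate cat-eye targets of different caliber.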
Galaxies and large scale structure at high redshifts
Steidel, Charles C.
1998-01-01
It is now straightforward to assemble large samples of very high redshift (z ∼ 3) field galaxies selected by their pronounced spectral discontinuity at the rest frame Lyman limit of hydrogen (at 912 Å). This makes possible both statistical analyses of the properties of the galaxies and the first direct glimpse of the progression of the growth of their large-scale distribution at such an early epoch. Here I present a summary of the progress made in these areas to date and some preliminary results of and future plans for a targeted redshift survey at z = 2.7–3.4. Also discussed is how the same discovery method may be used to obtain a “census” of star formation in the high redshift Universe, and the current implications for the history of galaxy formation as a function of cosmic epoch. PMID:9419319
What are the low-Q and large-x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur in kinematic regions where the hard scale, Q ~ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable x_bj is large, x_bj ≳ 0.5, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-Q and large-x_bj physics.
Honda, Michitaka; Wakita, Takafumi; Onishi, Yoshihiro; Nunobe, Souya; Miura, Akinori; Nishigori, Tatsuto; Kusanagi, Hiroshi; Yamamoto, Takatsugu; Boddy, Alexander; Fukuhara, Shunichi
2015-12-01
Patients who have undergone esophagectomy or gastrectomy have certain dietary limitations because of changes to the alimentary tract. This study attempted to develop a psychometric scale, named "Esophago-Gastric surgery and Quality of Dietary life (EGQ-D)," for assessing the impact of upper gastrointestinal surgery on diet-targeted quality of life. Using qualitative methods, the study team interviewed both patients and surgeons involved in esophagogastric cancer surgery and prepared an item pool and a draft scale. To evaluate the scale's psychometric reliability and validity, a survey involving a large number of patients was conducted. Items for the final scale were selected by factor analysis and item response theory. Cronbach's alpha was used for assessment of reliability, and correlations with the short form (SF)-12, the esophagus and stomach surgery symptom scale (ES(4)), and nutritional indicators were analyzed to assess criterion-related validity. Through multifaceted discussion and the pilot study, a draft questionnaire comprising 14 items was prepared, and a total of 316 patients were enrolled. On the basis of factor analysis and item response theory, six items were excluded, and the remaining eight items demonstrated strong unidimensionality for the final scale. Cronbach's alpha was 0.895. There were significant associations with all the subscale scores for the SF-12, ES(4), and nutritional indicators. The EGQ-D scale has good content and psychometric validity and can be used as a disease-specific instrument to measure diet-targeted quality of life for postoperative patients with esophagogastric cancer.
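The reliability figure quoted above is Cronbach's alpha, computed from the item-level variances and the variance of the total score. A minimal sketch of the standard formula applied to an items-by-respondents matrix (the data below is randomly generated for illustration, not the study's):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, shape (n_respondents, n_items)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 300 respondents, 8 correlated 5-point Likert items (placeholder data)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(300, 1))
items = np.clip(base + rng.integers(-1, 2, size=(300, 8)), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly 0.8 are conventionally read as good internal consistency, so the reported 0.895 for the eight retained items indicates a reliable scale.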
The curious case of large-N expansions on a (pseudo)sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyakov, Alexander M.; Saleem, Zain H.; Stokes, James
We elucidate the large-N dynamics of one-dimensional sigma models with spherical and hyperbolic target spaces and find a duality between the Lagrange multiplier and the angular momentum. In the hyperbolic model we propose a new class of operators based on the irreducible representations of hyperbolic space. We also uncover unexpected zero modes which lead to the double scaling of the 1/N expansion and explore these modes using Gelfand-Dikiy equations.
The curious case of large-N expansions on a (pseudo)sphere
Polyakov, Alexander M.; Saleem, Zain H.; Stokes, James
2015-02-03
We elucidate the large-N dynamics of one-dimensional sigma models with spherical and hyperbolic target spaces and find a duality between the Lagrange multiplier and the angular momentum. In the hyperbolic model we propose a new class of operators based on the irreducible representations of hyperbolic space. We also uncover unexpected zero modes which lead to the double scaling of the 1/N expansion and explore these modes using Gelfand-Dikiy equations.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
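Because the per-sensor maps are likelihoods over a common spatial grid, fusion under a conditional-independence assumption reduces to summing log-likelihood maps, optionally adding a log-prior derived from maps or imagery. A minimal sketch of that combination step (grid layout and sensor models are placeholder assumptions):

```python
import numpy as np

def fuse_likelihood_maps(sensor_maps, log_prior=None):
    """Combine per-sensor likelihood maps defined on a common spatial grid.

    sensor_maps: list of 2-D arrays of p(measurements | target at cell),
                 treated as conditionally independent across sensors.
    log_prior:   optional 2-D array of log p(target at cell), e.g. from
                 GIS layers or stand-off imagery.
    """
    maps = [np.asarray(m, dtype=float) for m in sensor_maps]
    log_post = np.zeros_like(maps[0]) if log_prior is None else log_prior.astype(float).copy()
    for m in maps:
        log_post += np.log(m + 1e-300)      # avoid log(0)
    # Normalize to a posterior surface over the grid.
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()
```

In the distributed setting the same sum can be obtained by gossip averaging of the log-maps across nodes, which is consistent with the gossip-based computation the paper discusses.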
Katz, Itamar; Komatsu, Ryuichi; Low-Beer, Daniel; Atun, Rifat
2011-02-23
The paper projects the contribution of programs in 140 countries funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria, the largest external financier of tuberculosis and malaria programs and a major external funder of HIV programs in low and middle income countries, to the 2011-2015 international targets for the three major pandemics. Using past trends, we estimated for the period 2011-2015 the number of persons receiving antiretroviral (ARV) treatment, tuberculosis case detection using the internationally approved DOTS strategy, and insecticide-treated nets (ITNs) to be delivered by Global Fund-supported programs in low and middle income countries, and compared these estimates with international targets established by UNAIDS, the Stop TB Partnership, the Roll Back Malaria Partnership and the World Health Organisation. Global Fund-supported programs are projected to provide ARV treatment to 5.5-5.8 million people, 30%-31% of the 2015 international target. Investments in tuberculosis and malaria control will enable reaching, in 2015, 60%-63% of the international target for tuberculosis case detection and 30%-35% of the ITN distribution target in sub-Saharan Africa. Global Fund investments will thus substantially contribute to the achievement by 2015 of international targets for HIV, TB and malaria. However, additional large-scale international and domestic financing is needed if these targets are to be reached by 2015.
Bioinformatics by Example: From Sequence to Target
NASA Astrophysics Data System (ADS)
Kossida, Sophia; Tahri, Nadia; Daizadeh, Iraj
2002-12-01
With the completion of the human genome, and the imminent completion of other large-scale sequencing and structure-determination projects, computer-assisted bioscience is poised to become the new paradigm for conducting basic and applied research. The arrival of these additional bioinformatics tools stirs great anxiety among experimental researchers (as well as pedagogues), who are now faced with the need for a wider and deeper knowledge of differing disciplines (biology, chemistry, physics, mathematics, and computer science). This review targets those individuals who are interested in using computational methods in their teaching or research. By working through a real-life, pharmaceutical, multicomponent, target-based example, the reader will experience this fascinating new discipline.
Yan, Ruiting; Ghilane, Jalal; Phuah, Kia Chai; Pham Truong, Thuan Nguyen; Adams, Stefan; Randriamahazaka, Hyacinthe; Wang, Qing
2018-02-01
The redox targeting reaction of Li+-storage materials with redox mediators is the key process in redox flow lithium batteries, a promising technology for next-generation large-scale energy storage. The kinetics of the Li+-coupled heterogeneous charge transfer between the energy storage material and the redox mediator dictates the performance of the device, yet, as a new type of charge transfer process, it has rarely been studied. Here, scanning electrochemical microscopy (SECM) was employed for the first time to determine the interfacial charge transfer kinetics of LiFePO4/FePO4 upon delithiation and lithiation by a pair of redox shuttle molecules, FcBr2+ and Fc. The effective rate constant k_eff was determined to be around 3.70-6.57 × 10^-3 cm/s for the two-way pseudo-first-order reactions, which feature a linear dependence on the composition of LiFePO4, validating a kinetic process of interfacial charge transfer rather than bulk solid diffusion. In addition, in conjunction with chronoamperometry measurements, the SECM study disproves the conventional "shrinking-core" model for the delithiation of LiFePO4 and presents an intriguing way of probing the phase boundary propagation induced by interfacial redox reactions. This study demonstrates a reliable method for measuring the kinetics of redox targeting reactions, and the results provide useful guidance for the optimization of redox targeting systems for large-scale energy storage.
The Australian Longitudinal Study on Women's Health: Using Focus Groups to Inform Recruitment
2016-01-01
Background Recruitment and retention of participants to large-scale, longitudinal studies can be a challenge, particularly when trying to target young women. Qualitative inquiries with members of the target population can prove valuable in assisting with the development of effective recruiting techniques. Researchers in the current study made use of focus group methodology to identify how to encourage young women aged 18-23 to participate in a national cohort online survey. Objective Our objectives were to gain insight into how to encourage young women to participate in a large-scale, longitudinal health survey, as well as to evaluate the survey instrument and mode of administration. Methods The Australian Longitudinal Study on Women’s Health used focus group methodology to learn how to encourage young women to participate in a large-scale, longitudinal Web-based health survey and to evaluate the survey instrument and mode of administration. Nineteen groups, involving 75 women aged 18-23 years, were held in remote, regional, and urban areas of New South Wales and Queensland. Results Focus groups were held in 2 stages, with discussions lasting from 19 minutes to over 1 hour. The focus groups allowed concord to be reached regarding survey promotion using social media, why personal information was needed, strategies to ensure confidentiality, how best to ask sensitive questions, and survey design for ease of completion. Recruitment into the focus groups proved difficult: the groups varied in size between 1 and 8 participants, with the majority conducted with 2 participants. Conclusions Intense recruitment efforts and variation in final focus group numbers highlight the “hard to reach” character of young women. However, the benefits of conducting focus group discussions as a preparatory stage to the recruitment of a large cohort for a longitudinal Web-based health survey were upheld. PMID:26902160
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
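The claim that hub-targeted strategies outperform untargeted ones on such networks can be illustrated with a toy experiment. The sketch below uses a synthetic Barabási-Albert graph as a stand-in for the scale-free locations network and compares the residual giant component after immunizing hubs versus random nodes; the graph and parameters are illustrative assumptions, not the paper's simulation:

```python
import random
import networkx as nx

def giant_component_after_removal(G, removed):
    """Size of the largest connected component once the 'removed' nodes
    (immunized or monitored) are taken out of the contact graph."""
    H = G.copy()
    H.remove_nodes_from(removed)
    return len(max(nx.connected_components(H), key=len)) if len(H) else 0

G = nx.barabasi_albert_graph(10_000, m=3, seed=1)  # toy scale-free network
k = 200                                            # immunization budget

hubs = [n for n, _ in sorted(G.degree, key=lambda d: d[1], reverse=True)[:k]]
randoms = random.Random(1).sample(list(G), k)

print("giant component, hub-targeted:", giant_component_after_removal(G, hubs))
print("giant component, random      :", giant_component_after_removal(G, randoms))
```

Targeted removal of the highest-degree nodes shrinks the connected structure far more than the same budget spent at random, which is the intuition behind both hub-based sensor placement and targeted vaccination.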
Scaling earthquake ground motions for performance-based assessment of buildings
Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.
2011-01-01
The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
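Method (3) admits a one-line illustration: each record is multiplied by the ratio of the target spectral acceleration at the first-mode period T1 to the record's own spectral acceleration at T1. A minimal sketch, with an assumed response spectrum and illustrative numbers:

```python
import numpy as np

def first_mode_scale_factor(periods, sa_record, t1, sa_target_t1):
    """Scale factor that makes the record match the target spectral
    acceleration at the structure's first-mode period T1."""
    sa_record_t1 = np.interp(t1, periods, sa_record)
    return sa_target_t1 / sa_record_t1

periods = np.array([0.1, 0.5, 1.0, 2.0])        # s
sa_record = np.array([0.90, 0.60, 0.35, 0.15])  # g, record's 5%-damped spectrum
factor = first_mode_scale_factor(periods, sa_record, t1=1.0, sa_target_t1=0.50)
print(f"amplitude factor applied to the whole accelerogram: {factor:.2f}")
```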
NASA Astrophysics Data System (ADS)
Eftekharzadeh, S.; Myers, A. D.; Hennawi, J. F.; Djorgovski, S. G.; Richards, G. T.; Mahabal, A. A.; Graham, M. J.
2017-06-01
We present the most precise estimate to date of the clustering of quasars on very small scales, based on a sample of 47 binary quasars with magnitudes of g < 20.85 and proper transverse separations of ~25 h^-1 kpc. Our sample of binary quasars, which is about six times larger than any previous spectroscopically confirmed sample on these scales, is targeted using a kernel density estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is 'complete' in that all of the KDE target pairs with 17.0 ≲ R ≲ 36.2 h^-1 kpc in our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We catalogue 230 candidate quasar pairs with angular separations of <8 arcsec, from which our binary quasars were identified. We determine the projected correlation function of quasars (W̄_p) in four bins of proper transverse scale over the range 17.0 ≲ R ≲ 36.2 h^-1 kpc. The implied small-scale quasar clustering amplitude from the projected correlation function, integrated across our entire redshift range, is A = 24.1 ± 3.6 at ~26.6 h^-1 kpc. Our sample is the first spectroscopically confirmed sample of quasar pairs that is sufficiently large to study how quasar clustering evolves with redshift at ~25 h^-1 kpc. We find that empirical descriptions of how quasar clustering evolves with redshift at ~25 h^-1 Mpc also adequately describe the evolution of quasar clustering at ~25 h^-1 kpc.
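A hedged sketch of the pair-count logic behind such a measurement: bin the confirmed binaries by proper transverse separation and compare with the counts expected from an unclustered (random) distribution. The simple natural estimator and the stand-in inputs below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def projected_wp(pair_separations, expected_random_counts, bins):
    """W_p per bin from data pair counts DD against the random
    expectation RR, via the natural estimator W_p = DD/RR - 1."""
    dd, _ = np.histogram(pair_separations, bins=bins)
    return dd / np.asarray(expected_random_counts, dtype=float) - 1.0

bins = np.array([17.0, 22.0, 27.0, 32.0, 36.2])          # proper h^-1 kpc
seps = np.random.default_rng(0).uniform(17.0, 36.2, 47)  # stand-in for 47 binaries
rr = np.array([0.002, 0.003, 0.004, 0.004])              # assumed random-pair counts
print(projected_wp(seps, rr, bins))
```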
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged both by the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions and by the difficulty of integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed distributed interactive and data intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures
Method for replicating an array of nucleic acid probes
Cantor, C.R.; Przetakiewicz, M.; Smith, C.L.; Sano, T.
1998-08-18
The invention relates to the replication of probe arrays and methods for replicating arrays of probes which are useful for the large scale manufacture of diagnostic aids used to screen biological samples for specific target sequences. Arrays created using PCR technology may comprise probes with 5'- and/or 3'-overhangs. 16 figs.
Worlding through Play: Alternate Reality Games, Large-Scale Learning, and "The Source"
ERIC Educational Resources Information Center
Jagoda, Patrick; Gilliam, Melissa; McDonald, Peter; Russell, Christopher
2015-01-01
Gamification--the use of game mechanics in conventionally nongame activities--has received attention in the field of education. Games, however, are not reducible to the common mechanisms of gamification that target extrinsic motivation, and may also include elements such as role playing, world making, and collective storytelling. Here, the authors…
ERIC Educational Resources Information Center
Roschelle, Jeremy; Shechtman, Nicole; Tatar, Deborah; Hegedus, Stephen; Hopkins, Bill; Empson, Susan; Knudsen, Jennifer; Gallagher, Lawrence P.
2010-01-01
The authors present three studies (two randomized controlled experiments and one embedded quasi-experiment) designed to evaluate the impact of replacement units targeting student learning of advanced middle school mathematics. The studies evaluated the SimCalc approach, which integrates an interactive representational technology, paper curriculum,…
DOT National Transportation Integrated Search
2009-02-25
The combination of current and planned 2007 U.S. ethanol production capacity is 50 billion L/yr, one-third of the Energy Independence and Security Act of 2007 (EISA) target of 136 billion L of biofuels by 2022. In this study, we evaluate transportati...
Surface fuel loadings within mulching treatments in Colorado coniferous forests
Mike A. Battaglia; Monique E. Rocca; Charles C. Rhoades; Michael G. Ryan
2010-01-01
Recent large-scale, severe wildfires in the western United States have prompted extensive mechanical fuel treatment programs to reduce potential wildfire size and severity. Fuel reduction prescriptions typically target non-merchantable material so approaches to mechanically treat and distribute residue on site are becoming increasingly common. We examined how mulch...
High-throughput screening and small animal models, where are we?
Giacomotto, Jean; Ségalat, Laurent
2010-01-01
Current high-throughput screening methods for drug discovery rely on the existence of targets. Moreover, most of the hits generated during screenings turn out to be invalid after further testing in animal models. To bypass these limitations, efforts are now being made to screen chemical libraries on whole animals. One of the most commonly used animal models in biology is the murine model Mus musculus. However, its cost limits its use in large-scale therapeutic screening. In contrast, the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the fish Danio rerio are gaining momentum as screening tools. These organisms combine genetic amenability, low cost and culture conditions that are compatible with large-scale screens. Their main advantage is to allow high-throughput screening in a whole-animal context. Moreover, their use is not dependent on the prior identification of a target and permits the selection of compounds with an improved safety profile. This review surveys the versatility of these animal models for drug discovery and discusses the options available today. PMID:20423335
Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre
2011-01-01
The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
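The regression step reduces to correlating every cell's fluorescence trace against a behavioral regressor and keeping the strongest matches. A minimal sketch, assuming z-scored traces and an eye-position regressor; the threshold, shapes, and toy data are illustrative assumptions:

```python
import numpy as np

def behavior_correlated_cells(fluor, behavior, threshold=0.5):
    """fluor: (n_cells, n_frames) fluorescence traces; behavior: (n_frames,)
    quantitative behavior measurement, e.g. eye position. Returns indices of
    putative encoding cells and the per-cell Pearson correlations."""
    f = (fluor - fluor.mean(axis=1, keepdims=True)) / fluor.std(axis=1, keepdims=True)
    b = (behavior - behavior.mean()) / behavior.std()
    r = f @ b / b.size  # Pearson correlation per cell (both sides z-scored)
    return np.flatnonzero(np.abs(r) >= threshold), r

# Toy data: 3 cells, one of which tracks the behavior trace.
rng = np.random.default_rng(0)
behavior = np.sin(np.linspace(0, 8 * np.pi, 400))
fluor = rng.normal(size=(3, 400))
fluor[1] += 2.0 * behavior  # cell 1 encodes eye position
hits, r = behavior_correlated_cells(fluor, behavior)
print("putative encoding cells:", hits, "correlations:", np.round(r, 2))
```

Cells passing the threshold would then be the candidates for targeted loose-patch recording or laser ablation, as in the study.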
SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2017-02-01
Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localizations is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments that include inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is a good candidate for integration into more general large-scale annotation pipelines of protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it. Contact: gigi@biocomp.unibo.it.
The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface
Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.
2014-01-01
Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users’ cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, placed and labeled targets on maps more accurately, and estimated the size of the virtual environment more accurately. PMID:22184262
Kinetics of Aggregation with Choice
Ben-Naim, Eli; Krapivsky, Paul
2016-12-01
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
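The merger rule is simple enough to simulate directly. A minimal Monte Carlo sketch of aggregation with choice, merging a random target cluster with the larger of two random candidates; the system size and event count are illustrative:

```python
import random
from collections import Counter

def aggregate_with_choice(n_monomers=10_000, events=9_000, seed=1):
    """Each event: pick one target and two candidate clusters at random;
    the target merges with the LARGER of the two candidates."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers
    for _ in range(events):
        i, j, k = rng.sample(range(len(clusters)), 3)
        winner = j if clusters[j] >= clusters[k] else k
        clusters[i] += clusters[winner]
        clusters.pop(winner)  # i != winner, and clusters[i] is already updated
    return clusters

sizes = Counter(aggregate_with_choice())
print(sorted(sizes.items())[:10])  # size density; inspect tails vs moderate sizes
```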
European large-scale farmland investments and the land-water-energy-food nexus
NASA Astrophysics Data System (ADS)
Siciliano, Giuseppina; Rulli, Maria Cristina; D'Odorico, Paolo
2017-12-01
The escalating human demand for food, water, energy, fibres and minerals has resulted in increasing commercial pressures on land and water resources, which are partly reflected in the recent increase in transnational land investments. Studies have shown that many of the land-water issues associated with land acquisitions are directly related to the areas of energy and food production. This paper explores the land-water-energy-food nexus in relation to large-scale farmland investments pursued by investors from European countries. The analysis is based on a "resource assessment approach" which evaluates the linkages between land acquisitions for agricultural (including both energy and food production) and forestry purposes, and the availability of land and water in the target countries. To that end, the water appropriated by agricultural and forestry production is quantitatively assessed and its impact on water resource availability is analysed. The analysis is meant to provide useful information to investors from EU countries and to policy makers on aspects of resource acquisition, scarcity, and access, in order to promote responsible land investments in the target countries.
Nam, Moon; Kim, Jeong-Seon; Lim, Seungmo; Park, Chung Youl; Kim, Jeong-Gyu; Choi, Hong-Soo; Lim, Hyoun-Sub; Moon, Jae Sun; Lee, Su-Heon
2014-01-01
A large-scale oligonucleotide (LSON) chip was developed for the detection of plant viruses with known genetic information. The LSON chip contains two sets of 3,978 probes for 538 species of targets including plant viruses, satellite RNAs and viroids. One hundred forty thousand probes, consisting of isolate-, species- and genus-specific probes, were designed from 20,000 independent nucleotide sequences of plant viruses. Based on economic importance, the amount of genome information, and the number of strains and/or isolates, one to fifty-one probes for each target virus were selected and spotted on the chip. Standard and field samples for the analysis of the LSON chip were prepared and tested by RT-PCR. The probes’ specific and/or nonspecific reaction patterns on the LSON chip allow unidentified viruses to be diagnosed. Thus, the LSON chip in this study could be highly useful for the detection of unexpected plant viruses, the monitoring of emerging viruses, and the tracking of fluctuations in the populations of major viruses in each plant. PMID:25288985
Viral Organization of Human Proteins
Wuchty, Stefan; Siwo, Geoffrey; Ferdig, Michael T.
2010-01-01
Although maps of intracellular interactions are increasingly well characterized, little is known about large-scale maps of host-pathogen protein interactions. The investigation of host-pathogen interactions can reveal features of pathogenesis and provide a foundation for the development of drugs and disease prevention strategies. A compilation of experimentally verified interactions between HIV-1 and human proteins and a set of HIV-dependency factors (HDF) allowed insights into the topology and intricate interplay between viral and host proteins on a large scale. We found that targeted and HDF proteins appear predominantly in rich-clubs, groups of human proteins that are strongly intertwined among each other. These assemblies of proteins may serve as an infection gateway, allowing the virus to take control of the human host by reaching protein pathways and diversified cellular functions in a pronounced and focused way. Particular transcription factors and protein kinases facilitate indirect interactions between HDFs and viral proteins. Discerning the entanglement of directly targeted and indirectly interacting proteins may uncover molecular and functional sites that can provide novel perspectives on the progression of HIV infection and highlight new avenues to fight this virus. PMID:20827298
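Rich-club structure of the kind described here is directly measurable with standard network tools. A hedged sketch using networkx's rich-club coefficient on a synthetic stand-in for a human PPI network; a normalized coefficient above 1 at high degree indicates that hubs are more intertwined than degree-matched chance would predict:

```python
import networkx as nx

# Synthetic heavy-tailed graph as a stand-in for a human PPI network.
G = nx.barabasi_albert_graph(2_000, m=4, seed=7)

# phi_norm(k): link density among nodes of degree > k, normalized by the
# same quantity in degree-preserving randomizations of the graph.
rc = nx.rich_club_coefficient(G, normalized=True)
for k in (5, 10, 20, 40):
    print(f"phi_norm(k={k}) = {rc[k]:.2f}")
```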
HIGH-EFFICIENCY AUTONOMOUS LASER ADAPTIVE OPTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baranec, Christoph; Riddle, Reed; Tendulkar, Shriharsh
2014-07-20
As new large-scale astronomical surveys greatly increase the number of objects targeted and discoveries made, the requirement for efficient follow-up observations is crucial. Adaptive optics imaging, which compensates for the image-blurring effects of Earth's turbulent atmosphere, is essential for these surveys, but the scarcity, complexity and high demand of current systems limit their availability for following up large numbers of targets. To address this need, we have engineered and implemented Robo-AO, a fully autonomous laser adaptive optics and imaging system that routinely images over 200 objects per night with an acuity 10 times sharper at visible wavelengths than typically possible from the ground. By greatly improving the angular resolution, sensitivity, and efficiency of 1-3 m class telescopes, we have eliminated a major obstacle in the follow-up of the discoveries from current and future large astronomical surveys.
Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro
2011-04-14
Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials examining cardiovascular diseases that evaluated true endpoints and involved 300 or more participants using Pub-Med, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November, 2004, 25 February, 2007 and 25 July, 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs). Of 152 trials, 9.2% of the trials examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.
The role of Natural Flood Management in managing floods in large scale basins during extreme events
NASA Astrophysics Data System (ADS)
Quinn, Paul; Owen, Gareth; ODonnell, Greg; Nicholson, Alex; Hetherington, David
2016-04-01
There is a strong evidence base showing the negative impacts of land use intensification and soil degradation in NW European river basins on hydrological response and on flood impact downstream. However, the ability to target zones of high runoff production, and the extent to which we can manage flood risk using nature-based flood management solutions, are less well known. A move to planting more trees and having less intensively farmed landscapes is part of natural flood management (NFM) solutions, and these methods suggest that flood risk can be managed in alternative and more holistic ways. So which local NFM methods should be used, where in a large scale basin should they be deployed, and how does flow propagate to any point downstream? Generally, how much intervention is needed, and will it compromise food production systems? If we are observing record levels of rainfall and flow, for example during Storm Desmond in December 2015 in the North West of England, what other flood management options are really needed to complement our traditional defences in large basins for the future? In this paper we will show examples of NFM interventions in the UK that have had an impact at local sites. We will demonstrate the impact of interventions at the local scale, at the sub-catchment (meso) scale, and finally at the large scale. The tools used include observations, process based models and more generalised Flood Impact Models. Issues of synchronisation and the design level of protection will be debated. By reworking observed rainfall and discharge (runoff) for extreme events in the River Eden and River Tyne during Storm Desmond, we will show how much flood protection is needed in large scale basins. The research will thus pose a number of key questions as to how floods may have to be managed in large scale basins in the future. We will seek to support a method of catchment systems engineering that holds water back across the whole landscape as a major opportunity for managing water in large scale basins. The broader benefits of engineering landscapes to hold water, for pollution control, sediment loss and drought minimisation, will also be shown.
Vilar, Santiago; Hripcsak, George
2016-01-01
Drug-target identification is crucial to discover novel applications for existing drugs and provide more insights about mechanisms of biological actions, such as adverse drug effects (ADEs). Computational methods along with the integration of current big data sources provide a useful framework for drug-target and drug-adverse effect discovery. In this article, we propose a method based on the integration of 3D chemical similarity, target and adverse effect data to generate a drug-target-adverse effect predictor along with a simple leveraging system to improve identification of drug-targets and drug-adverse effects. In the first step, we generated a system for multiple drug-target identification based on the application of 3D drug similarity to a large target dataset extracted from ChEMBL. Next, we developed a target-adverse effect predictor combining targets from ChEMBL with phenotypic information provided by the SIDER data source. Both modules were linked to generate a final predictor that establishes hypotheses about new drug-target-adverse effect candidates. Additionally, we showed that leveraging drug-target candidates with phenotypic data is very useful for improving the identification of drug-targets. The integration of phenotypic data into drug-target candidates yielded up to a twofold precision improvement. In the opposite direction, leveraging drug-phenotype candidates with target data also yielded a significant enhancement in performance. The modeling described in the current study is simple and efficient and has applications at large scale in drug repurposing and drug safety through the identification of the mechanisms of action of biological effects.
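The two-step logic (similarity-based candidate generation, then leveraging with phenotypic data) can be sketched compactly. Below is a hedged toy version: drug-target pairs are scored by the maximum 3D similarity between the query drug and each target's known ligands, and candidates supported by matching adverse-effect profiles are boosted; the data structures, scoring, and boost factor are illustrative assumptions, not the paper's model:

```python
from collections import defaultdict

def rank_targets(query_sims, ligand_targets, ade_supported, boost=2.0):
    """query_sims: {ligand_id: 3D similarity of the query drug to ligand, 0..1}
    ligand_targets: {ligand_id: set of targets the ligand is active on}
    ade_supported: targets whose known adverse-effect profile matches the
    query drug's. Returns targets ranked by (optionally boosted) score."""
    scores = defaultdict(float)
    for ligand, sim in query_sims.items():
        for target in ligand_targets.get(ligand, ()):
            scores[target] = max(scores[target], sim)
    return sorted(((t, s * (boost if t in ade_supported else 1.0))
                   for t, s in scores.items()), key=lambda ts: -ts[1])

ranked = rank_targets(
    query_sims={"lig_a": 0.9, "lig_b": 0.7, "lig_c": 0.4},
    ligand_targets={"lig_a": {"HERG"}, "lig_b": {"5HT2A", "HERG"}, "lig_c": {"D2"}},
    ade_supported={"5HT2A"},
)
print(ranked)  # phenotypic leveraging promotes 5HT2A despite lower similarity
```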
Ishimori, Yuu; Mitsunobu, Fumihiro; Yamaoka, Kiyonori; Tanaka, Hiroshi; Kataoka, Takahiro; Sakoda, Akihiro
2011-07-01
A radon test facility for small animals was developed in order to increase the statistical validity of differences in biological response across various radon environments. This paper describes the performance of that facility, the first large-scale facility of its kind in Japan. The facility is capable of conducting approximately 150 mouse-scale tests at the same time. The apparatus for exposing small animals to radon has six animal chamber groups with five independent cages each, and a different radon concentration is available in each animal chamber group. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions for controlling radon and eliminating thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were examined experimentally for consideration in future projects.
Kongelf, Anine; Bandewar, Sunita V. S.; Bharat, Shalini; Collumbien, Martine
2015-01-01
Background In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled up in India’s national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation’s Avahan programme, which adopted a business approach to plan and manage implementation at scale. With evaluation efforts focused on measuring effectiveness and health impacts, there has been little analysis thus far of the interaction of CM interventions with the sex work industry in complex urban environments. Methods and Findings Between March and July 2012, semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions were shown not only to be affected by, but also to shape, the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as ‘sex workers’. This, combined with urban redevelopment and gentrification of traditional red light areas, forced dispersal and more ‘hidden’ ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and ‘pimps’ continued to restrict access to sex workers, and the heterogeneous ‘community’ of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention was not compelling. Interventions largely failed to respond to community needs, as strong target-orientation skewed activities towards those most easily measured and reported. Conclusion Large-scale interventions have been impacted by, and have contributed to, an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services. PMID:25811484
Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel
2018-03-01
Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.
Yu, Jinchao; Guerois, Raphaël
2016-12-15
Protein-protein docking methods are of great importance for understanding interactomes at the structural level. It has become increasingly appealing to use not only experimental structures but also homology models of unbound subunits as input for docking simulations. A large-scale assessment of the success of rigid-body free docking methods on homology models has so far been missing. We explored how we could benefit from comparative modelling of unbound subunits to expand docking benchmark datasets. Starting from a collection of 3157 non-redundant, high X-ray resolution heterodimers, we developed the PPI4DOCK benchmark containing 1417 docking targets based on unbound homology models. Rigid-body docking by Zdock showed that for 1208 cases (85.2%), at least one correct decoy was generated, emphasizing the efficiency of rigid-body docking in generating correct assemblies. Overall, the PPI4DOCK benchmark contains a large set of realistic cases and provides new ground for assessing docking and scoring methodologies. Benchmark sets can be downloaded from http://biodev.cea.fr/interevol/ppi4dock/. Contact: guerois@cea.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA
NASA Astrophysics Data System (ADS)
Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.
2015-11-01
Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.
HIV Topical Microbicides: Steer the Ship or Run Aground
Gross, Michael
2004-01-01
Six HIV candidate microbicides are scheduled to enter 6 large-scale effectiveness trials in the next year. The selection of products for testing and the design of this group of trials should be reconsidered to provide an answer to a key question now before the field: Does a sulfonated polyanion, delivered intravaginally as a gel, block HIV attachment to target cells with sufficient potency to protect women from sexually acquired HIV infection? Paradoxically, entering more candidates into more trials may confuse or compromise efforts to identify an effective product. Instead, a single trial of the most promising product(s) best serves the current candidates while also preserving resources needed to promptly advance innovative new protective concepts into future large-scale trials. PMID:15226123
Shandas, Vivek; Voelkel, Jackson; Rao, Meenakshi; George, Linda
2016-01-01
Reducing exposure to degraded air quality is essential for building healthy cities. Although air quality and population vary at fine spatial scales, current regulatory and public health frameworks assess human exposures at county or city scales. We build on a spatial analysis technique, dasymetric mapping, for allocating urban populations that, together with emerging fine-scale measurements of air pollution, addresses three objectives: (1) evaluate the role of spatial scale in estimating exposure; (2) identify urban communities that are disproportionately burdened by poor air quality; and (3) estimate the reduction in mobile sources of pollutants due to local tree-planting efforts, using nitrogen dioxide. Our results show a maximum difference of 197% between the cadastrally-informed dasymetric system (CIDS) and standard estimates of population exposure to degraded air quality for small spatial extent analyses, and a lack of substantial difference for large spatial extent analyses. These results provide a foundation for improving policies for managing air quality and for targeting mitigation efforts to address challenges of environmental justice. PMID:27527205
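Dasymetric allocation itself is a small computation. A minimal sketch, assuming cadastral residential floor area as the ancillary weight and an assumed fine-scale NO2 surface; all numbers are illustrative:

```python
import numpy as np

def dasymetric_allocate(tract_population, cell_weights):
    """Redistribute a census tract's population across its grid cells in
    proportion to an ancillary weight, e.g. residential floor area."""
    w = np.asarray(cell_weights, dtype=float)
    return tract_population * w / w.sum()

# One tract of 5,000 people over four cells; cell 0 is non-residential.
cell_pop = dasymetric_allocate(5_000, [0.0, 10.0, 30.0, 60.0])
no2 = np.array([55.0, 40.0, 25.0, 12.0])  # assumed fine-scale NO2, ug/m^3
pw_exposure = (cell_pop * no2).sum() / cell_pop.sum()
uniform = no2.mean()  # what an evenly spread, coarse allocation would give
print(f"population-weighted NO2: {pw_exposure:.1f} vs uniform {uniform:.1f}")
```

The gap between the two printed numbers is the kind of fine- versus coarse-scale exposure difference the study quantifies.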
SPH calculations of asteroid disruptions: The role of pressure dependent failure models
NASA Astrophysics Data System (ADS)
Jutzi, Martin
2015-03-01
We present recent improvements of the modeling of the disruption of strength dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, for many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find that it is crucial that friction is taken into account and the material has a pressure dependent shear strength. Our investigations of the catastrophic disruption threshold
Mlynek, Georg; Lehner, Anita; Neuhold, Jana; Leeb, Sarah; Kostan, Julius; Charnagalov, Alexej; Stolt-Bergner, Peggy; Djinović-Carugo, Kristina; Pinotsis, Nikos
2014-06-01
Expression in Escherichia coli represents the simplest and most cost effective means for the production of recombinant proteins. This is a routine task in structural biology and biochemistry, where milligrams of the target protein are required in high purity and monodispersity. To achieve these criteria, the user often needs to screen several constructs in different expression and purification conditions in parallel. We describe a pipeline, implemented in the Center for Optimized Structural Studies, that enables the systematic screening of expression and purification conditions for recombinant proteins and relies on a series of logical decisions. We first use bioinformatics tools to design a series of protein fragments, which we clone in parallel, and subsequently screen in small scale for optimal expression and purification conditions. Based on a scoring system that assesses soluble expression, we then select the top-ranking targets for large-scale purification. In the establishment of our pipeline, emphasis was put on streamlining the processes such that they can be easily, though not necessarily, automated. In a typical run of about 2 weeks, we are able to prepare and perform small-scale expression screens for 20-100 different constructs, followed by large-scale purification of at least 4-6 proteins. The major advantage of our approach is its flexibility, which allows for easy adoption, either partially or entirely, by any average hypothesis-driven laboratory in a manual or robot-assisted manner.
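A hedged sketch of the kind of scoring step used to pick constructs for scale-up: each construct receives a score from its small-scale expression and purification readouts, and the top-ranking targets go forward. The criteria and weights below are illustrative assumptions, not the Center's actual scheme:

```python
def construct_score(soluble_yield_mg_per_l, monodisperse, purity_pct):
    """Toy score: reward soluble yield and monodispersity, discount low purity."""
    score = soluble_yield_mg_per_l * (purity_pct / 100.0)
    return score * (1.5 if monodisperse else 0.5)

screens = {
    "frag_1-120":  construct_score(8.0,  True,  95),
    "frag_1-180":  construct_score(12.0, False, 80),
    "frag_40-180": construct_score(5.0,  True,  99),
}
for name, s in sorted(screens.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.1f}")  # top-ranked constructs go to large-scale purification
```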
Data management strategies for multinational large-scale systems biology projects.
Wruck, Wasco; Peuker, Martin; Regenbrecht, Christian R A
2014-01-01
Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don't Embrace Open Access. The Guardian 2012]. Through the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust have therefore developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs), which were initially introduced for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects, for example in systems biology, are hard to find. Here, we give an overview of a selection of open-source data management systems that have been employed successfully in large-scale projects. PMID:23047157
NASA Astrophysics Data System (ADS)
Beichner, Robert
2015-03-01
The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).
NASA Technical Reports Server (NTRS)
Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.
2003-01-01
The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure, fractures, physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g. Meteor Crater). Reverberations behind the shock from the presence of physical heterogeneity have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology to be used in the shock physics hydrocode (CTH) using Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity) allowing us to directly imprint the effects of micro-scale heterogeneity at the continuum level without incurring high computational cost.
Clustering on very small scales from a large, complete sample of confirmed quasar pairs
NASA Astrophysics Data System (ADS)
Eftekharzadeh, Sarah; Myers, Adam D.; Djorgovski, Stanislav G.; Graham, Matthew J.; Hennawi, Joseph F.; Mahabal, Ashish A.; Richards, Gordon T.
2016-06-01
We present by far the largest sample of spectroscopically confirmed binary quasars with proper transverse separations of 17.0 ≤ R_prop ≤ 36.6 h^-1 kpc. Our sample, which is an order of magnitude larger than previous samples, is selected from Sloan Digital Sky Survey (SDSS) imaging over an area corresponding to the SDSS 6th data release (DR6). Our quasars are targeted using a Kernel Density Estimation technique (KDE) and confirmed using long-slit spectroscopy on a range of facilities. Our most complete sub-sample of 44 binary quasars with g < 20.85 extends across angular scales of 2.9" < Δθ < 6.3" and is targeted from a parent sample that would be equivalent to a full spectroscopic survey of nearly 300,000 quasars. We determine the projected correlation function of quasars (W̄_p) over proper transverse scales of 17.0 ≤ R_prop ≤ 36.6 h^-1 kpc, and also in 4 bins of scale within this complete range. To investigate the redshift evolution of quasar clustering on small scales, we make the first self-consistent measurement of the projected quasar correlation function in 4 bins of redshift over 0.4 ≤ z ≤ 2.3.
Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading
Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan
2011-01-01
Background Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has yet to be clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law persists while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease. PMID:21747932
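The two scalings are easy to measure on any event stream. A minimal sketch on a synthetic stream of location visits drawn from a heavy-tailed distribution (an illustrative stand-in for the paper's metapopulation dynamics): rank-frequency gives Zipf's law, vocabulary growth gives Heaps' law:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic event stream: locations drawn from a heavy-tailed distribution.
stream = rng.zipf(a=2.0, size=100_000)

# Zipf: frequency of the r-th most common location versus rank r.
_, counts = np.unique(stream, return_counts=True)
freq = np.sort(counts)[::-1]

# Heaps: number of distinct locations seen after t events.
seen, growth = set(), []
for i, loc in enumerate(stream, 1):
    seen.add(int(loc))
    if i % 1000 == 0:
        growth.append((i, len(seen)))

zipf_slope = np.polyfit(np.log(np.arange(1, 51)), np.log(freq[:50]), 1)[0]
t, n = np.array(growth).T
heaps_slope = np.polyfit(np.log(t), np.log(n), 1)[0]
print(f"Zipf exponent ~ {-zipf_slope:.2f}, Heaps exponent ~ {heaps_slope:.2f}")
```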
Dictyostelium mobile elements: strategies to amplify in a compact genome.
Winckler, T; Dingermann, T; Glöckner, G
2002-12-01
Dictyostelium discoideum is a eukaryotic microorganism that is attractive for the study of fundamental biological phenomena such as cell-cell communication, formation of multicellularity, cell differentiation and morphogenesis. Large-scale sequencing of the D. discoideum genome has provided new insights into evolutionary strategies evolved by transposable elements (TEs) to settle in compact microbial genomes and to maintain active populations over evolutionary time. The high gene density (about 1 gene/2.6 kb) of the D. discoideum genome leaves limited space for selfish molecular invaders to move and amplify without causing deleterious mutations that eradicate their host. Targeting of transfer RNA (tRNA) gene loci appears to be a generally successful strategy for TEs residing in compact genomes to insert away from coding regions. In D. discoideum, tRNA gene-targeted retrotransposition has evolved independently at least three times by both non-long terminal repeat (LTR) retrotransposons and retrovirus-like LTR retrotransposons. Unlike the nonspecifically inserting D. discoideum TEs, which have a strong tendency to insert into preexisting TE copies and form large and complex clusters near the ends of chromosomes, the tRNA gene-targeted retrotransposons have managed to occupy 75% of the tRNA gene loci spread on chromosome 2 and represent 80% of the TEs recognized on the assembled central 6.5-Mb part of chromosome 2. In this review we update the available information about D. discoideum TEs which emerges both from previous work and current large-scale genome sequencing, with special emphasis on the fact that tRNA genes are principal determinants of retrotransposon insertions into the D. discoideum genome.
Motivation: As cancer genomics initiatives move toward comprehensive identification of genetic alterations in cancer, attention is now turning to understanding how interactions among these genes lead to the acquisition of tumor hallmarks. Emerging pharmacological and clinical data suggest a highly promising role of cancer-specific protein-protein interactions (PPIs) as druggable cancer targets. However, large-scale experimental identification of cancer-related PPIs remains challenging, and currently available resources to explore oncogenic PPI networks are limited.
Towards the computation of time-periodic inertial range dynamics
NASA Astrophysics Data System (ADS)
van Veen, L.; Vela-Martín, A.; Kawahara, G.
2018-04-01
We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than on conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.
Pechsiri, Joseph S; Thomas, Jean-Baptiste E; Risén, Emma; Ribeiro, Mauricio S; Malmström, Maria E; Nylund, Göran M; Jansson, Anette; Welander, Ulrika; Pavia, Henrik; Gröndahl, Fredrik
2016-12-15
The cultivation of seaweed as a feedstock for third-generation biofuels is gathering interest in Europe; however, many questions remain unanswered in practice, notably regarding scales of operation, energy return on investment (EROI) and greenhouse gas (GHG) emissions, all of which are crucial to determine commercial viability. This study performed an energy and GHG emissions analysis, using EROI and GHG savings potential respectively as indicators of commercial viability, for two systems: the Swedish Seafarm project's seaweed cultivation (0.5 ha), biogas and fertilizer biorefinery, and an estimation of the same system scaled up and adjusted to a cultivation of 10 ha. Based on a conservative estimate of biogas yield, neither the 0.5 ha case nor the up-scaled 10 ha estimate met the commercial-viability target EROI of 3, nor the European Union Renewable Energy Directive GHG savings target of 60% for biofuels. However, the potential for commercial viability was substantially improved by scaling up operations: GHG emissions and energy demand per unit of biogas were almost halved by scaling operations up by a factor of twenty, thereby approaching the EROI and GHG savings targets under beneficial biogas production conditions. Further analysis identified processes whose optimisation would have a large impact on energy use and emissions (such as anaerobic digestion) as well as others embodying potential for further economies of scale (such as harvesting), both of which would be of interest for future developments of kelp-to-biogas-and-fertilizer biorefineries. Copyright © 2016. Published by Elsevier B.V.
Deep Extragalactic VIsible Legacy Survey (DEVILS): Motivation, Design and Target Catalogue
NASA Astrophysics Data System (ADS)
Davies, L. J. M.; Robotham, A. S. G.; Driver, S. P.; Lagos, C. P.; Cortese, L.; Mannering, E.; Foster, C.; Lidman, C.; Hashemizadeh, A.; Koushan, S.; O'Toole, S.; Baldry, I. K.; Bilicki, M.; Bland-Hawthorn, J.; Bremer, M. N.; Brown, M. J. I.; Bryant, J. J.; Catinella, B.; Croom, S. M.; Grootes, M. W.; Holwerda, B. W.; Jarvis, M. J.; Maddox, N.; Meyer, M.; Moffett, A. J.; Phillipps, S.; Taylor, E. N.; Windhorst, R. A.; Wolf, C.
2018-06-01
The Deep Extragalactic VIsible Legacy Survey (DEVILS) is a large spectroscopic campaign at the Anglo-Australian Telescope (AAT) aimed at bridging the near and distant Universe by producing the highest completeness survey of galaxies and groups at intermediate redshifts (0.3 < z < 1.0). Our sample consists of ~60,000 galaxies to Y < 21.2 mag, over ~6 deg^2 in three well-studied deep extragalactic fields (Cosmic Origins Survey field, COSMOS, Extended Chandra Deep Field South, ECDFS and the X-ray Multi-Mirror Mission Large-Scale Structure region, XMM-LSS - all Large Synoptic Survey Telescope deep-drill fields). This paper presents the broad experimental design of DEVILS. Our target sample has been selected from deep Visible and Infrared Survey Telescope for Astronomy (VISTA) Y-band imaging (VISTA Deep Extragalactic Observations, VIDEO and UltraVISTA), with photometry measured by PROFOUND. Photometric star/galaxy separation is done on the basis of NIR colours, and has been validated by visual inspection. To maximise our observing efficiency for faint targets we employ a redshift feedback strategy, which continually updates our target lists, feeding back the results from the previous night's observations. We also present an overview of the initial spectroscopic observations undertaken in late 2017 and early 2018.
Liu, Xian; Xu, Yuan; Li, Shanshan; Wang, Yulan; Peng, Jianlong; Luo, Cheng; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang
2014-01-01
Ligand-based in silico target fishing can be used to identify the potential interacting target of bioactive ligands, which is useful for understanding the polypharmacology and safety profile of existing drugs. The underlying principle of the approach is that known bioactive ligands can be used as reference to predict the targets for a new compound. We tested a pipeline enabling large-scale target fishing and drug repositioning, based on simple fingerprint similarity rankings with data fusion. A large library containing 533 drug relevant targets with 179,807 active ligands was compiled, where each target was defined by its ligand set. For a given query molecule, its target profile is generated by similarity searching against the ligand sets assigned to each target, for which individual searches utilizing multiple reference structures are then fused into a single ranking list representing the potential target interaction profile of the query compound. The proposed approach was validated by 10-fold cross validation and two external tests using data from DrugBank and Therapeutic Target Database (TTD). The use of the approach was further demonstrated with some examples concerning the drug repositioning and drug side-effects prediction. The promising results suggest that the proposed method is useful for not only finding promiscuous drugs for their new usages, but also predicting some important toxic liabilities. With the rapid increasing volume and diversity of data concerning drug related targets and their ligands, the simple ligand-based target fishing approach would play an important role in assisting future drug design and discovery.
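The core of such a ligand-based pipeline can be sketched in a few lines. The toy sketch below uses hypothetical data (real pipelines use 2D chemical fingerprints and large ligand libraries): it ranks targets by Tanimoto similarity to a query and fuses the per-ligand similarities with a MAX rule, one common data-fusion choice that may differ from the paper's exact scheme:

```python
# Hypothetical mini-library: each target is defined by its ligands'
# fingerprints, represented here as toy feature-bit sets.
def tanimoto(a: frozenset, b: frozenset) -> float:
    """Tanimoto similarity of two bit sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

library = {
    "TargetA": [frozenset({1, 2, 3, 7}), frozenset({2, 3, 8})],
    "TargetB": [frozenset({4, 5, 6}), frozenset({5, 6, 9})],
}

def target_profile(query: frozenset, lib: dict) -> list:
    """Fuse per-ligand similarities into one score per target (MAX fusion)
    and return targets ranked by that score."""
    scores = {t: max(tanimoto(query, fp) for fp in fps)
              for t, fps in lib.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(target_profile(frozenset({2, 3, 7}), library))
# -> [('TargetA', 0.75), ('TargetB', 0.0)]
```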
Fitzpatrick, Stephanie L; Hill-Briggs, Felicia
2015-10-01
Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients with high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective problem solving subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise in the ability to assess health-related problem solving among high risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations in the future.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
Aerospace Laser Ignition/Ablation Variable High Precision Thruster
NASA Technical Reports Server (NTRS)
Campbell, Jonathan W. (Inventor); Edwards, David L. (Inventor); Campbell, Jason J. (Inventor)
2015-01-01
A laser ignition/ablation propulsion system that captures the advantages of both liquid and solid propulsion. A reel system is used to move a propellant tape containing a plurality of propellant material targets through an ignition chamber. When a propellant target is in the ignition chamber, a laser beam from a laser positioned above the ignition chamber strikes the propellant target, igniting the propellant material and resulting in a thrust impulse. The propellant tape is advanced, carrying another propellant target into the ignition chamber. The propellant tape and ignition chamber are designed to ensure that each ignition event is isolated from the remaining propellant targets. Thrust and specific impulse may be precisely controlled by varying the synchronized propellant tape/laser speed. The laser ignition/ablation propulsion system may be scaled for use in small and large applications.
Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
Zhang, Sirou; Qiao, Xiaoya
2017-01-01
In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weight at each of five scales is employed to determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification. We classify the video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. Compared to the KCF tracker, the results of the proposed algorithm are improved by 33.3%, 15%, 6%, 21.9% and 19.8% in scenes with scale variation, partial or long-time large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
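A minimal sketch of the five-scale estimation idea described above (the scale factors and weights below are invented for illustration; the paper's actual values and weighting rule may differ): evaluate the correlation peak at each candidate scale, weight it, and keep the best-scoring scale:

```python
import numpy as np

# Candidate scale factors around the current target size, with fixed
# weights that mildly penalize large scale jumps (assumed values).
scales = np.array([0.95, 0.975, 1.0, 1.025, 1.05])
weights = np.array([0.96, 0.98, 1.0, 0.98, 0.96])

def best_scale(peak_responses):
    """Pick the final target scale from the peak correlation response
    measured at each of the five candidate scales."""
    scores = np.asarray(peak_responses) * weights
    return scales[int(np.argmax(scores))]

print(best_scale([0.41, 0.44, 0.46, 0.47, 0.40]))  # -> 1.025
```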
Korte, Andrew R.; Stopka, Sylwia A.; Morris, Nicholas; ...
2016-07-11
The unique challenges presented by metabolomics have driven the development of new mass spectrometry (MS)-based techniques for small molecule analysis. We have previously demonstrated silicon nanopost arrays (NAPA) to be an effective substrate for laser desorption ionization (LDI) of small molecules for MS. However, the utility of NAPA-LDI-MS for a wide range of metabolite classes has not been investigated. Here we apply NAPA-LDI-MS to the large-scale acquisition of high-resolution mass spectra and tandem mass spectra from a collection of metabolite standards covering a range of compound classes including amino acids, nucleotides, carbohydrates, xenobiotics, lipids, and other classes. In untargeted analysis of metabolite standard mixtures, detection was achieved for 374 compounds and useful MS/MS spectra were obtained for 287 compounds, without individual optimization of ionization or fragmentation conditions. Metabolite detection was evaluated in the context of 31 metabolic pathways, and NAPA-LDI-MS was found to provide detection for 63% of investigated pathway metabolites. Individual, targeted analysis of the 20 common amino acids provided detection of 100% of the investigated compounds, demonstrating that improved coverage is possible through optimization and targeting of individual analytes or analyte classes. In direct analysis of aqueous and organic extracts from human serum samples, spectral features were assigned to a total of 108 small metabolites and lipids. Glucose and amino acids were quantitated within their physiological concentration ranges. Finally, the broad coverage demonstrated by this large-scale screening experiment opens the door for use of NAPA-LDI-MS in numerous metabolite analysis applications.
Genome-wide map of Apn1 binding sites under oxidative stress in Saccharomyces cerevisiae.
Morris, Lydia P; Conley, Andrew B; Degtyareva, Natalya; Jordan, I King; Doetsch, Paul W
2017-11-01
The DNA in cells is continuously exposed to reactive oxygen species, resulting in toxic and mutagenic DNA damage. Although the repair of oxidative DNA damage occurs primarily through the base excision repair (BER) pathway, the nucleotide excision repair (NER) pathway processes some of the same lesions. In addition, damage tolerance mechanisms, such as recombination and translesion synthesis, enable cells to tolerate oxidative DNA damage, especially when BER and NER capacities are exceeded. Thus, disruption of BER alone or of both BER and NER in Saccharomyces cerevisiae leads to increased mutations as well as large-scale genomic rearrangements. Previous studies demonstrated that a particular region of chromosome II is susceptible to chronic oxidative stress-induced chromosomal rearrangements, suggesting the existence of DNA damage and/or DNA repair hotspots. Here we investigated the relationship between oxidative damage and genomic instability utilizing chromatin immunoprecipitation combined with DNA microarray technology to profile DNA repair sites along yeast chromosomes under different oxidative stress conditions. We targeted the major yeast AP endonuclease Apn1 as a representative BER protein. Our results indicate that Apn1 target sequences are enriched for cytosine and guanine nucleotides. We predict that BER protects these sites in the genome because guanines and cytosines are thought to be especially susceptible to oxidative attack, thereby preventing large-scale genome destabilization from chronic accumulation of DNA damage. Information from our studies should provide insight into how regional deployment of oxidative DNA damage management systems along chromosomes protects against large-scale rearrangements. Copyright © 2017 John Wiley & Sons, Ltd.
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
Reaching the global target to reduce stunting: an investment framework.
Shekar, Meera; Kakietek, Jakub; D'Alimonte, Mary R; Rogers, Hilary E; Eberwein, Julia Dayton; Akuoku, Jon Kweku; Pereira, Audrey; Soe-Lin, Shan; Hecht, Robert
2017-06-01
Childhood stunting, being short for one's age, has life-long consequences for health, human capital and economic growth. Being stunted in early childhood is associated with slower cognitive development, reduced schooling attainment and adult incomes that are 5-53% lower. The World Health Assembly has endorsed global nutrition targets, including one to reduce the number of stunted children under five by 40% by 2025. The target has also been included in the Sustainable Development Goals (SDG target 2.2). This paper estimates the cost of achieving this target and develops scenarios for generating the necessary financing. We focus on a key intervention package for stunting (KIPS) with strong evidence of effectiveness. Annual scale-up costs for the period 2016-25 were estimated for a sample of 37 high-burden countries and extrapolated to all low- and middle-income countries. The Lives Saved Tool was used to model the impact of the scale-up on stunting prevalence. We analysed data on KIPS budget allocations and expenditure by governments, donors and households to derive a global baseline financing estimate. We modelled two financing scenarios: 'business as usual', which extends the current trends in domestic and international financing for nutrition through 2025, and another that proposes increases in financing from all sources under a set of burden-sharing rules. The 10-year financial need to scale up KIPS is US$49.5 billion. Under 'business as usual', this financial need is not met and the global stunting target is not reached. To reach the target, current financing will have to increase from US$2.6 billion to US$7.4 billion a year on average. Reaching the stunting target is feasible but will require large, coordinated investments in KIPS and a supportive enabling environment. The example of HIV scale-up over 2001-11 is instructive in identifying the factors that could drive such a global response to childhood stunting. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
USDA-ARS?s Scientific Manuscript database
Along riparian corridors throughout the arid and semiarid regions of the western United States, non-native shrubs and trees in the genus Tamarix have replaced native vegetation. Plant communities along rivers with altered flow regimes and flood control have become particularly vulnerable to widespre...
Targeted business intelligence pays off.
Hennen, James
2009-03-01
Application business intelligence can accomplish much of what large-scale, enterprisewide efforts can accomplish: focus on a variety of data that are interrelated in a meaningful way; support decision making at multiple levels within a given organization; leverage data that are already captured but not fully used; and provide actionable information and support quick response via a dashboard or control panel.
Historical open forest ecosystems in the Missouri Ozarks: reconstruction and restoration targets
Brice B. Hanberry; D. Todd Jones-Farrand; John M. Kabrick
2014-01-01
Current forests no longer resemble historical open forest ecosystems in the eastern United States. In the absence of representative forest ecosystems under a continuous surface fire regime at a large scale, reconstruction of historical landscapes can provide a reference for restoration efforts. For initial expert-assigned vegetation phases ranging from prairie to...
ERIC Educational Resources Information Center
Shamel, Kimberly A.
2013-01-01
Bullying is a large-scale social problem impacting educational systems nationwide, and has been linked to negative outcomes for both bullies and targets. Bullying has become more highly technological and is most often referred to as cyber bullying. Bullies have begun to use the internet, social networking sites, e-mail, instant messaging (IM),…
ERIC Educational Resources Information Center
Hanselman, Paul; Rozek, Christopher S.; Grigg, Jeffrey; Borman, Geoffrey D.
2017-01-01
Brief, targeted self-affirmation writing exercises have recently been offered as a way to reduce racial achievement gaps, but evidence about their effects in educational settings is mixed, leaving ambiguity about the likely benefits of these strategies if implemented broadly. A key limitation in interpreting these mixed results is that they come…
A Day in Third Grade: A Large-Scale Study of Classroom Quality and Teacher and Student Behavior
ERIC Educational Resources Information Center
Elementary School Journal, 2005
2005-01-01
Observations of 780 third-grade classrooms described classroom activities, child-teacher interactions, and dimensions of the global classroom environment, which were examined in relation to structural aspects of the classroom and child behavior. 1 child per classroom was targeted for observation in relation to classroom quality and teacher and…
Relating drug–protein interaction network with drug side effects
Mizutani, Sayaka; Pauwels, Edouard; Stoven, Véronique; Goto, Susumu; Yamanishi, Yoshihiro
2012-01-01
Motivation: Identifying the emergence and underlying mechanisms of drug side effects is a challenging task in the drug development process. This underscores the importance of system-wide approaches for linking different scales of drug actions, namely drug–protein interactions (molecular scale) and side effects (phenotypic scale), toward side effect prediction for uncharacterized drugs. Results: We performed a large-scale analysis to extract correlated sets of targeted proteins and side effects, based on the co-occurrence of drugs in protein-binding profiles and side effect profiles, using sparse canonical correlation analysis. The analysis of 658 drugs with the two profiles for 1368 proteins and 1339 side effects led to the extraction of 80 correlated sets. Enrichment analyses using KEGG and Gene Ontology showed that most of the correlated sets were significantly enriched with proteins that are involved in the same biological pathways, even if their molecular functions are different. This allowed for a biologically relevant interpretation regarding the relationship between drug-targeted proteins and side effects. The extracted side effects can be regarded as possible phenotypic outcomes of drugs targeting the proteins that appear in the same correlated set. The proposed method is expected to be useful for predicting potential side effects of new drug candidate compounds based on their protein-binding profiles. Supplementary information: Datasets and all results are available at http://web.kuicr.kyoto-u.ac.jp/supp/smizutan/target-effect/. Availability: Software is available at the above supplementary website. Contact: yamanishi@bioreg.kyushu-u.ac.jp, or goto@kuicr.kyoto-u.ac.jp PMID:22962476
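For readers who want to experiment with the general idea, the sketch below runs ordinary CCA from scikit-learn on random stand-in matrices. Note that the paper uses a sparse CCA variant, which plain scikit-learn does not provide, so this is only a structural illustration with invented data:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_drugs = 100
X = rng.integers(0, 2, size=(n_drugs, 40)).astype(float)  # protein-binding profiles
Y = rng.integers(0, 2, size=(n_drugs, 30)).astype(float)  # side-effect profiles

# Extract 5 canonical components linking the two profile spaces.
cca = CCA(n_components=5)
Xc, Yc = cca.fit_transform(X, Y)

# Loadings link proteins and side effects within each canonical component;
# in the paper's sparse setting, the large-magnitude entries play the role
# of the "correlated sets" of targeted proteins and side effects.
print(cca.x_loadings_.shape, cca.y_loadings_.shape)  # (40, 5) (30, 5)
```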
Numerical dissipation vs. subgrid-scale modelling for large eddy simulation
NASA Astrophysics Data System (ADS)
Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos
2017-05-01
This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and its dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with a spectral viscosity prescribed on a physical basis.
Community-based native seed production for restoration in Brazil - the role of science and policy.
Schmidt, I B; de Urzedo, D I; Piña-Rodrigues, F C M; Vieira, D L M; de Rezende, G M; Sampaio, A B; Junqueira, R G P
2018-05-20
Large-scale restoration programmes in the tropics require large volumes of high quality, genetically diverse and locally adapted seeds from a large number of species. However, scarcity of native seeds is a critical restriction to achieve restoration targets. In this paper, we analyse three successful community-based networks that supply native seeds and seedlings for Brazilian Amazon and Cerrado restoration projects. In addition, we propose directions to promote local participation, legal, technical and commercialisation issues for up-scaling the market of native seeds for restoration with high quality and social justice. We argue that effective community-based restoration arrangements should follow some principles: (i) seed production must be based on real market demand; (ii) non-governmental and governmental organisations have a key role in supporting local organisation, legal requirements and selling processes; (iii) local ecological knowledge and labour should be valued, enabling local communities to promote large-scale seed production; (iv) applied research can help develop appropriate techniques and solve technical issues. The case studies from Brazil and principles presented here can be useful for the up-scaling restoration ecology efforts in many other parts of the world and especially in tropical countries where improving rural community income is a strategy for biodiversity conservation and restoration. © 2018 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.
Mesoderm Lineage 3D Tissue Constructs Are Produced at Large-Scale in a 3D Stem Cell Bioprocess.
Cha, Jae Min; Mantalaris, Athanasios; Jung, Sunyoung; Ji, Yurim; Bang, Oh Young; Bae, Hojae
2017-09-01
Various studies have presented different approaches to direct pluripotent stem cell differentiation, such as applying defined sets of exogenous biochemical signals and genetic/epigenetic modifications. Although differentiation to target lineages can be successfully regulated, such conventional methods are often complicated, laborious, and not cost-effective for the large-scale production of 3D stem cell-based tissue constructs. Here, a 3D-culture platform that could realize the large-scale production of mesoderm lineage tissue constructs from embryonic stem cells (ESCs) is developed. ESCs are cultured using our previously established 3D-bioprocess platform, which is amenable to the mass production of 3D ESC-based tissue constructs. Hepatocarcinoma cell line conditioned medium is introduced to the large-scale 3D culture to provide a specific biomolecular microenvironment mimicking the in vivo mesoderm formation process. After a 5-day spontaneous differentiation period, the resulting 3D tissue constructs are composed of multipotent mesodermal progenitor cells, as verified by gene and molecular expression profiles. Subsequently, the optimal time points to trigger terminal differentiation towards cardiomyogenesis or osteogenesis from the mesodermal tissue constructs are found. A simple and affordable 3D ESC bioprocess that can reach scalable production of mesoderm-origin tissues with significantly improved corresponding tissue properties is demonstrated. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.
2015-12-01
Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered a climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL), we comprehensively assess the effectiveness, biogeochemical side-effects and tradeoffs from an earth system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, under the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1-4.2 Gha would produce a climate reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and the potential global warming reduction is substantially lowered to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure from both land use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to reverse the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented at small scales on wisely chosen areas.
The scientific targets of the SCOPE mission
NASA Astrophysics Data System (ADS)
Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.
The future Japanese magnetospheric mission "SCOPE" is now under study (planned to be launched in 2012). The main purpose of this mission is to investigate the dynamic behavior of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, even when large scale as a whole, are characterized by coupling over various temporal and spatial scales. The best example is the magnetic reconnection process, which is a large-scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in liberating the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector that has 10 ms time resolution, so that scales down to the electron scale will be resolved. Three of the four daughter satellites surround the mother satellite three-dimensionally at mutual distances between several km and several thousand km, which are varied during the mission. Plasma measurements on these spacecraft will have 1 s resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. By correlating the two plasma wave instruments on the daughter and mother spacecraft, the propagation of the waves and information on the electron-scale dynamics will be obtained. With this strategy, both meso- and micro-scale information on the dynamics is obtained, enabling us to investigate the physics of space plasma from the cross-scale coupling point of view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrier, C.; Holcman, D., E-mail: david.holcman@ens.fr; Mathematical Institute, Oxford OX2 6GG, Newton Institute
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, that are characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.
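A minimal sketch of the second (Gillespie) approach: each free ion is treated as an independent Poisson clock whose rate stands in for the narrow-escape rate to a small target. The rate value below is invented for illustration, not derived from any particular domain geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_arrivals(n_ions=50, rate_per_ion=0.8, t_max=20.0):
    """Return arrival times of ions to a small target, replacing explicit
    Brownian trajectories by exponential waiting times (Gillespie step)."""
    t, free, arrivals = 0.0, n_ions, []
    while free > 0 and t < t_max:
        total_rate = free * rate_per_ion          # rates of independent clocks add
        t += rng.exponential(1.0 / total_rate)    # time to the next binding event
        free -= 1
        arrivals.append(t)
    return arrivals

times = gillespie_arrivals()
print(f"median arrival time: {np.median(times):.3f}")
```

The coarse-grained step is why the method is fast: instead of resolving each Brownian path until it finds a small window, only the (rare) binding events themselves are sampled.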
Performance optimization of Qbox and WEST on Intel Knights Landing
NASA Astrophysics Data System (ADS)
Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois
We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane-wave density functional theory (DFT) and WEST is a post-DFT code for excited-state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership-class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road-map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures and the prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.
Large scale land acquisitions and REDD+: a synthesis of conflicts and opportunities
NASA Astrophysics Data System (ADS)
Carter, Sarah; Manceur, Ameur M.; Seppelt, Ralf; Hermans-Neumann, Kathleen; Herold, Martin; Verchot, Lou
2017-03-01
Large scale land acquisitions (LSLA) and Reducing Emissions from Deforestation and forest Degradation (REDD+) are both land-based phenomena which, when occurring in the same area, can compete with each other for land. A quantitative analysis of country characteristics revealed that land available for agriculture, accessibility, and political stability are key explanatory factors for a country being targeted for LSLA. Surprisingly, LSLA occur in countries with lower accessibility. Countries with good land availability, poor accessibility and political stability may become future targets if they do not already host LSLA. Countries with high levels of agriculture-driven deforestation and LSLA should develop interventions which reduce forest loss driven either directly or indirectly by LSLA as part of their REDD+ strategies. Both host-country and investor-side policies have been identified which could be used more widely to reduce conflicts between LSLA and REDD+. Findings from this research highlight the need for, and can inform the development of, national and international policies on land acquisitions, including green acquisitions such as REDD+. Land management must be considered with all its objectives—including food security, biodiversity conservation, and climate change mitigation—in a coherent strategy which engages relevant stakeholders. This is not currently occurring and might be a key ingredient to achieving the targets under Sustainable Development Goals 2, 15 and 16 (related to food security and sustainable agriculture and the protection of forests), among others.
Ye, Yong; Deng, Jiahao; Shen, Sanmin; Hou, Zhuo; Liu, Yuting
2016-01-01
A novel method for proximity detection of moving targets (with high dielectric constants) using a large-scale (the size of each sensor is 31 cm × 19 cm) planar capacitive sensor system (PCSS) is proposed. The capacitive variation with distance is derived, and a pair of electrodes in a planar capacitive sensor unit (PCSU) with a spiral shape is found to have better performance in sensitivity distribution homogeneity and dynamic range than three other shapes (comb shape, rectangular shape, and circular shape). A driving excitation circuit with a Clapp oscillator is proposed, and a capacitance measuring circuit with a sensitivity of 0.21 Vp−p/pF is designed. The results of static and dynamic experiments demonstrate that the voltage curves of the static experiments are similar to those of the dynamic experiments; therefore, the static data can be used to simulate the dynamic curves. The dynamic range of proximity detection for three projectiles is up to 60 cm, and the results of the subsequent static experiments show that the PCSU with four neighboring units has the highest sensitivity (the sensitivities of the other units are at least 4% lower); when the attack angle decreases, the intensity of the sensor signal increases. The proposed method leads to the design of a feasible moving-target detector with simple structure and low cost, which can be applied in interception systems. PMID:27196905
A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
2016-04-19
Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposes a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word is considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification is implemented based on the recognition of the five components. Specifically, the proposed SLR framework consists of two major parts. The first part obtains the component-based form of sign gestures and establishes the code table of the target sign gesture set using data from a reference subject. The second part, designed for new users, trains component classifiers using a training set suggested by the reference subject and classifies unknown gestures with a code-matching method. Five subjects participated in this study, and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrate that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
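The code-matching step can be illustrated with a small sketch. The code table and component labels below are invented for illustration; the real framework uses 110 CSL words and trained component classifiers:

```python
# A sign is a 5-tuple of component labels:
# (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "HELLO":  ("flat", "x", "up",  "none", "arc"),
    "THANKS": ("flat", "y", "out", "none", "line"),
    "GOOD":   ("fist", "x", "up",  "cw",   "line"),
}

def classify(predicted_components: tuple) -> str:
    """Pick the vocabulary word whose code agrees with the most
    component-classifier outputs (ties broken arbitrarily)."""
    def n_matches(code):
        return sum(p == c for p, c in zip(predicted_components, code))
    return max(CODE_TABLE, key=lambda w: n_matches(CODE_TABLE[w]))

print(classify(("flat", "x", "up", "none", "line")))  # -> HELLO (4/5 match)
```

Because new words only require new rows in the code table (not retraining the component classifiers), the vocabulary can be extended cheaply, which is the point of the component-based design.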
Combining functional genomics and chemical biology to identify targets of bioactive compounds.
Ho, Cheuk Hei; Piotrowski, Jeff; Dixon, Scott J; Baryshnikova, Anastasia; Costanzo, Michael; Boone, Charles
2011-02-01
Genome sequencing projects have revealed thousands of suspected genes, challenging researchers to develop efficient large-scale functional analysis methodologies. Determining the function of a gene product generally requires a means to alter its function. Genetically tractable model organisms have been widely exploited for the isolation and characterization of activating and inactivating mutations in genes encoding proteins of interest. Chemical genetics represents a complementary approach involving the use of small molecules capable of either inactivating or activating their targets. Saccharomyces cerevisiae has been an important test bed for the development and application of chemical genomic assays aimed at identifying targets and modes of action of known and uncharacterized compounds. Here we review yeast chemical genomic assay strategies for drug target identification. Copyright © 2010 Elsevier Ltd. All rights reserved.
Optimal chemotaxis in intermittent migration of animal cells
NASA Astrophysics Data System (ADS)
Romanczuk, P.; Salbreux, G.
2015-04-01
Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases, and performing persistent migration during run phases. We show that the chaser average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
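A minimal 2-D simulation of this run-and-tumble chasing strategy (a sketch of the model class, not the paper's exact equations; the speed, noise and geometry values are arbitrary) shows how the catching time can be probed as a function of the run duration:

```python
import numpy as np

rng = np.random.default_rng(0)

def catching_time(run_time, tumble_time=0.1, speed=1.0, noise=0.5,
                  target=np.array([10.0, 0.0]), capture_radius=0.5,
                  dt=0.01, t_max=500.0):
    """Time for a run-and-tumble chaser to reach the target (inf if never)."""
    pos, t = np.zeros(2), 0.0
    while t < t_max:
        # tumble: reorient toward the (noisily sensed) target direction
        aim = target - pos
        theta = np.arctan2(aim[1], aim[0]) + rng.normal(0.0, noise)
        t += tumble_time
        # run: persistent straight motion for ~run_time
        for _ in range(int(run_time / dt)):
            pos += speed * dt * np.array([np.cos(theta), np.sin(theta)])
            t += dt
            if np.linalg.norm(target - pos) < capture_radius:
                return t
    return np.inf

# Scanning the run time hints at the optimum discussed in the abstract:
# very short runs waste time tumbling, very long runs let noise accumulate.
for tau in (0.2, 1.0, 5.0):
    print(f"run_time={tau}: catching time = {catching_time(tau):.1f}")
```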
Asteroid collisions: Target size effects and resultant velocity distributions
NASA Technical Reports Server (NTRS)
Ryan, Eileen V.
1993-01-01
To study the dynamic fragmentation of rock to simulate asteroid collisions, we use a 2-D, continuum damage numerical hydrocode which models two-body impacts. This hydrocode monitors stress wave propagation and interaction within the target body, and includes a physical model for the formation and growth of cracks in rock. With this algorithm we have successfully reproduced fragment size distributions and mean ejecta speeds from laboratory impact experiments using basalt, and weak and strong mortar as target materials. Using the hydrocode, we have determined that the energy needed to fracture a body has a much stronger dependence on target size than predicted from most scaling theories. In addition, velocity distributions obtained indicate that mean ejecta speeds resulting from large-body collisions do not exceed escape velocities.
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. One must then take this possibility into account to compute the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that the information in a fully-connected system is totally distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information dissymmetry in the system. The assignment algorithm uses local knowledge of this dissymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage with decentralized assignment control even though the system is not fully connected.
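The greedy assignment step can be sketched as follows (the gain matrix is invented; in the paper the gains would come from each sensor's local, possibly asymmetric information set): sensors are committed in order of the best remaining sensor-target gain, and several sensors may end up fused on the same target:

```python
def greedy_assign(gain):
    """gain[s][t]: expected tracking value of sensor s observing target t.
    Each sensor gets exactly one target; targets may share sensors,
    reflecting the data-fusion collaboration described above."""
    pairs = sorted(((g, s, t) for s, row in enumerate(gain)
                              for t, g in enumerate(row)), reverse=True)
    assignment = {}
    for g, s, t in pairs:
        if s not in assignment:      # commit each sensor at its best remaining pair
            assignment[s] = t
    return assignment

gain = [[0.9, 0.2, 0.4],
        [0.3, 0.8, 0.5],
        [0.6, 0.7, 0.1]]
print(greedy_assign(gain))  # -> {0: 0, 1: 1, 2: 1}
```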
Batch Immunostaining for Large-Scale Protein Detection in the Whole Monkey Brain
Zangenehpour, Shahin; Burke, Mark W.; Chaudhuri, Avi; Ptito, Maurice
2009-01-01
Immunohistochemistry (IHC) is one of the most widely used laboratory techniques for the detection of target proteins in situ. Questions concerning the expression pattern of a target protein across the entire brain are relatively easy to answer when using IHC in small brains, such as those of rodents. However, answering the same questions in large and convoluted brains, such as those of primates, presents a number of challenges. Here we present a systematic approach for immunodetection of target proteins in an adult monkey brain. This approach relies on the tissue embedding and sectioning methodology of NeuroScience Associates (NSA) as well as tools developed specifically for batch-staining of free-floating sections. It results in uniform staining of a set of sections which, at a particular interval, represents the entire brain. The resulting stained sections can be subjected to a wide variety of analytical procedures in order to measure, for example, protein levels or the population of neurons expressing a certain protein. PMID:19636291
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, an effort further hampered by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses appropriate for climate assessment at regional scales, a regional spectral model was run using a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of the improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh
2012-10-10
A Bayesian soft classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classifications is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed by Bayesian decision theory. The soft classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests of the method is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
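The decision step can be sketched compactly: model the class-conditional distributions of TFA correlations with kernel density estimates and apply Bayes' rule. The training correlations below are synthetic placeholders, not fire-debris data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Correlations between target and predicted factors, collected (in the real
# method) from training burns where the ASTM class is known to be present/absent.
corr_present = rng.beta(8, 2, 300)
corr_absent  = rng.beta(3, 5, 300)

kde_present = gaussian_kde(corr_present)   # kernel model of p(r | present)
kde_absent  = gaussian_kde(corr_absent)    # kernel model of p(r | absent)

def posterior_present(r, prior=0.5):
    """P(class present | observed TFA correlation r) by Bayes' rule."""
    lp = kde_present(r)[0] * prior
    la = kde_absent(r)[0] * (1.0 - prior)
    return lp / (lp + la)

print(f"P(present | r=0.85) = {posterior_present(0.85):.2f}")
```

The "soft" aspect is that the output is a probability rather than a hard class label, which is what lets the analyst weigh evidence even when pyrolysis background is present.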
Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility
Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk
2013-01-01
Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904
NASA Astrophysics Data System (ADS)
Wu, Yuchi; Dong, Kegong; Yan, Yonghong; Zhu, Bin; Zhang, Tiankui; Chen, Jia; Yu, Minghai; Tan, Fang; Wang, Shaoyi; Han, Dan; Lu, Feng; Gu, Yuqiu
2017-06-01
An experiment on pair production by a high-intensity laser irradiating thick solid targets is presented. The experiment used the picosecond beam of the XingGuangIII laser facility, with intensities up to several 10^19 W/cm^2, pulse durations of about 0.8 ps and laser energies around 120 J. Pairs were generated from 1 mm-thick tantalum disk targets with diameters ranging from 1 mm to 10 mm. Energy spectra of hot electrons from the target rear surface follow a Maxwellian distribution and obey a scaling of ~(Iλ^2)^0.5. A large quantity of positrons was observed in the target rear normal direction, with a yield of up to 2.8 × 10^9 e+/sr. Owing to the target rear surface sheath field, the positrons behave as a quasi-monoenergetic beam with a peak energy of several MeV. Our experiment shows that the peak energy of the positron beam is inversely proportional to the target diameter.
Plastoglobules: a new address for targeting recombinant proteins in the chloroplast
Vidi, Pierre-Alexandre; Kessler, Felix; Bréhélin, Claire
2007-01-01
Background The potential of transgenic plants for cost-effective production of pharmaceutical molecules is now becoming apparent. Plants have the advantage over established fermentation systems (bacterial, yeast or animal cell cultures) of circumventing the risk of pathogen contamination, being amenable to large-scale production, and requiring only established farming procedures. Chloroplasts have proven a useful cellular compartment for protein accumulation owing to their large size and number, as well as the possibility of organellar transformation. They therefore represent the targeting destination of choice for recombinant proteins in leaf crops such as tobacco. Extraction and purification of recombinant proteins from leaf material contribute to a large extent to the production costs. Developing new strategies facilitating these processes is therefore necessary. Results Here, we evaluated plastoglobule lipoprotein particles as a new subchloroplastic destination for recombinant proteins. The yellow fluorescent protein as a trackable cargo was targeted to plastoglobules when fused to plastoglobulin 34 (PGL34) as the carrier. Similar to adipocyte differentiation related protein (ADRP) in animal cells, most of the protein sequence of PGL34 was necessary for targeting to lipid bodies. The recombinant protein was efficiently enriched in plastoglobules isolated by simple flotation centrifugation. The viability of plants overproducing the recombinant protein was not affected, indicating that plastoglobule targeting did not significantly impair photosynthesis or sugar metabolism. Conclusion Our data identify plastoglobules as a new targeting destination for recombinant protein in leaf crops. The wide-spread presence of plastoglobules and plastoglobulins in crop species promises applications comparable to those of transgenic oilbody-oleosin technology in molecular farming. PMID:17214877
On the utility of antiprotons as drivers for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Perkins, L. John; Orth, Charles D.; Tabak, Max
2004-10-01
In contrast to the large mass, complexity and recirculating power of conventional drivers for inertial confinement fusion (ICF), antiproton annihilation offers a specific energy of 90 MJ µg^-1 and thus a unique form of energy packaging and delivery. In principle, antiproton drivers could provide a profound reduction in system mass for advanced space propulsion by ICF. We examine the physics underlying the use of antiprotons (p̄) to drive various classes of high-yield ICF targets by the methods of volumetric ignition, hotspot ignition and fast ignition. The useable fraction of annihilation deposition energy is determined for both p̄-driven ablative compression and p̄-driven fast ignition, in association with zero- and one-dimensional target burn models. Thereby, we deduce scaling laws for the number of injected antiprotons required per capsule, together with timing and focal spot requirements. The kinetic energy of the injected antiproton beam required to penetrate to the desired annihilation point is always small relative to the deposited annihilation energy. We show that heavy metal seeding of the fuel and/or ablator is required to optimize local deposition of annihilation energy and determine that a minimum of ~3 × 10^15 injected antiprotons will be required to achieve high yield (several hundred megajoules) in any target configuration. Target gains, i.e. fusion yields divided by the available p-p̄ annihilation energy from the injected antiprotons (1.88 GeV per p̄), range from ~3 for volumetric ignition targets to ~600 for fast ignition targets. Antiproton-driven ICF is a speculative concept, and the handling of antiprotons and their required injection precision, temporally and spatially, will present significant technical challenges. The storage and manipulation of low-energy antiprotons, particularly in the form of antihydrogen, is a science in its infancy, and a large scale-up of antiproton production over present supply methods would be required to embark on a serious R&D programme for this application.
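The quoted figures are mutually consistent, which a few lines of arithmetic verify: the minimum antiproton load sets the driver energy budget, and the gains then imply the fusion yields the abstract names.

```python
GEV_TO_J = 1.602e-10          # 1 GeV in joules
E_PER_PBAR_GEV = 1.88         # available p-pbar annihilation energy per antiproton
N_PBAR = 3e15                 # minimum injected antiprotons quoted above

driver_energy_j = N_PBAR * E_PER_PBAR_GEV * GEV_TO_J
print(f"annihilation energy budget: {driver_energy_j / 1e6:.2f} MJ")   # ~0.9 MJ

# Target gain = fusion yield / driver energy; quoted gains run ~3 to ~600.
for gain in (3, 600):
    print(f"gain {gain:>3}: yield ~ {gain * driver_energy_j / 1e6:.0f} MJ")
# gain 600 gives ~540 MJ, i.e. the 'several hundred megajoules' of the abstract
```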
Willett, Francis R; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Pandarinath, Chethan; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu
2017-04-01
Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: MT = a + b log2(D/R + 1) (where MT is movement time, D is target distance, R is target radius, and a and b are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. We found that movement times were better described by an equation in which distance and target radius contribute separate terms, capturing how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
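A minimal sketch of the comparison the study describes: fit the Shannon form of Fitts' law, MT = a + b log2(D/R + 1), to movement-time data and inspect the residuals. The trial data below are hypothetical, standing in for the cursor-task measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def fitts_law(dr_ratio, a, b):
    """Shannon formulation: movement time depends only on D/R."""
    return a + b * np.log2(dr_ratio + 1.0)

# Hypothetical (D, R, MT) trials; real iBCI data would come from the cursor task.
D = np.array([100, 200, 400, 100, 200, 400], dtype=float)   # target distances
R = np.array([20, 20, 20, 5, 5, 5], dtype=float)            # target radii
MT = np.array([0.9, 1.2, 1.5, 2.9, 3.1, 3.4])               # movement times (s)

(a, b), _ = curve_fit(fitts_law, D / R, MT)
residuals = MT - fitts_law(D / R, a, b)
print(f"a = {a:.2f} s, b = {b:.2f} s/bit, RMS error = {np.sqrt(np.mean(residuals**2)):.2f} s")
# Systematically large residuals on small-R trials, regardless of D/R, would
# indicate the radius-dominated behaviour the study reports.
```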
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, staring imaging by producing spatiotemporally independent signals with coded apertures. However, two problems remain in three-dimensional (3D) TCAI. First, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, leading to unsatisfactory efficiency. Second, it is difficult to resolve the target under a low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and a convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging of low-SNR targets. For the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we derive and build a new TCAI model based on MF. Furthermore, a CNN is designed to teach the MF-TCAI model to reconstruct low-SNR targets more accurately. The experimental results demonstrate that MF-TCAI achieves impressive imaging ability and efficiency under low SNR. Moreover, MF-TCAI has learned to better resolve low-SNR 3D targets with the help of the CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with the CNN. Therefore, TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.
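The matched-filtering step itself is standard: correlate the received echo against the known transmitted waveform to concentrate the target's energy into a peak and lift it above the noise. A self-contained sketch with an invented frequency-hopping pulse (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e4                                   # sample rate (arbitrary units)
t = np.arange(200) / fs

# Hypothetical frequency-hopping pulse: four hops of 50 samples each.
hops = [800.0, 1500.0, 600.0, 1200.0]
template = np.concatenate([np.sin(2 * np.pi * f * t[:50]) for f in hops])

# Echo: attenuated template delayed by 300 samples, buried in strong noise (low SNR).
echo = np.zeros(1000)
echo[300:300 + template.size] = 0.5 * template
echo += rng.normal(scale=1.0, size=echo.size)

# Matched filter = correlation of the echo with the known template.
mf_output = np.correlate(echo, template, mode="valid")
print("estimated delay:", np.argmax(np.abs(mf_output)), "samples (true: 300)")
```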
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2016-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta and spatially fifth-order accurate WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
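Execution-time-based load balancing of AMR blocks can be pictured with a greedy longest-processing-time heuristic: repeatedly hand the most expensive remaining block to the least-loaded rank. This is a generic sketch of the idea, not the scheme actually used in the paper:

```python
import heapq

def balance_blocks(block_costs, n_ranks):
    """Greedy LPT: assign the most expensive remaining block to the
    least-loaded rank. block_costs maps block id -> measured time."""
    loads = [(0.0, rank, []) for rank in range(n_ranks)]
    heapq.heapify(loads)
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, rank, blocks = heapq.heappop(loads)
        blocks.append(block)
        heapq.heappush(loads, (load + cost, rank, blocks))
    return sorted(loads, key=lambda item: item[1])

# Hypothetical measured costs: blocks containing immersed boundaries cost more.
costs = {"b0": 1.0, "b1": 4.5, "b2": 1.1, "b3": 3.9, "b4": 0.9, "b5": 4.2}
for load, rank, blocks in balance_blocks(costs, 3):
    print(f"rank {rank}: load {load:.1f}, blocks {blocks}")
```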
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2015-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta and a spatially fifth-order accurate WENO-5Z scheme were used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
Very large scale monoclonal antibody purification: the case for conventional unit operations.
Kelley, Brian
2007-01-01
Technology development initiatives targeted for monoclonal antibody purification may be motivated by manufacturing limitations and are often aimed at solving current and future process bottlenecks. A subject under debate in many biotechnology companies is whether conventional unit operations such as chromatography will eventually become limiting for the production of recombinant protein therapeutics. An evaluation of the potential limitations of process chromatography and filtration using today's commercially available resins and membranes was conducted for a conceptual process scaled to produce 10 tons of monoclonal antibody per year from a single manufacturing plant, a scale representing one of the world's largest single-plant capacities for cGMP protein production. The process employs a simple, efficient purification train using only two chromatographic and two ultrafiltration steps, modeled after a platform antibody purification train that has generated 10 kg batches in clinical production. Based on analyses of cost of goods and the production capacity of this very large scale purification process, it is unlikely that non-conventional downstream unit operations would be needed to replace conventional chromatographic and filtration separation steps, at least for recombinant antibodies.
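The scale of such a plant is easy to put in perspective with back-of-the-envelope numbers. All figures below are illustrative assumptions, not values from the paper:

```python
# Illustrative assumptions (not from the paper):
annual_demand_kg = 10_000        # 10 tons of antibody per year
bioreactor_volume_l = 15_000     # production bioreactor working volume
titer_g_per_l = 5.0              # harvest titer
downstream_yield = 0.70          # overall recovery across the purification train

kg_per_batch = bioreactor_volume_l * titer_g_per_l * downstream_yield / 1000.0
batches_per_year = annual_demand_kg / kg_per_batch
print(f"{kg_per_batch:.1f} kg purified antibody per batch")
print(f"~{batches_per_year:.0f} batches per year to reach 10 tons")
```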
Harnessing the genome for characterization of GPCRs in cancer pathogenesis
Feigin, Michael E.
2014-01-01
G-protein coupled receptors (GPCRs) mediate numerous physiological processes and represent the targets for a vast array of therapeutics for diseases ranging from depression to hypertension to reflux. Despite the recognition that GPCRs can act as oncogenes and tumor suppressors by regulating oncogenic signaling networks, few drugs targeting GPCRs are utilized in cancer therapy. Recent large-scale genome-wide analyses of multiple human tumors have uncovered novel GPCRs altered in cancer. However, the work of determining which GPCRs from these lists are drivers of tumorigenesis, and hence valid therapeutic targets, remains a formidable challenge. In this review I will highlight recent studies providing evidence that GPCRs are relevant targets for cancer therapy through their effects on known cancer signaling pathways, tumor progression, invasion and metastasis, and the microenvironment. Furthermore, I will explore how genomic analysis is beginning to shine a light on GPCRs as therapeutic targets in the age of personalized medicine. PMID:23927072
3D range-gated super-resolution imaging based on stereo matching for moving platforms and targets
NASA Astrophysics Data System (ADS)
Sun, Liang; Wang, Xinwei; Zhou, Yan
2017-11-01
3D range-gated super-resolution imaging is a novel 3D reconstruction technique for target detection and recognition with good real-time performance. However, for moving targets or platforms such as airborne, shipborne, remotely operated, and autonomous vehicles, 3D reconstruction suffers large errors or fails entirely. To overcome this drawback, we propose a stereo-matching method for the 3D range-gated super-resolution reconstruction algorithm. In the experiment, the target is a Mario doll with a height of 38 cm placed at a distance of 34 m, and we obtain two successive frame images of the doll. To confirm that our method is effective, we transform the original images with translation, rotation, scaling, and perspective changes, respectively. The experimental results show that our method achieves good 3D reconstruction for moving targets or platforms.
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographic profiles (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential of smartphone apps for capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
Mina, Michael J
2017-06-01
Interactions between pathogens and commensal microbes are major contributors to health and disease. Infectious diseases, however, are most often considered independent, viewed within a one-host one-pathogen paradigm and, by extension, the interventions used to treat and prevent them are measured and evaluated within this same paradigm. Vaccines, especially live vaccines, by stimulating immune responses or directly interacting with other microbes can alter the environment in which they act, with effects that span across pathogen species. Live attenuated influenza vaccines, for example, while safe, increase upper respiratory tract bacterial carriage density of important human commensal pathogens like Streptococcus pneumoniae and Staphylococcus aureus. Further, by altering the ecological niche and dynamics of phylogenetically distinct microbes within the host, vaccines may unintentionally affect transmission of non-vaccine-targeted pathogens. Thus, vaccine effects may span across species and across scales, from the individual to the population level. In keeping with traditional vaccine herd-effects that indirectly protect even unvaccinated individuals by reducing population prevalence of vaccine-targeted pathogens, we call these cross-species, cross-scale effects "generalized herd-effects". As opposed to traditional herd-effects, "generalized" relaxes the assumption that the effect occurs at the level of the vaccine-target pathogen, and "herd effect" implies, as usual, that the effects indirectly impact the population at large, including unvaccinated bystanders. Unlike traditional herd-effects that decrease population prevalence of the vaccine-target, generalized herd-effects may decrease or increase prevalence and disease by the off-target pathogen. LAIV, for example, by increasing pneumococcal density in the upper respiratory tract of vaccine recipients, especially children, may increase pneumococcal transmission and prevalence, leading to excess pneumococcal invasive disease in the population, especially among the elderly and others most susceptible to pneumococcal disease. However, these effects may also be beneficial, for example the large reductions in all-cause mortality noted following measles vaccines. Here we discuss evidence for these novel vaccine effects and suggest that vaccine monitoring and evaluation programs should consider generalized herd-effects to appreciate the full impacts of vaccines, beneficial or detrimental, across species and scales, effects that are inevitably hiding in plain sight, affecting human health and disease. © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
49 CFR Appendix A to Part 223 - Certification of Glazing Materials
Code of Federal Regulations, 2011 CFR
2011-10-01
... material to be tested (Target Material) shall be a full scale sample of the largest dimension intended to... weight impacts at a minimum of 960 feet per second velocity. (ii) Large Object Impact in which a cinder block of 24 lbs minimum weight with dimensions of 8 inches by 8 inches by 16 inches nominally impacts at...
ERIC Educational Resources Information Center
Sanders, Matthew R.; Ralph, Alan; Sofronoff, Kate; Gardiner, Paul; Thompson, Rachel; Dwyer, Sarah; Bidwell, Kerry
2008-01-01
A large-scale population trial using the Triple P-Positive Parenting Program (TPS) was evaluated. The target population was all parents of 4- to 7-year-old children residing in ten geographical catchment areas in Brisbane (intervention communities) and ten sociodemographically matched catchment areas from Sydney (5) and Melbourne (5), care as…
ERIC Educational Resources Information Center
Hadfield, Mark; Jopling, Michael
2014-01-01
This paper discusses the development of a model targeted at non-specialist practitioners implementing innovations that involve information and communication technology (ICT) in education. It is based on data from a national evaluation of ICT-based projects in initial teacher education, which included a large-scale questionnaire survey and six…
ERIC Educational Resources Information Center
Ong, Jun-Yang; Chan, Shang-Ce; Hoang, Truong-Giang
2018-01-01
A Sonogashira experiment was transformed into a problem-based learning platform for third-year undergraduate students. Given a target that could be synthesized in a single step, students worked in groups to investigate which method was the best for large-scale production. Through this practical scenario, students learn to conduct a literature…
ERIC Educational Resources Information Center
Ettema, James S.
A study was conducted to determine who, within a target user group, used and benefitted from a videotex system. The subjects were large-scale farmers who agreed to have videotex terminals installed in their homes to receive a wide range of informational and commercial transaction services provided by a bank holding company. At the end of an…
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large across the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
Brentuximab vedotin: clinical updates and practical guidance
Yi, Jun Ho; Kim, Seok Jin
2017-01-01
Brentuximab vedotin (BV), a potent antibody-drug conjugate, targets the CD30 antigen. Owing to the remarkable efficacy shown in CD30-positive lymphomas, such as Hodgkin's lymphoma and systemic anaplastic large-cell lymphoma, BV was granted accelerated approval in 2011 by the US Food and Drug Administration. Thereafter, many large-scale trials in various situations have been performed, which led to extensions of the original indication. The aim of this review was to describe the latest updates on clinical trials of BV and the in-practice guidance for the use of BV. PMID:29333400
Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.
1996-12-17
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei
1996-01-01
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
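The "grow and interconnect High Amplitude Event regions" step described in both records above is, at its core, connected-component extraction above an amplitude threshold. A minimal 3D sketch using scipy, offered as an illustration rather than the patented workflow:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
amplitude = rng.normal(size=(40, 40, 40))          # stand-in for a 3-D seismic cube
amplitude[10:15, 5:30, 20:25] += 4.0               # synthetic high-amplitude anomaly

# Threshold, then grow connected regions (26-connectivity in 3-D).
hae_mask = amplitude > 3.0
labels, n_regions = ndimage.label(hae_mask, structure=np.ones((3, 3, 3)))
sizes = ndimage.sum(hae_mask, labels, index=range(1, n_regions + 1))

print(f"{n_regions} HAE regions; largest spans {int(sizes.max())} voxels")
```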
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
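The linearization the authors exploit can be seen in a toy setting. For an S-system pool with one production and one degradation term, dXi/dt = ai ∏ Xj^gij - bi ∏ Xj^hij, the Jacobian at steady state satisfies Jij = Vi (gij - hij) / Xj*, so net kinetic orders follow directly from the Jacobian and steady-state fluxes without iterative curve fitting. A sketch under that simplifying single-term assumption, with invented numbers:

```python
import numpy as np

# Hypothetical steady state of a 3-metabolite pathway:
X_star = np.array([2.0, 1.0, 4.0])     # steady-state concentrations
V = np.array([3.0, 3.0, 3.0])          # steady-state fluxes (production = degradation)
J = np.array([[-1.5, 0.3, 0.0],        # Jacobian of dX/dt at the steady state
              [1.5, -3.0, 0.0],
              [0.0, 3.0, -0.75]])

# Linearizing dXi/dt = a_i*prod(Xj**g_ij) - b_i*prod(Xj**h_ij) at steady state
# gives J_ij = V_i * (g_ij - h_ij) / X*_j, hence:
net_kinetic_orders = J * X_star[np.newaxis, :] / V[:, np.newaxis]
print(net_kinetic_orders)   # g_ij - h_ij for every pool/metabolite pair
```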
NASA Astrophysics Data System (ADS)
Yu, Chenghai; Ma, Ning; Wang, Kai; Du, Juan; Van den Braembussche, R. A.; Lin, Feng
2014-04-01
A similitude method to model the tip clearance flow in a high-speed compressor with a low-speed model is presented in this paper. The first step of this method is the derivation of similarity criteria for tip clearance flow, on the basis of an inviscid model of tip clearance flow. The aerodynamic parameters needed for the model design are then obtained from a numerical simulation of the target high-speed compressor rotor. According to the aerodynamic and geometric parameters of the target compressor rotor, a large-scale low-speed rotor blade is designed with an inverse blade design program. In order to validate the similitude method, the features of tip clearance flow in the low-speed model compressor are compared with those in the high-speed compressor at both the design point and a small flow rate point. It is found that not only the trajectory of the tip leakage vortex but also the interface between the tip leakage flow and the incoming main flow in the high-speed compressor match well with those of its low-speed model. These results validate the effectiveness of the similitude method for tip clearance flow proposed in this paper.
Kumar Kakkar, Ashish; Dahiya, Neha
2014-06-01
Large pharmaceutical companies have traditionally focused on the development of blockbuster drugs that target disease states with large patient populations. However, with large-scale patent expirations and competition from generics and biosimilars, anemic pipelines, escalating clinical trial costs, and global health-care reform, the blockbuster model has become less viable. Orphan drug initiatives and the incentives accompanying these have fostered renewed research efforts in the area of rare diseases and have led to the approval of more than 400 orphan products. Despite targeting much smaller patient populations, the revenue-generating potential of orphan drugs has been shown to be huge, with a greater return on investment than non-orphan drugs. The success of these "niche buster" therapeutics has led to a renewed interest from "Big Pharma" in the rare disease landscape. This article reviews the key drivers for orphan drug research and development, their profitability, and issues surrounding the emergence of large pharmaceutical firms into the orphan drug space. © 2014 Wiley Periodicals, Inc.
Radon background in liquid xenon detectors
NASA Astrophysics Data System (ADS)
Rupp, N.
2018-02-01
The radioactive daughter isotopes of 222Rn are among the highest-risk contaminants in liquid xenon detectors aiming for a small signal rate. The noble gas is permanently emanated from the detector surfaces and mixed with the xenon target. Because of its long half-life, 222Rn is homogeneously distributed in the target, and its subsequent decays can mimic signal events. Since no shielding is possible, this background source can be the dominant one in future large-scale experiments. This article provides an overview of strategies used to mitigate this source of background by means of material selection and on-line radon removal techniques.
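In secular equilibrium the 222Rn activity in the target simply equals the total surface emanation rate (decay balances supply), and the specific activity scales inversely with target mass. A sketch with hypothetical numbers:

```python
import math

T_HALF_RN222_S = 3.82 * 24 * 3600      # 222Rn half-life: 3.82 days
decay_const = math.log(2) / T_HALF_RN222_S

emanation_rate = 10.0                   # atoms emanated per second (hypothetical)
target_mass_kg = 1000.0                 # liquid xenon target mass (hypothetical)

# Equilibrium: decay rate balances emanation, so activity = emanation rate.
n_equilibrium = emanation_rate / decay_const
activity_bq = decay_const * n_equilibrium            # equals emanation_rate
print(f"equilibrium 222Rn atoms in target: {n_equilibrium:.2e}")
print(f"specific activity: {activity_bq / target_mass_kg * 1e6:.0f} uBq/kg")
```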
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques onto an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both the CPU and accelerator/co-processor parts of a system for large-scale ocean modeling. In this work, a lower-resolution basin-scale ocean model is locally coupled to multiple, "embedded", limited-area higher-resolution sub-models. The higher-resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower-resolution basin-scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two-dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi-like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonka, Vishwajit L; Nimgaonkar, Vishwajit L
2014-01-01
Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or are not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.
Video-rate volumetric neuronal imaging using 3D targeted illumination.
Xiao, Sheng; Tseng, Hua-An; Gritton, Howard; Han, Xue; Mertz, Jerome
2018-05-21
Fast volumetric microscopy is required to monitor large-scale neural ensembles with high spatio-temporal resolution. Widefield fluorescence microscopy can image large 2D fields of view at high resolution and speed while remaining simple and cost-effective. A focal sweep add-on can further extend the capacity of widefield microscopy by enabling extended-depth-of-field (EDOF) imaging, but suffers from an inability to reject out-of-focus fluorescence background. Here, by using a digital micromirror device to target only in-focus sample features, we perform EDOF imaging with greatly enhanced contrast and signal-to-noise ratio, while reducing the light dosage delivered to the sample. Image quality is further improved by the application of a robust deconvolution algorithm. We demonstrate the advantages of our technique for in vivo calcium imaging in the mouse brain.
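The targeting step can be pictured as: estimate where the sample is in focus (for example via local contrast) and illuminate only those pixels. The toy mask computation below is an assumption about how such targeting might work, not the authors' actual pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    """Local contrast metric: variance in a size x size neighbourhood."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

rng = np.random.default_rng(0)
# Hypothetical focal stack: 3 depth slices, 64x64 pixels; slice 1 holds a sharp feature.
stack = rng.normal(0.0, 0.01, size=(3, 64, 64))
stack[1, 20:40, 20:40] += np.sign(rng.normal(size=(20, 20)))   # high-contrast patch

contrast = np.stack([local_variance(s) for s in stack])
in_focus_slice = contrast.argmax(axis=0)          # which depth is sharpest per pixel
dmd_mask = contrast.max(axis=0) > 0.05            # illuminate only in-focus features
print("illuminated pixels:", int(dmd_mask.sum()))
print("of those, sharpest in slice 1:", int((in_focus_slice == 1)[dmd_mask].sum()))
```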
Budde, Kristin S.; Barron, Daniel S.; Fox, Peter T.
2015-01-01
Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as “neural signatures of stuttering” (Brown 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: 1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and 2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820
International Halley Watch: Discipline specialists for large scale phenomena
NASA Technical Reports Server (NTRS)
Brandt, J. C.; Niedner, M. B., Jr.
1986-01-01
The largest scale structures of comets, their tails, are extremely interesting from a physical point of view, and some of their properties are among the most spectacular displayed by comets. Because the tail(s) is an important component of a comet, the Large-Scale Phenomena (L-SP) Discipline was created as one of eight different observational methods in which Halley data would be encouraged and collected from all around the world under the auspices of the International Halley Watch (IHW). The L-SP Discipline Specialist (DS) Team resides at NASA/Goddard Space Flight Center under the leadership of John C. Brandt, Malcolm B. Niedner, and their team of image-processing and computer specialists; Jurgen Rahe at NASA Headquarters completes the formal DS science staff. The team has adopted the study of disconnection events (DEs) as its principal science target, and it is because of the rapid changes which occur in connection with DEs that such extensive global coverage was deemed necessary to assemble a complete record.
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search an image database for images that contain the target object. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for local search within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted-file structure for box indexing. The inverted file, which stores each bit-vector together with the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
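A minimal sketch of the indexing idea: quantize each descriptor into a bit-vector, store (bit-vector, box ID) pairs in an inverted file keyed by a coarse code, and rank candidates by Hamming distance. The thresholding scheme and bucket key below are placeholders, not the paper's actual quantizer:

```python
import numpy as np
from collections import defaultdict

def binarize(descriptor, thresholds):
    """Quantize a SIFT-like descriptor into a bit-vector (placeholder scheme:
    one bit per dimension, set when the value exceeds its threshold)."""
    return np.packbits(descriptor > thresholds).tobytes()

def hamming(a, b):
    return bin(int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).count("1")

rng = np.random.default_rng(0)
thresholds = np.full(128, 0.5)

# Build an inverted file: coarse key (first byte) -> [(bit-vector, box_id), ...]
inverted_file = defaultdict(list)
database = {box_id: rng.random(128) for box_id in range(1000)}
for box_id, desc in database.items():
    code = binarize(desc, thresholds)
    inverted_file[code[0]].append((code, box_id))

# Query: probe the coarse bucket (full scan as fallback), rank by Hamming distance.
query = database[42] + rng.normal(0, 0.01, 128)       # noisy copy of box 42
qcode = binarize(query, thresholds)
candidates = inverted_file.get(qcode[0]) or [p for b in inverted_file.values() for p in b]
best = min(candidates, key=lambda item: hamming(qcode, item[0]))
print("best match: box", best[1])
```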
NASA Astrophysics Data System (ADS)
Harris, T. H. S.; Davias, M. E.
2017-12-01
Several elements of the 786 ka Australasian (AA) tektite imprint bear close scrutiny in order to locate the parent impact site or structure. The unique Carolina bays unit geologic formation is indicated as a large "medial" ejecta blanket from a large cosmic impact during a period containing 786 ka. Coincidence? Kg-scale sub-spherical hollow splash-form AA tektites imply a prolonged atmospheric blow-out-scale momentum current with a core of sub-parallel or divergent flow volume having essentially zero turbulence. This would allow for plasma entrainment and heating of target mass at prolonged low dynamic pressure during outflow, where adiabatic expansion could deliver both semi-solid Muong Nong-type and inviscid melts above the atmosphere for gentle release upon rarefaction in vacuum. Within a large atmospheric blow-out-scale momentum current, target mass becomes entrained at the speed of adiabatic outflow. 10+ km/s ejecta entrainment yields inter-hemispheric emplacement from launch per governing suborbital mechanics, without question. Oblique impact into a thick ice sheet explains reduced excavation volume and shearing disruption in the form of hypersonic steam plasma scouring. Adiabatic expansion would be immediately available to accelerate and further heat proto-tektite target mass. With shock no longer the sole transport engine, kg-scale splash forms and tektite speeds above the post-shock vaporization velocity of quartz are explained by expansion of shocked ice, in agreement with the observed imprint. The 6 Carolina bay shapes or "Davias Archetypes" are reproducible using conic perturbation in Suborbital Analysis, conforming to a formative mechanism of suborbital transport and ballistic emplacement: "Suborbital Obstruction Shadowing" needs only 3 parts in 10,000 of VEL variation around a circular EL-AZ-VEL launch cone, before considering re-entry effects. Transport energy of the Carolina bay sand, calculated using the 3.5 to 4 km/s launch VEL required for its indicated transport, must account for the inefficiency of entrained transport. Roughly 1600 cubic kilometers of Carolina bays sand must have taken 10 to 1000 times more energy to transport than the entire Chicxulub event yield. Imagery by M. E. Davias of Cintos.org: S.E. Nebraska (top) and Bennettsville, South Carolina (bottom).
NASA Astrophysics Data System (ADS)
Kato, E.; Yamagata, Y.
2014-12-01
Bioenergy with Carbon Capture and Storage (BECCS) is a key component of mitigation strategies in future socio-economic scenarios that aim to keep mean global temperature rise below 2°C above pre-industrial, which would require net negative carbon emissions at the end of the 21st century. Because of the additional need for land, developing sustainable low-carbon scenarios requires careful consideration of the land-use implications of deploying large-scale BECCS. We evaluated the feasibility of the large-scale BECCS in RCP2.6, which is a scenario with net negative emissions aiming to keep the 2°C temperature target, with a top-down analysis of required yields and a bottom-up evaluation of BECCS potential using a process-based global crop model. Land-use change carbon emissions related to the land expansion were examined using a global terrestrial biogeochemical cycle model. Our analysis reveals that first-generation bioenergy crops would not meet the required BECCS of the RCP2.6 scenario even with a high fertilizer and irrigation application. Using second-generation bioenergy crops can marginally fulfill the required BECCS only if a technology of full post-process combustion CO2 capture is deployed with a high fertilizer application in the crop production. If such an assumed technological improvement does not occur in the future, more than doubling the area for bioenergy production for BECCS around 2050 assumed in RCP2.6 would be required; however, such scenarios implicitly induce large-scale land-use changes that would cancel half of the assumed CO2 sequestration by BECCS. Otherwise a conflict of land-use with food production is inevitable.
NASA Astrophysics Data System (ADS)
Kato, Etsushi; Yamagata, Yoshiki
2014-09-01
Bioenergy with Carbon Capture and Storage (BECCS) is a key component of mitigation strategies in future socioeconomic scenarios that aim to keep mean global temperature rise below 2°C above preindustrial, which would require net negative carbon emissions at the end of the 21st century. Because of the additional need for land, developing sustainable low-carbon scenarios requires careful consideration of the land-use implications of deploying large-scale BECCS. We evaluated the feasibility of the large-scale BECCS in RCP2.6, which is a scenario with net negative emissions aiming to keep the 2°C temperature target, with a top-down analysis of required yields and a bottom-up evaluation of BECCS potential using a process-based global crop model. Land-use change carbon emissions related to the land expansion were examined using a global terrestrial biogeochemical cycle model. Our analysis reveals that first-generation bioenergy crops would not meet the required BECCS of the RCP2.6 scenario even with a high-fertilizer and irrigation application. Using second-generation bioenergy crops can marginally fulfill the required BECCS only if a technology of full postprocess combustion CO2 capture is deployed with a high-fertilizer application in the crop production. If such an assumed technological improvement does not occur in the future, more than doubling the area for bioenergy production for BECCS around 2050 assumed in RCP2.6 would be required; however, such scenarios implicitly induce large-scale land-use changes that would cancel half of the assumed CO2 sequestration by BECCS. Otherwise, a conflict of land use with food production is inevitable.
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
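The regression at the heart of LRE can be sketched on synthetic data: per-cycle efficiency E_C = F_C/F_{C-1} - 1 is computed from adjacent fluorescence readings and regressed linearly against fluorescence within the central region of the profile, the intercept giving the maximal efficiency. This is a simplified illustration of the idea, not the LRE Analyzer's actual fitting procedure:

```python
import numpy as np

# Synthetic amplification profile following the LRE premise: per-cycle efficiency
# declines linearly with fluorescence, E(F) = Emax * (1 - F/Fmax).
Emax_true, Fmax, F0 = 0.95, 10_000.0, 0.5
F = [F0]
for _ in range(40):
    F.append(F[-1] * (1.0 + Emax_true * (1.0 - F[-1] / Fmax)))
F = np.array(F)

E = F[1:] / F[:-1] - 1.0           # observed cycle efficiencies
x = F[:-1]                         # fluorescence at the start of each cycle
central = (x > 0.05 * Fmax) & (x < 0.95 * Fmax)   # central region of the profile

# Linear regression of efficiency vs fluorescence: intercept recovers Emax.
slope, intercept = np.polyfit(x[central], E[central], 1)
print(f"Emax ~ {intercept:.3f} (true {Emax_true}); Fmax ~ {-intercept / slope:.0f}")
```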
A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification
Rutledge, Robert G.
2011-01-01
Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812
Vercoe, Reuben B.; Chang, James T.; Dy, Ron L.; Taylor, Corinda; Gristwood, Tamzin; Clulow, James S.; Richter, Corinna; Przybilski, Rita; Pitman, Andrew R.; Fineran, Peter C.
2013-01-01
In prokaryotes, clustered regularly interspaced short palindromic repeats (CRISPRs) and their associated (Cas) proteins constitute a defence system against bacteriophages and plasmids. CRISPR/Cas systems acquire short spacer sequences from foreign genetic elements and incorporate these into their CRISPR arrays, generating a memory of past invaders. Defence is provided by short non-coding RNAs that guide Cas proteins to cleave complementary nucleic acids. While most spacers are acquired from phages and plasmids, there are examples of spacers that match genes elsewhere in the host bacterial chromosome. In Pectobacterium atrosepticum the type I-F CRISPR/Cas system has acquired a self-complementary spacer that perfectly matches a protospacer target in a horizontally acquired island (HAI2) involved in plant pathogenicity. Given the paucity of experimental data about CRISPR/Cas–mediated chromosomal targeting, we examined this process by developing a tightly controlled system. Chromosomal targeting was highly toxic via targeting of DNA and resulted in growth inhibition and cellular filamentation. The toxic phenotype was avoided by mutations in the cas operon, the CRISPR repeats, the protospacer target, and protospacer-adjacent motif (PAM) beside the target. Indeed, the natural self-targeting spacer was non-toxic due to a single nucleotide mutation adjacent to the target in the PAM sequence. Furthermore, we show that chromosomal targeting can result in large-scale genomic alterations, including the remodelling or deletion of entire pre-existing pathogenicity islands. These features can be engineered for the targeted deletion of large regions of bacterial chromosomes. In conclusion, in DNA–targeting CRISPR/Cas systems, chromosomal interference is deleterious by causing DNA damage and providing a strong selective pressure for genome alterations, which may have consequences for bacterial evolution and pathogenicity. PMID:23637624
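The PAM dependence described above can be pictured as a sequence-matching rule: a spacer directs cleavage only when the chromosomal protospacer is flanked by the correct motif, and a single PAM mutation abolishes targeting. A toy scan along one strand (the GG dinucleotide used as the type I-F PAM here is our reading of the system, and the sequences are invented):

```python
import re

def find_targets(chromosome, spacer, pam="GG"):
    """Positions where the spacer matches a protospacer immediately flanked
    by the PAM (toy model; real systems also scan the complementary strand)."""
    pattern = re.compile(re.escape(spacer) + pam)
    return [m.start() for m in pattern.finditer(chromosome)]

spacer = "GCACCTT"                                       # invented spacer sequence
print(find_targets("ATG" + spacer + "GGCCTA", spacer))   # PAM intact  -> [3]
print(find_targets("ATG" + spacer + "GACCTA", spacer))   # PAM mutated -> []
```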
Vercoe, Reuben B; Chang, James T; Dy, Ron L; Taylor, Corinda; Gristwood, Tamzin; Clulow, James S; Richter, Corinna; Przybilski, Rita; Pitman, Andrew R; Fineran, Peter C
2013-04-01
In prokaryotes, clustered regularly interspaced short palindromic repeats (CRISPRs) and their associated (Cas) proteins constitute a defence system against bacteriophages and plasmids. CRISPR/Cas systems acquire short spacer sequences from foreign genetic elements and incorporate these into their CRISPR arrays, generating a memory of past invaders. Defence is provided by short non-coding RNAs that guide Cas proteins to cleave complementary nucleic acids. While most spacers are acquired from phages and plasmids, there are examples of spacers that match genes elsewhere in the host bacterial chromosome. In Pectobacterium atrosepticum the type I-F CRISPR/Cas system has acquired a self-complementary spacer that perfectly matches a protospacer target in a horizontally acquired island (HAI2) involved in plant pathogenicity. Given the paucity of experimental data about CRISPR/Cas-mediated chromosomal targeting, we examined this process by developing a tightly controlled system. Chromosomal targeting was highly toxic via targeting of DNA and resulted in growth inhibition and cellular filamentation. The toxic phenotype was avoided by mutations in the cas operon, the CRISPR repeats, the protospacer target, and protospacer-adjacent motif (PAM) beside the target. Indeed, the natural self-targeting spacer was non-toxic due to a single nucleotide mutation adjacent to the target in the PAM sequence. Furthermore, we show that chromosomal targeting can result in large-scale genomic alterations, including the remodelling or deletion of entire pre-existing pathogenicity islands. These features can be engineered for the targeted deletion of large regions of bacterial chromosomes. In conclusion, in DNA-targeting CRISPR/Cas systems, chromosomal interference is deleterious by causing DNA damage and providing a strong selective pressure for genome alterations, which may have consequences for bacterial evolution and pathogenicity.
Scaling the Drosophila Wing: TOR-Dependent Target Gene Access by the Hippo Pathway Transducer Yorkie
Parker, Joseph; Struhl, Gary
2015-01-01
Organ growth is controlled by patterning signals that operate locally (e.g., Wingless/Ints [Wnts], Bone Morphogenetic Proteins [BMPs], and Hedgehogs [Hhs]) and scaled by nutrient-dependent signals that act systemically (e.g., Insulin-like peptides [ILPs] transduced by the Target of Rapamycin [TOR] pathway). How cells integrate these distinct inputs to generate organs of the appropriate size and shape is largely unknown. The transcriptional coactivator Yorkie (Yki, a YES-Associated Protein, or YAP) acts downstream of patterning morphogens and other tissue-intrinsic signals to promote organ growth. Yki activity is regulated primarily by the Warts/Hippo (Wts/Hpo) tumour suppressor pathway, which impedes nuclear access of Yki by a cytoplasmic tethering mechanism. Here, we show that the TOR pathway regulates Yki by a separate and novel mechanism in the Drosophila wing. Instead of controlling Yki nuclear access, TOR signaling governs Yki action after it reaches the nucleus by allowing it to gain access to its target genes. When TOR activity is inhibited, Yki accumulates in the nucleus but is sequestered from its normal growth-promoting target genes—a phenomenon we term “nuclear seclusion.” Hence, we posit that in addition to its well-known role in stimulating cellular metabolism in response to nutrients, TOR also promotes wing growth by liberating Yki from nuclear seclusion, a parallel pathway that we propose contributes to the scaling of wing size with nutrient availability. PMID:26474042
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. As a result, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
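Under an idealized model, with sensors scattered as a spatial Poisson process of density ρ and sensing radius r, the number of sensors covering the target is Poisson with mean λ = ρπr², so the probability that at least k sensors can monitor it is 1 - Σ_{i<k} e^{-λ}λ^i/i!. This is a textbook approximation offered as an illustration, not the paper's derived expression:

```python
from math import exp, factorial, pi

def tracking_prob(density, radius, k):
    """P(at least k sensors within sensing range of the target), assuming
    sensor positions form a spatial Poisson process (idealized model)."""
    lam = density * pi * radius**2          # expected sensors covering the target
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Sweep sensor density for 3-point surveillance with a 10 m sensing radius.
for density in (0.005, 0.01, 0.02):         # sensors per square metre
    print(f"density {density}: P(>=3 sensors) = {tracking_prob(density, 10.0, 3):.3f}")
```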
Targeting PTPRK-RSPO3 colon tumours promotes differentiation and loss of stem-cell function.
Storm, Elaine E; Durinck, Steffen; de Sousa e Melo, Felipe; Tremayne, Jarrod; Kljavin, Noelyn; Tan, Christine; Ye, Xiaofen; Chiu, Cecilia; Pham, Thinh; Hongo, Jo-Anne; Bainbridge, Travis; Firestein, Ron; Blackwood, Elizabeth; Metcalfe, Ciara; Stawiski, Eric W; Yauch, Robert L; Wu, Yan; de Sauvage, Frederic J
2016-01-07
Colorectal cancer remains a major unmet medical need, prompting large-scale genomics efforts in the field to identify molecular drivers for which targeted therapies might be developed. We previously reported the identification of recurrent translocations in R-spondin genes present in a subset of colorectal tumours. Here we show that targeting RSPO3 in PTPRK-RSPO3-fusion-positive human tumour xenografts inhibits tumour growth and promotes differentiation. Notably, genes expressed in the stem-cell compartment of the intestine were among those most sensitive to anti-RSPO3 treatment. This observation, combined with functional assays, suggests that a stem-cell compartment drives PTPRK-RSPO3 colorectal tumour growth and indicates that the therapeutic targeting of stem-cell properties within tumours may be a clinically relevant approach for the treatment of colorectal tumours.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe, and climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target for assessing the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Understanding and Controlling Sialylation in a CHO Fc-Fusion Process
Lewis, Amanda M.; Croughan, William D.; Aranibar, Nelly; Lee, Alison G.; Warrack, Bethanne; Abu-Absi, Nicholas R.; Patel, Rutva; Drew, Barry; Borys, Michael C.; Reily, Michael D.; Li, Zheng Jian
2016-01-01
A Chinese hamster ovary (CHO) bioprocess, where the product is a sialylated Fc-fusion protein, was operated at pilot and manufacturing scale and significant variation of sialylation level was observed. In order to more tightly control glycosylation profiles, we sought to identify the cause of this variability. Untargeted metabolomics and transcriptomics methods were applied to select samples from the large scale runs. Lower sialylation was correlated with elevated mannose levels, a shift in glucose metabolism, and an increased oxidative stress response. Using a 5-L scale model operated with a reduced dissolved oxygen set point, we were able to reproduce the phenotypic profiles observed at manufacturing scale, including lower sialylation, higher lactate and lower ammonia levels. Targeted transcriptomics and metabolomics confirmed that reduced oxygen levels resulted in increased mannose levels, a shift towards glycolysis, and an increased oxidative stress response similar to the manufacturing scale. Finally, we propose a biological mechanism linking large scale operation and sialylation variation. Oxidative stress results from gas transfer limitations at large scale and the presence of oxygen dead zones, inducing upregulation of glycolysis and mannose biosynthesis, and downregulation of hexosamine biosynthesis and acetyl-CoA formation. The lower flux through the hexosamine pathway and reduced intracellular pools of acetyl-CoA led to reduced formation of N-acetylglucosamine and N-acetylneuraminic acid, both key building blocks of N-glycan structures. This study reports for the first time a link between oxidative stress and mammalian protein sialylation. In this study, process, analytical, metabolomic, and transcriptomic data at manufacturing, pilot, and laboratory scales were taken together to develop a systems-level understanding of the process and identify oxygen limitation as the root cause of glycosylation variability. PMID:27310468
CELL5M: A geospatial database of agricultural indicators for Africa South of the Sahara.
Koo, Jawoo; Cox, Cindy M; Bacou, Melanie; Azzarri, Carlo; Guo, Zhe; Wood-Sichra, Ulrike; Gong, Queenie; You, Liangzhi
2016-01-01
Recent progress in large-scale georeferenced data collection is widening opportunities for combining multi-disciplinary datasets from biophysical to socioeconomic domains, advancing our analytical and modeling capacity. Granular spatial datasets provide critical information necessary for decision makers to identify target areas, assess baseline conditions, prioritize investment options, set goals and targets and monitor impacts. However, key challenges in reconciling data across themes, scales and borders restrict our capacity to produce global and regional maps and time series. This paper provides an overview of the structure and coverage of CELL5M, an open-access database of geospatial indicators at 5 arc-minute grid resolution, and introduces a range of analytical applications and use cases. CELL5M covers a wide set of agriculture-relevant domains for all countries in Africa South of the Sahara and supports our understanding of the multi-dimensional spatial variability inherent in farming landscapes throughout the region.
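As an illustration of how a gridded database like CELL5M can be queried, the sketch below averages an indicator over the grid cells falling inside a bounding box. The file and column names (lon, lat, maize_yield) are hypothetical placeholders, not the actual CELL5M schema.

```python
# Minimal sketch: aggregating a 5 arc-minute gridded indicator over a
# bounding box. Column names are hypothetical, not the actual CELL5M schema.
import pandas as pd

def mean_indicator(df: pd.DataFrame, indicator: str,
                   lon_min: float, lon_max: float,
                   lat_min: float, lat_max: float) -> float:
    """Average an indicator over the grid cells inside a bounding box."""
    box = df[df["lon"].between(lon_min, lon_max) &
             df["lat"].between(lat_min, lat_max)]
    return box[indicator].mean()

# Usage (hypothetical file and column names):
# cells = pd.read_csv("cell5m_extract.csv")   # one row per 5 arc-minute cell
# print(mean_indicator(cells, "maize_yield", 30.0, 40.0, -5.0, 5.0))
```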
Whole organism lineage tracing by combinatorial and cumulative genome editing
McKenna, Aaron; Findlay, Gregory M.; Gagnon, James A.; Horwitz, Marshall S.; Schier, Alexander F.; Shendure, Jay
2016-01-01
Multicellular systems develop from single cells through distinct lineages. However, current lineage tracing approaches scale poorly to whole, complex organisms. Here we use genome editing to progressively introduce and accumulate diverse mutations in a DNA barcode over multiple rounds of cell division. The barcode, an array of CRISPR/Cas9 target sites, marks cells and enables the elucidation of lineage relationships via the patterns of mutations shared between cells. In cell culture and zebrafish, we show that rates and patterns of editing are tunable, and that thousands of lineage-informative barcode alleles can be generated. By sampling hundreds of thousands of cells from individual zebrafish, we find that most cells in adult organs derive from relatively few embryonic progenitors. In future analyses, genome editing of synthetic target arrays for lineage tracing (GESTALT) can be used to generate large-scale maps of cell lineage in multicellular systems for normal development and disease. PMID:27229144
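A minimal sketch of the core GESTALT inference follows: cells whose barcodes share more edits are inferred to share a more recent ancestor. The barcode encoding (a set of per-site edit strings) and the pairwise comparison are illustrative simplifications, not the paper's actual maximum-parsimony reconstruction pipeline.

```python
# Sketch of the GESTALT idea: relatedness is read off the CRISPR edits that
# two barcodes share. Barcodes and edit labels are invented for illustration.
from itertools import combinations

def shared_edits(a: frozenset, b: frozenset) -> int:
    """Number of barcode edits two cells have in common."""
    return len(a & b)

cells = {
    "cell1": frozenset({"site2:+3ins", "site5:-7del"}),
    "cell2": frozenset({"site2:+3ins", "site5:-7del", "site8:-2del"}),
    "cell3": frozenset({"site1:+1ins"}),
}

# Cells sharing more edits are inferred to share a more recent ancestor.
for a, b in combinations(cells, 2):
    print(a, b, shared_edits(cells[a], cells[b]))
```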
Stempler, Shiri; Yizhak, Keren; Ruppin, Eytan
2014-01-01
Accumulating evidence links numerous abnormalities in cerebral metabolism with the progression of Alzheimer's disease (AD), beginning in its early stages. Here, we integrate transcriptomic data from AD patients with a genome-scale computational human metabolic model to characterize the altered metabolism in AD, and employ state-of-the-art metabolic modelling methods to predict metabolic biomarkers and drug targets in AD. The metabolic descriptions derived are first tested and validated on a large scale versus existing AD proteomics and metabolomics data. Our analysis shows a significant decrease in the activity of several key metabolic pathways, including the carnitine shuttle, folate metabolism and mitochondrial transport. We predict several metabolic biomarkers of AD progression in the blood and the CSF, including succinate and prostaglandin D2. Vitamin D and steroid metabolism pathways are enriched with predicted drug targets that could mitigate the metabolic alterations observed. Taken together, this study provides the first network-wide view of the metabolic alterations associated with AD progression. Most importantly, it offers a cohort of new metabolic leads for the diagnosis of AD and its treatment. PMID:25127241
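For readers unfamiliar with genome-scale metabolic modelling, the sketch below shows the underlying machinery, flux-balance analysis, on an invented three-reaction network; the human model used in the study is vastly larger but is analyzed with the same linear-programming formulation.

```python
# Toy flux-balance analysis sketch. The 3-reaction network (A -> B -> biomass)
# is invented for illustration, not the genome-scale model used in the study.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions).
S = np.array([[ 1, -1,  0],   # metabolite A: produced by R1, consumed by R2
              [ 0,  1, -1]])  # metabolite B: produced by R2, consumed by R3
bounds = [(0, 10), (0, 10), (0, 10)]   # flux capacity of each reaction
c = np.array([0, 0, -1])               # minimize -v3, i.e. maximize biomass flux

# Steady state S v = 0, fluxes within bounds, maximize flux through R3.
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun)   # -> 10.0
```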
Aagaard, Kevin; Crimmins, Shawn M.; Thogmartin, Wayne E.; Tavernia, Brian G.; Lyons, James E.
2015-01-01
The development of robust modelling techniques to derive inferences from large-scale migratory bird monitoring data at appropriate scales has direct relevance to their management. The Integrated Waterbird Management and Monitoring programme (IWMM) represents one of the few attempts to monitor migrating waterbirds across entire flyways using targeted local surveys. This dataset included 13,208,785 waterfowl (eight Anas species) counted during 28,000 surveys at nearly 1,000 locations across the eastern United States between autumn 2010 and spring 2013 and was used to evaluate potential predictors of waterfowl abundance at the wetland scale. Mixed-effects, log-linear models of local abundance were built for the Atlantic and Mississippi flyways during spring and autumn migration to identify factors relating to habitat structure, forage availability, and migration timing that influence target dabbling duck species abundance. Results indicated that migrating dabbling ducks responded differently to environmental factors. While the factors identified demonstrated a high degree of importance, they were inconsistent across species, flyways and seasons. Furthermore, the direction and magnitude of the importance of each covariate group considered here varied across species. Given our results, actionable policy recommendations are likely to be most effective if they consider species-level variation within targeted taxonomic units and across management areas. The methods implemented here can easily be applied to other contexts, and serve as a novel investigation into local-level population patterns using data from broad-scale monitoring programmes.
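A minimal sketch of a log-linear (Poisson) abundance model of the kind described is shown below; the data frame and covariate names are hypothetical, and the study's random effects (e.g., survey location) are omitted for brevity.

```python
# Sketch of a log-linear count model for local waterfowl abundance.
# Columns are invented placeholders for habitat/timing covariates.
import pandas as pd
import statsmodels.formula.api as smf

surveys = pd.DataFrame({
    "count":     [12, 0, 45, 7, 103, 2],   # birds counted per survey
    "water_pct": [40, 5, 80, 20, 90, 10],  # hypothetical habitat covariate
    "week":      [1, 1, 2, 2, 3, 3],       # hypothetical timing covariate
})

# Poisson regression with a log link; the study's mixed (random) effects
# are omitted here to keep the example self-contained.
model = smf.poisson("count ~ water_pct + week", data=surveys).fit()
print(model.summary())
```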
NASA Astrophysics Data System (ADS)
Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.
2015-12-01
Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by whether and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are being simulated for plausible physical reasons, boosting confidence in future projections of temperature extremes. Conversely, where model skill is identified to be lower, caution should be exercised in interpreting future projections.
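Composite analysis, one of the methods mentioned above, reduces in its simplest form to averaging the circulation anomaly field over the days on which a local temperature extreme occurs. A sketch with synthetic arrays standing in for reanalysis data:

```python
# Composite analysis in its simplest form: average the SLP anomaly field
# over the days on which a temperature extreme occurs. Arrays are synthetic
# placeholders for reanalysis data.
import numpy as np

ndays, nlat, nlon = 1000, 20, 30
rng = np.random.default_rng(0)
slp_anom = rng.normal(size=(ndays, nlat, nlon))   # SLP anomalies (day, lat, lon)
tmax = rng.normal(size=ndays)                     # local daily maximum temperature

extreme_days = tmax > np.percentile(tmax, 95)     # hottest 5% of days
composite = slp_anom[extreme_days].mean(axis=0)   # mean pattern on those days
print(composite.shape)                            # (20, 30) field to compare with models
```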
Pan, Hung-Yin; Chen, Carton W; Huang, Chih-Hung
2018-04-17
Soil bacteria Streptomyces are the most important producers of secondary metabolites, including most known antibiotics. These bacteria and their close relatives are unique in possessing linear chromosomes, which typically harbor 20 to 30 biosynthetic gene clusters of tens to hundreds of kb in length. Many Streptomyces chromosomes are accompanied by linear plasmids with sizes ranging from several to several hundred kb. The large linear plasmids also often contain biosynthetic gene clusters. We have developed a targeted recombination procedure for arm exchanges between a linear plasmid and a linear chromosome. A chromosomal segment inserted in an artificially constructed plasmid allows homologous recombination between the two replicons at the homology. Depending on the design, the recombination may result in two recombinant replicons or a single recombinant chromosome with the loss of the recombinant plasmid that lacks a replication origin. The efficiency of such targeted recombination ranges from 9 to 83% depending on the locations of the homology (and thus the size of the chromosomal arm exchanged), essentially eliminating the necessity of selection. The targeted recombination is useful for the efficient engineering of the Streptomyces genome for large-scale deletion, addition, and shuffling.
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.
2016-10-01
Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of a sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Image and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data is used to separate materials and add textural variations to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high resolution texture and hyperspectral data. In addition to the MODIS-derived BRDF, target/class-specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at high resolution. The scenes simulated using this method for varying view angles show the expected variations in reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique for simulating real sensor data is evaluated by comparing the simulated data with Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After adjusting for a scaling correction due to the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5% in the NIR band and 10% in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended to other biomes (e.g. desert regions and agricultural regions) by using data from the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target-specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect sensor constellations such as the Sentinel and Landsat missions.
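The magnitude adjustment described above can be summarized in a few lines: keep the shape of the MODIS-derived BRDF but rescale it so its directional hemispherical reflectance (DHR) matches the hyperspectral measurement. The array-based representation here is an illustrative simplification of DIRSIG's actual BRDF inputs.

```python
# Sketch of the BRDF magnitude adjustment: preserve the MODIS BRDF shape,
# rescale so the DHR matches the hyperspectral measurement. A plain array
# over view angles stands in for the real DIRSIG BRDF representation.
import numpy as np

def rescale_brdf(brdf_modis: np.ndarray, dhr_modis: float,
                 dhr_hyperspectral: float) -> np.ndarray:
    """Keep BRDF shape; match hyperspectral reflectance magnitude."""
    return brdf_modis * (dhr_hyperspectral / dhr_modis)

brdf = np.array([0.08, 0.10, 0.13])   # toy BRDF samples at three view angles
print(rescale_brdf(brdf, dhr_modis=0.10, dhr_hyperspectral=0.12))
```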
Orthographic and Phonological Neighborhood Databases across Multiple Languages.
Marian, Viorica
2017-01-01
The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.
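As an example of the neighborhood measures such databases provide, the sketch below computes an orthographic neighborhood in the classic Coltheart sense (same-length words differing by exactly one substituted letter); the toy lexicon stands in for a resource like CLEARPOND.

```python
# Orthographic neighborhood in the classic Coltheart sense: words of the
# same length differing by exactly one substituted letter. The toy lexicon
# stands in for a database like CLEARPOND.
def is_neighbor(w1: str, w2: str) -> bool:
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

def neighborhood(word: str, lexicon: list) -> list:
    return [w for w in lexicon if is_neighbor(word, w)]

lexicon = ["cat", "bat", "hat", "cot", "car", "dog", "cast"]
print(neighborhood("cat", lexicon))   # ['bat', 'hat', 'cot', 'car']
```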
Systematic Identification of Combinatorial Drivers and Targets in Cancer Cell Lines
Tabchy, Adel; Eltonsy, Nevine; Housman, David E.; Mills, Gordon B.
2013-01-01
There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance. PMID:23577104
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing number of WAMI collections and the growing volume of features extracted from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
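The split-and-distribute pattern described above maps naturally onto Hadoop streaming, where the mapper emits one (image, features) record per WAMI image and the reducer aggregates them. The sketch below is a schematic of that pattern, not the authors' CAWE code; extract_features is a placeholder for the real feature extractor.

```python
# Hadoop-streaming-style sketch of the split-and-distribute pattern:
# the mapper emits (image_path, features) pairs, the reducer aggregates them.
# extract_features is a stand-in for the real WAMI feature extractor.
import sys

def extract_features(image_path: str) -> str:
    # Placeholder: a real implementation would load the image tile and
    # compute descriptors (corners, tracks, etc.).
    return "42"

def mapper():
    for line in sys.stdin:               # each input line: path to one image
        path = line.strip()
        print(f"{path}\t{extract_features(path)}")

def reducer():
    for line in sys.stdin:               # aggregate (image, features) records
        image, features = line.rstrip("\n").split("\t", 1)
        print(image, features)

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```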
NASA Astrophysics Data System (ADS)
Jutzi, Martin; Michel, Patrick
2014-02-01
In this paper, we investigate numerically the momentum transferred by impacts of small (artificial) projectiles on asteroids. The study of the momentum transfer efficiency as a function of impact conditions and of the internal structure of an asteroid is crucial for performance assessment of the kinetic impactor concept of deflecting an asteroid from its trajectory. The momentum transfer is characterized by the so-called momentum multiplication factor β, which has been introduced to define the momentum imparted to an asteroid in terms of the momentum of the impactor. Here we present results of code calculations of the β factor for porous targets, in which porosity takes the form of microporosity and/or macroporosity. The results of our study using a large range of impact conditions indicate that the momentum multiplication factor β is small for porous targets even for very high impact velocities (β < 2 for v_imp ⩽ 15 km/s), which is consistent with published scaling laws and results of laboratory experiments (Holsapple, K.A., Housen, K.R. [2012]. Icarus 221, 875-887; Holsapple, K.A., Housen, K.R. [2013]. Proceedings of the IAA Planetary Defense Conference 2013, Flagstaff, USA). It is found that both porosity and strength can have a large effect on the amount of transferred momentum and on the scaling of β with impact velocity. On the other hand, the macroporous inhomogeneities considered here do not have a significant effect on β.
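For reference, the definitions implied above can be written out as follows; the deflection formula assumes a head-on impact on an asteroid of mass M, an illustrative simplification not spelled out in the abstract:

```latex
% m, v_imp: impactor mass and speed; \Delta p_target: momentum imparted to
% the target; M: asteroid mass. beta = 1 means no ejecta contribution.
\beta = \frac{\Delta p_{\mathrm{target}}}{m\, v_{\mathrm{imp}}},
\qquad
\Delta v = \beta\, \frac{m\, v_{\mathrm{imp}}}{M}
```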
Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A
2014-01-28
A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.
Guo, Rui; Blacker, David J; Wang, Xia; Arima, Hisatomi; Lavados, Pablo M; Lindley, Richard I; Chalmers, John; Anderson, Craig S; Robinson, Thompson
2017-12-01
The prognosis in acute spontaneous intracerebral hemorrhage (ICH) is related to hematoma volume, where >30 mL is commonly used to define large ICH as a threshold for neurosurgical decompression, but without clear supporting evidence. We aimed to determine the factors associated with large ICH and neurosurgical intervention among participants of the Intensive Blood Pressure Reduction in Acute Cerebral Hemorrhage Trials (INTERACT). We performed a pooled analysis of the pilot INTERACT1 (n = 404) and main INTERACT2 (n = 2839) studies of ICH patients (<6 h of onset) with elevated systolic blood pressure (SBP, 150-220 mm Hg) who were randomized to intensive (target SBP < 140 mm Hg) or contemporaneous guideline-recommended (target SBP < 180 mm Hg) management. Neurosurgical intervention data were collected at 7 d postrandomization. Multivariable logistic regression was used to determine associations. There were 372 (13%) patients with large ICH volume (>30 mL), which was associated with residence outside China, nondiabetic status, severe neurological deficit (National Institutes of Health stroke scale [NIHSS] score ≥ 15), lobar location, intraventricular hemorrhage extension, raised leucocyte count, and hyponatremia. Significant predictors of those patients who underwent surgery (226 of 3233 patients overall; 83 of 372 patients with large ICH) were younger age, severe neurological deficit (lower Glasgow coma scale score, and NIHSS score ≥ 15), baseline ICH volume > 30 mL, and intraventricular hemorrhage. Early identification of severe ICH, based on age and clinical and imaging parameters, may facilitate neurosurgery and intensive monitoring of patients. Copyright © 2017 by the Congress of Neurological Surgeons
Rethinking Trade-Driven Extinction Risk in Marine and Terrestrial Megafauna.
McClenachan, Loren; Cooper, Andrew B; Dulvy, Nicholas K
2016-06-20
Large animals hunted for the high value of their parts (e.g., elephant ivory and shark fins) are at risk of extinction due to both intensive international trade pressure and intrinsic biological sensitivity. However, the relative role of trade, particularly in non-perishable products, and biological factors in driving extinction risk is not well understood [1-4]. Here we identify a taxonomically diverse group of >100 marine and terrestrial megafauna targeted for international luxury markets; estimate their value across three points of sale; test relationships among extinction risk, high value, and body size; and quantify the effects of two mitigating factors: poaching fines and geographic range size. We find that body size is the principal driver of risk for lower value species, but that this biological pattern is eliminated above a value threshold, meaning that the most valuable species face a high extinction risk regardless of size. For example, once mean product values exceed US$12,557 kg⁻¹, body size no longer drives risk. Total value scales with size for marine animals more strongly than for terrestrial animals, incentivizing the hunting of large marine individuals and species. Poaching fines currently have little effect on extinction risk; fines would need to be increased 10- to 100-fold to be effective. Large geographic ranges reduce risk for terrestrial, but not marine, species, whose ranges are ten times greater. Our results underscore both the evolutionary and ecosystem consequences of targeting large marine animals and the need to geographically scale up and prioritize conservation of high-value marine species to avoid extinction. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yan, Shi; Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin
2017-07-17
Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved considerable success. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring. Cumbersome wiring requires substantial material and labor and, more importantly, imposes a heavy maintenance burden. Targeting a practical large scale concrete crack detection (CCD) application, a smart aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated the effectiveness of the proposed system. This work provides important support for practical CCD applications using wireless smart aggregates.
On the Instability of Large Slopes in the Upstream of Wu River, Taiwan
NASA Astrophysics Data System (ADS)
Shou, Keh-Jian; Lin, Jia-Fei
2015-04-01
Considering the existence of various types of landslides (shallow and deep-seated) and the importance of protection targets (a landslide might affect a residential area, cut a road, isolate a village, etc.), this study aims to analyze the landslide susceptibility along the Lixing Industrial Road, i.e., Nantou County Road #89, in the upstream of Wu River. Focusing on selected typical large-scale landslides, data and information were collected from the field and from government agencies (including the local government, the Soil and Water Conservation Bureau, and the highway agencies). Based on LiDAR data and information from boreholes, the temporal behavior and the complex mechanisms of large-scale landslides were analyzed. To assess the spatial hazard of the landslides, probabilistic analysis was applied. The study of landslide mechanisms can help in understanding the behavior of landslides in similar geologic conditions, and the results of the hazard analysis can be applied for risk prevention and management in the study area.
Delensing CMB polarization with external datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kendrick M.; Hanson, Duncan; LoVerde, Marilena
2012-06-01
One of the primary scientific targets of current and future CMB polarization experiments is the search for a stochastic background of gravitational waves in the early universe. As instrumental sensitivity improves, the limiting factor will eventually be B-mode power generated by gravitational lensing, which can be removed through use of so-called "delensing" algorithms. We forecast prospects for delensing using lensing maps which are obtained externally to CMB polarization: either from large-scale structure observations, or from high-resolution maps of CMB temperature. We conclude that the forecasts in either case are not encouraging, and that significantly delensing large-scale CMB polarization requires high-resolution polarization maps with sufficient sensitivity to measure the lensing B-mode. We also present a simple formalism for including delensing in CMB forecasts which is computationally fast and agrees well with Monte Carlo simulations.
Robust visual tracking via multiscale deep sparse networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo
2017-04-01
In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
Mechanism of single-pulse ablative generation of laser-induced periodic surface structures
NASA Astrophysics Data System (ADS)
Shugaev, Maxim V.; Gnilitskyi, Iaroslav; Bulgakova, Nadezhda M.; Zhigilei, Leonid V.
2017-11-01
One of the remarkable capabilities of ultrashort polarized laser pulses is the generation of laser-induced periodic surface structures (LIPSS). The origin of this phenomenon is largely attributed to the interference of the incident laser wave and surface electromagnetic wave that creates a periodic absorption pattern. Although, commonly, LIPSS are produced by repetitive irradiation of the same area by multiple laser pulses in the regime of surface melting and resolidification, recent reports demonstrate the formation of LIPSS in the single-pulse irradiation regime at laser fluences well above the ablation threshold. In this paper, we report results of a large-scale molecular dynamics simulation aimed at providing insights into the mechanisms of single-pulse ablative LIPSS formation. The simulation performed for a Cr target reveals an interplay of material removal and redistribution in the course of spatially modulated ablation, leading to the transient formation of an elongated liquid wall extending up to ˜600 nm above the surface of the target at the locations of the minima of the laser energy deposition. The upper part of the liquid wall disintegrates into droplets while the base of the wall solidifies on the time scale of ˜2 ns, producing a ˜100 -nm-tall frozen surface feature extending above the level of the initial surface of the target. The properties of the surface region of the target are modified by the presence of high densities of dislocations and vacancies generated due to the rapid and highly nonequilibrium nature of the melting and resolidification processes. The insights into the LIPSS formation mechanisms may help in designing approaches for increasing the processing speed and improving the quality of the laser-patterned periodic surface structures.
Killeen, Gerry F; Smith, Tom A; Ferguson, Heather M; Mshinda, Hassan; Abdulla, Salim; Lengeler, Christian; Kachur, Steven P
2007-01-01
Background: Malaria prevention in Africa merits particular attention as the world strives toward a better life for the poorest. Insecticide-treated nets (ITNs) represent a practical means to prevent malaria in Africa, so scaling up coverage to at least 80% of young children and pregnant women by 2010 is integral to the Millennium Development Goals (MDG). Targeting individual protection to vulnerable groups is an accepted priority, but community-level impacts of broader population coverage are largely ignored even though they may be just as important. We therefore estimated coverage thresholds for entire populations at which individual- and community-level protection are equivalent, representing rational targets for ITN coverage beyond vulnerable groups. Methods and Findings: Using field-parameterized malaria transmission models, we show that high (80% use) but exclusively targeted coverage of young children and pregnant women (representing <20% of the population) will deliver limited protection and equity for these vulnerable groups. In contrast, relatively modest coverage (35%–65% use, with this threshold depending on ecological scenario and net quality) of all adults and children, rather than just vulnerable groups, can achieve equitable community-wide benefits equivalent to or greater than personal protection. Conclusions: Coverage of entire populations will be required to accomplish large reductions of the malaria burden in Africa. While coverage of vulnerable groups should still be prioritized, the equitable and communal benefits of wide-scale ITN use by older children and adults should be explicitly promoted and evaluated by national malaria control programmes. ITN use by the majority of entire populations could protect all children in such communities, even those not actually covered by achieving existing personal protection targets of the MDG, Roll Back Malaria Partnership, or the US President's Malaria Initiative. PMID:17608562
Kawasaki, Masahiro; Uno, Yutaka; Mori, Jumpei; Kobata, Kenji; Kitajo, Keiichi
2014-01-01
Electroencephalogram (EEG) phase synchronization analyses can reveal large-scale communication between distant brain areas. However, it is not possible to identify the directional information flow between distant areas using conventional phase synchronization analyses. In the present study, we applied transcranial magnetic stimulation (TMS) to the occipital area in subjects who were resting with their eyes closed, and analyzed the spatial propagation of transient TMS-induced phase resetting by using the transfer entropy (TE), to quantify the causal and directional flow of information. The time-frequency EEG analysis indicated that the theta (5 Hz) phase locking factor (PLF) reached its highest value at the distant area (the motor area in this study), with a time lag that followed the peak of the transient PLF enhancements of the TMS-targeted area at the TMS onset. Phase-preservation index (PPI) analyses demonstrated significant phase resetting at the TMS-targeted area and distant area. Moreover, the TE from the TMS-targeted area to the distant area increased clearly during the delay that followed TMS onset. Interestingly, the time lags were almost coincident between the PLF and TE results (152 vs. 165 ms), which provides strong evidence that the emergence of the delayed PLF reflects the causal information flow. Such tendencies were observed only in the higher-intensity TMS condition, and not in the lower-intensity or sham TMS conditions. Thus, TMS may manipulate large-scale causal relationships between brain areas in an intensity-dependent manner. We demonstrated that single-pulse TMS modulated global phase dynamics and directional information flow among synchronized brain networks. Therefore, our results suggest that single-pulse TMS can manipulate both incoming and outgoing information in the TMS-targeted area associated with functional changes.
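The phase locking factor used above can be computed from band-pass-filtered trials via the Hilbert transform; a sketch with synthetic data in place of EEG (and with the 5 Hz band-pass filtering step omitted) follows:

```python
# Sketch of a phase-locking factor (PLF) computation across trials: extract
# instantaneous phase with the Hilbert transform, then measure inter-trial
# phase consistency at each time point. Synthetic data replaces the EEG,
# and the band-pass filtering around the frequency of interest is omitted.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
ntrials, nsamples = 50, 500
signals = rng.normal(size=(ntrials, nsamples))   # band-pass-filtered EEG goes here

phase = np.angle(hilbert(signals, axis=1))       # instantaneous phase per trial
plf = np.abs(np.exp(1j * phase).mean(axis=0))    # 1 = perfect phase locking
print(plf.max())
```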
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.
Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-09-01
Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further adds to the challenge, and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel, target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
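A minimal sketch of the "picked" idea, with invented scores: for each protein, only the better-scoring member of its target/decoy pair survives, and the FDR is estimated from the surviving decoys. Real implementations apply this at score thresholds rather than over the whole list, as done here for brevity.

```python
# Sketch of the "picked" target-decoy strategy: keep only the better-scoring
# member of each protein's target/decoy pair, then estimate FDR from the
# decoys that survive. Scores are invented for illustration.
def picked_fdr(scores: dict) -> float:
    """scores maps protein -> {'target': s_t, 'decoy': s_d}; higher = better."""
    picked = []
    for protein, s in scores.items():
        label = "target" if s["target"] >= s["decoy"] else "decoy"
        picked.append((max(s.values()), label))
    picked.sort(reverse=True)            # best-scoring picked entries first
    decoys = sum(1 for _, lab in picked if lab == "decoy")
    targets = sum(1 for _, lab in picked if lab == "target")
    return decoys / max(targets, 1)      # FDR over the full picked list

example = {"P1": {"target": 0.99, "decoy": 0.20},
           "P2": {"target": 0.15, "decoy": 0.60}}
print(picked_fdr(example))   # 1 decoy / 1 target -> 1.0 in this toy case
```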
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, and its performance is compared to that of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA tractable for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can be subsequently directed to further reduce the uncertainty associated with these sources.
In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
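The subspace-construction step at the heart of such reduced-order modelling can be illustrated with a snapshot SVD: collect model responses as columns and keep the leading singular vectors that capture most of the variance. The random snapshots below stand in for actual multi-physics runs; the dissertation's KL and gradient-free machinery is not reproduced here.

```python
# Sketch of lower-dimensional subspace construction from model snapshots:
# an SVD of the snapshot matrix yields a basis capturing most of the
# response variation. Random data stands in for real multi-physics runs.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(5000, 40))   # 40 runs of a 5000-DoF model

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.99)) + 1   # modes for 99% of variance
basis = U[:, :rank]                              # reduced subspace
print(rank, basis.shape)
```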
ERIC Educational Resources Information Center
Janssen, Rianne; Crauwels, Marion
2011-01-01
A large-scale paper-and-pencil assessment of the attainment targets of environmental studies with a focus on the subject area nature was held in primary education in Flanders (Belgium). The tests on different subfields of nature, i.e. the human body, healthcare, organisms, ecosystems, environmental care and non-living nature, were administered to…
ERIC Educational Resources Information Center
Brückner, Sebastian; Förster, Manuel; Zlatkin-Troitschanskaia, Olga; Happ, Roland; Walstad, William B.; Yamaoka, Michio; Asano, Tadayoshi
2015-01-01
Gender effects in large-scale assessments have become an increasingly important research area within and across countries. Yet few studies have linked differences in assessment results of male and female students in higher education to construct-relevant features of the target construct. This paper examines gender effects on students' economic…
Apparatus for the production of boron nitride nanotubes
Smith, Michael W; Jordan, Kevin
2014-06-17
An apparatus for the large scale production of boron nitride nanotubes comprising: a pressure chamber containing a continuously fed boron-containing target; a source of thermal energy, preferably a focused laser beam; a cooled condenser; a source of pressurized nitrogen gas; and a mechanism for extracting boron nitride nanotubes that are condensed on, or in the area of, the cooled condenser from the pressure chamber.
Hart-Smith, Gene; Yagoub, Daniel; Tay, Aidan P.; Pickford, Russell; Wilkins, Marc R.
2016-01-01
All large scale LC-MS/MS post-translational methylation site discovery experiments require methylpeptide spectrum matches (methyl-PSMs) to be identified at acceptably low false discovery rates (FDRs). To meet estimated methyl-PSM FDRs, methyl-PSM filtering criteria are often determined using the target-decoy approach. The efficacy of this methyl-PSM filtering approach has, however, yet to be thoroughly evaluated. Here, we conduct a systematic analysis of methyl-PSM FDRs across a range of sample preparation workflows (each differing in their exposure to the alcohols methanol and isopropyl alcohol) and mass spectrometric instrument platforms (each employing a different mode of MS/MS dissociation). Through 13CD3-methionine labeling (heavy-methyl SILAC) of Saccharomyces cerevisiae cells and in-depth manual data inspection, accurate lists of true positive methyl-PSMs were determined, allowing methyl-PSM FDRs to be compared with target-decoy approach-derived methyl-PSM FDR estimates. These results show that global FDR estimates produce extremely unreliable methyl-PSM filtering criteria; we demonstrate that this is an unavoidable consequence of the high number of amino acid combinations capable of producing peptide sequences that are isobaric to methylated peptides of a different sequence. Separate methyl-PSM FDR estimates were also found to be unreliable due to prevalent sources of false positive methyl-PSMs that produce high peptide identity score distributions. Incorrect methylation site localizations, peptides containing cysteinyl-S-β-propionamide, and methylated glutamic or aspartic acid residues can partially, but not wholly, account for these false positive methyl-PSMs. Together, these results indicate that the target-decoy approach is an unreliable means of estimating methyl-PSM FDRs and methyl-PSM filtering criteria. We suggest that orthogonal methylpeptide validation (e.g. heavy-methyl SILAC or its offshoots) should be considered a prerequisite for obtaining high confidence methyl-PSMs in large scale LC-MS/MS methylation site discovery experiments and make recommendations on how to reduce methyl-PSM FDRs in samples not amenable to heavy isotope labeling. Data are available via ProteomeXchange with the data identifier PXD002857. PMID:26699799
Nyström, Monica Elisabeth; Strehlenert, Helena; Hansson, Johan; Hasson, Henna
2014-09-18
Large-scale change initiatives stimulating change in several organizational systems in the health and social care sector are challenging both to lead and evaluate. There is a lack of systematic research that can enrich our understanding of strategies to facilitate large system transformations in this sector. The purpose of this study was to examine the characteristics of core activities and strategies to facilitate implementation and change of a national program aimed at improving life for the most ill elderly people in Sweden. The program outcomes were also addressed to assess the impact of these strategies. A longitudinal case study design with multiple data collection methods was applied. Archival data (n = 795), interviews with key stakeholders (n = 11) and non-participant observations (n = 23) were analysed using content analysis. Outcome data were obtained from national quality registries. This study presents an approach for implementing a large national change program that is characterized by initial flexibility and dynamism regarding content and facilitation strategies and a growing complexity over time requiring more structure and coordination. The description of activities and strategies shows that the program management team engaged a variety of stakeholders and actor groups and accordingly used a palette of different strategies. The main strategies used to influence change in the target organisations were to use regional improvement coaches, regional strategic management teams, national quality registries, financial incentives and annually revised agreements. Interactive learning sessions, intense communication, monitoring and measurement, and active involvement of different experts and stakeholders, including elderly people, complemented these strategies. Program outcomes showed steady progress in most of the five target areas, less so for the target of achieving coordinated care. There is no blueprint for how to approach the challenging task of leading large-scale change programs in complex contexts, but our conclusion is that more attention has to be given to the multidimensional strategies that program management needs to consider. This multidimensionality comprises different strategies depending on types of actors, system levels, contextual factors, program progress over time, program content, types of learning and change processes, and the conditions for sustainability.
Popescu, Sorina C.; Popescu, George V.; Bachan, Shawn; Zhang, Zimei; Seay, Montrell; Gerstein, Mark; Snyder, Michael; Dinesh-Kumar, S. P.
2007-01-01
Calmodulins (CaMs) are the most ubiquitous calcium sensors in eukaryotes. A number of CaM-binding proteins have been identified through classical methods, and many proteins have been predicted to bind CaMs based on their structural homology with known targets. However, multicellular organisms typically contain many CaM-like (CML) proteins, and a global identification of their targets and specificity of interaction is lacking. In an effort to develop a platform for large-scale analysis of proteins in plants, we developed a protein microarray and used it for a global analysis of CaM/CML interactions. An Arabidopsis thaliana expression collection containing 1,133 ORFs was generated and used to produce proteins with an optimized medium-throughput plant-based expression system. Protein microarrays were prepared and screened with several CaMs/CMLs. A large number of previously known and novel CaM/CML targets were identified, including transcription factors, receptor and intracellular protein kinases, F-box proteins, RNA-binding proteins, and proteins of unknown function. Multiple CaM/CML proteins bound many binding partners, but the majority of targets were specific to one or a few CaMs/CMLs, indicating that different CaM family members function through different targets. Based on our analyses, the emergent CaM/CML interactome is more extensive than previously predicted. Our results suggest that calcium functions through distinct CaM/CML proteins to regulate a wide range of targets and cellular activities. PMID:17360592
Multiple Objects Fusion Tracker Using a Matching Network for Adaptively Represented Instance Pairs
Oh, Sang-Il; Kang, Hang-Bong
2017-01-01
Multiple-object tracking is affected by various sources of distortion, such as occlusion, illumination variations and motion changes. Overcoming these distortions by tracking on RGB frames, such as by shifting, has limitations because of material distortions caused by RGB frames. To overcome these distortions, we propose a multiple-object fusion tracker (MOFT), which uses a combination of 3D point clouds and corresponding RGB frames. The MOFT uses a matching function initialized on large-scale external sequences to determine which candidates in the current frame match the target object in the previous frame. After tracking over a few frames, the initialized matching function is fine-tuned according to the appearance models of the target objects. The fine-tuning process of the matching function is constructed in a structured form with diverse matching function branches. In general multiple-object tracking situations, scale variations occur in a scene depending on the distance between the target objects and the sensors. If target objects at various scales are all represented with the same strategy, information losses will occur for any representation of the target objects. In this paper, the output map of the convolutional layer obtained from a pre-trained convolutional neural network is used to adaptively represent instances without information loss. In addition, MOFT fuses the tracking results obtained from each modality at the decision level to compensate for the tracking failures of each modality using basic belief assignment, rather than fusing modalities by selectively using the features of each modality. Experimental results indicate that the proposed tracker provides state-of-the-art performance on the multiple object tracking (MOT) and KITTI benchmarks. PMID:28420194
Visualizing the Big (and Large) Data from an HPC Resource
NASA Astrophysics Data System (ADS)
Sisneros, R.
2015-10-01
Supercomputers are built to endure painfully large simulations and contend with resulting outputs. These are characteristics that scientists are all too willing to test the limits of in their quest for science at scale. The data generated during a scientist's workflow through an HPC center (large data) is the primary target for analysis and visualization. However, the hardware itself is also capable of generating volumes of diagnostic data (big data); this presents compelling opportunities to deploy analogous analytic techniques. In this paper we will provide a survey of some of the many ways in which visualization and analysis may be crammed into the scientific workflow as well as utilized on machine-specific data.
Drug2Gene: an exhaustive resource to explore effectively the drug-target relation network.
Roider, Helge G; Pavlova, Nadia; Kirov, Ivaylo; Slavov, Stoyan; Slavov, Todor; Uzunov, Zlatyo; Weiss, Bertram
2014-03-11
Information about drug-target relations is at the heart of drug discovery. There are now dozens of databases providing drug-target interaction data with varying scope and focus. Because of this, and due to the large chemical space, the overlap of the different data sets is surprisingly small. As searching through these sources manually is cumbersome, time-consuming and error-prone, integrating all the data is highly desirable. Despite a few attempts, integration has been hampered by the diversity of descriptions of compounds, and by the fact that the reported activity values, coming from different data sets, are not always directly comparable due to the use of different metrics or data formats. We have built Drug2Gene, a knowledge base that combines the compound/drug-gene/protein information from 19 publicly available databases. A key feature is our rigorous unification and standardization process, which makes the data truly comparable on a large scale, allowing for the first time effective data mining in such a large knowledge corpus. As of version 3.2, Drug2Gene contains 4,372,290 unified relations between compounds and their targets, most of which include reported bioactivity data. We extend this set with putative (i.e. homology-inferred) relations where sufficient sequence homology between proteins suggests they may bind similar compounds. Drug2Gene provides powerful search functionalities, very flexible export procedures, and a user-friendly web interface. Drug2Gene v3.2 has become a mature and comprehensive knowledge base providing unified, standardized drug-target related information gathered from publicly available data sources. It can be used to integrate proprietary data sets with publicly available data sets. Its main goal is to be a 'one-stop shop' for identifying tool compounds targeting a given gene product or for finding all known targets of a drug. Drug2Gene with its integrated data set of public compound-target relations is freely accessible without restrictions at http://www.drug2gene.com.
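The unification step the abstract emphasises amounts to mapping source-specific compound and target identifiers onto one shared namespace before merging relations. A toy sketch of that idea, assuming simple lookup tables (the activity values are placeholders and the identifier scheme is illustrative, not Drug2Gene's internal design):

```python
# Toy identifier unification: relations from different source databases only
# become comparable once compound and target IDs share a namespace.
compound_map = {"CHEMBL:25": "C001", "PUBCHEM:2244": "C001"}    # both aspirin
target_map = {"UNIPROT:P23219": "G_PTGS1", "ENTREZ:5742": "G_PTGS1"}  # PTGS1

def unify(relations):
    """relations: (source_db, compound_id, target_id, activity) tuples.
    Returns unified (compound, target) keys with supporting records."""
    merged = {}
    for db, comp, targ, activity in relations:
        key = (compound_map.get(comp, comp), target_map.get(targ, targ))
        merged.setdefault(key, []).append((db, activity))
    return merged

rels = [("chembl", "CHEMBL:25", "UNIPROT:P23219", "IC50 1.7 uM"),
        ("pubchem", "PUBCHEM:2244", "ENTREZ:5742", "IC50 2.0 uM")]
print(unify(rels))   # one unified relation backed by two source records
```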
Zhang, Hui; Zhang, Jinshan; Wei, Pengliang; Zhang, Botao; Gou, Feng; Feng, Zhengyan; Mao, Yanfei; Yang, Lan; Zhang, Heng; Xu, Nanfei; Zhu, Jian-Kang
2014-08-01
The CRISPR/Cas9 system has been demonstrated to efficiently induce targeted gene editing in a variety of organisms including plants. Recent work showed that CRISPR/Cas9-induced gene mutations in Arabidopsis were mostly somatic mutations in the early generation, although some mutations could be stably inherited in later generations. However, it remains unclear whether this system will work similarly in crops such as rice. In this study, we tested 11 target genes in two rice subspecies for their amenability to CRISPR/Cas9-induced editing and determined the patterns, specificity and heritability of the gene modifications. Analysis of the genotypes and frequency of edited genes in the first generation of transformed plants (T0) showed that the CRISPR/Cas9 system was highly efficient in rice, with target genes edited in nearly half of the transformed embryogenic cells before their first cell division. Homozygotes of edited target genes were readily found in T0 plants. The gene mutations were passed to the next generation (T1) following classic Mendelian law, without any detectable new mutation or reversion. Even with extensive searches including whole genome resequencing, we could not find any evidence of large-scale off-targeting in rice for any of the many targets tested in this study. By specifically sequencing the putative off-target sites of a large number of T0 plants, low-frequency mutations were found in only one off-target site, where the sequence had a 1-bp difference from the intended target. Overall, the data in this study point to the CRISPR/Cas9 system being a powerful tool in crop genome engineering. © 2014 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
Fungicides transport in runoff from vineyard plot and catchment: contribution of non-target areas.
Lefrancq, Marie; Payraudeau, Sylvain; García Verdú, Antonio Joaquín; Maillard, Elodie; Millet, Maurice; Imfeld, Gwenaël
2014-04-01
Surface runoff and erosion during rainfall events are major processes of pesticide transport from agricultural land to aquatic ecosystems. These processes are generally evaluated either at the plot or the catchment scale. Here, we compared at both scales the transport and partitioning in runoff water of two widely used fungicides, i.e., kresoxim-methyl (KM) and cyazofamid (CY). The objective was to evaluate the relationship between fungicide runoff from the plot and from the vineyard catchment. The results show that seasonal exports of KM and CY at the catchment were larger than those obtained at the plot. This underlines that non-target areas within the catchment contribute substantially to the overall load of runoff-associated fungicides. Estimations show that 85 and 62% of the loads observed for KM and CY at the catchment outlet cannot be explained by the vineyard plots. However, the partitioning of KM and CY between three fractions, i.e., the suspended solids (>0.7 μm) and two dissolved fractions (between 0.22 and 0.7 μm and <0.22 μm), in runoff water was similar at both scales. KM was predominantly detected below 0.22 μm, whereas CY was mainly detected in the fraction between 0.22 and 0.7 μm. Although KM and CY have similar physicochemical properties and are expected to behave similarly, our results show that their partitioning between the two fractions of the dissolved phase differs largely. It is concluded that combined observations of pesticide runoff at both the catchment and the plot scale make it possible to evaluate the source areas of pesticide off-site transport.
The Burn Wound Microenvironment
Rose, Lloyd F.; Chan, Rodney K.
2016-01-01
Significance: While the survival rate of the severely burned patient has improved significantly, relatively little progress has been made in treatment or prevention of burn-induced long-term sequelae, such as contraction and fibrosis. Recent Advances: Our knowledge of the molecular pathways involved in burn wounds has increased dramatically, and technological advances now allow large-scale genomic studies, providing a global view of wound healing processes. Critical Issues: Translating findings from a large number of in vitro and preclinical animal studies into clinical practice represents a gap in our understanding, and the failures of a number of clinical trials suggest that targeting single pathways or cytokines may not be the best approach. Significant opportunities for improvement exist. Future Directions: Study of the underlying molecular influences of burn wound healing progression will undoubtedly continue as an active research focus. Increasing our knowledge of these processes will identify additional therapeutic targets, supporting informed clinical studies that translate into clinical relevance and practice. PMID:26989577
Advances in segmentation modeling for health communication and social marketing campaigns.
Albrecht, T L; Bryant, C
1996-01-01
Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiuchi, Shunsaku, E-mail: horiuchi@vt.edu
2016-06-21
The cold dark matter paradigm has been extremely successful in explaining the large-scale structure of the Universe. However, it continues to face issues when confronted by observations on sub-Galactic scales. A major caveat, now being addressed, has been the incomplete treatment of baryon physics. We first summarize the small-scale issues surrounding cold dark matter and discuss the solutions explored by modern state-of-the-art numerical simulations including treatment of baryonic physics. We identify the 'too big to fail' problem in field galaxies as among the best targets to study modifications to dark matter, and discuss the particular connection with sterile neutrino warm dark matter. We also discuss how the recently detected anomalous 3.55 keV X-ray line, when interpreted as sterile neutrino dark matter decay, provides a very good description of small-scale observations of the Local Group.
Behaviors of susceptible-infected epidemics on scale-free networks with identical infectivity
NASA Astrophysics Data System (ADS)
Zhou, Tao; Liu, Jian-Guo; Bai, Wen-Jie; Chen, Guanrong; Wang, Bing-Hong
2006-11-01
In this paper, we propose a susceptible-infected model with identical infectivity, in which, at every time step, each node can only contact a constant number of neighbors. We implemented this model on scale-free networks, and found that the infected population grows in an exponential form with the time scale proportional to the spreading rate. Furthermore, by numerical simulation, we demonstrated that the targeted immunization of the present model is much less efficient than that of the standard susceptible-infected model. Finally, we investigate a fast spreading strategy when only local information is available. Different from the extensively studied path-finding strategy, the strategy preferring small-degree nodes is more efficient than that preferring large-degree nodes. Our results indicate the existence of an essential relationship between network traffic and network epidemic on scale-free networks.
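The identical-infectivity SI process described above is straightforward to simulate: at each time step every infected node contacts exactly A of its neighbors, regardless of its degree. The sketch below uses a Barabasi-Albert graph as the scale-free substrate; the parameter values are arbitrary choices for illustration, not those of the paper.

```python
# Minimal sketch (not the authors' code) of the SI model with identical
# infectivity on a scale-free network: every infected node contacts exactly
# A randomly chosen neighbors per step, infecting each with rate lam.
import random
import networkx as nx

def si_identical_infectivity(G, lam=0.1, A=3, steps=50, seed=1):
    random.seed(seed)
    infected = {random.choice(list(G))}
    sizes = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            nbrs = list(G[u])
            contacts = random.sample(nbrs, min(A, len(nbrs)))
            for v in contacts:
                if v not in infected and random.random() < lam:
                    new.add(v)
        infected |= new
        sizes.append(len(infected))
    return sizes

G = nx.barabasi_albert_graph(10000, 3)   # scale-free substrate
print(si_identical_infectivity(G)[:10])  # early growth is exponential
```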
Observing a light dark matter beam with neutrino experiments
NASA Astrophysics Data System (ADS)
deNiverville, Patrick; Pospelov, Maxim; Ritz, Adam
2011-10-01
We consider the sensitivity of fixed-target neutrino experiments at the luminosity frontier to light stable states, such as those present in models of MeV-scale dark matter. To ensure the correct thermal relic abundance, such states must annihilate via light mediators, which in turn provide an access portal for direct production in colliders or fixed targets. Indeed, this framework endows the neutrino beams produced at fixed-target facilities with a companion “dark matter beam,” which may be detected via an excess of elastic scattering events off electrons or nuclei in the (near-)detector. We study the high-luminosity proton fixed-target experiments at LSND and MiniBooNE, and determine that the ensuing sensitivity to light dark matter generally surpasses that of other direct probes. For scenarios with a kinetically mixed U(1)' vector mediator of mass mV, we find that a large volume of parameter space is excluded for mDM ~ 1-5 MeV, covering vector masses 2mDM ≲ mV ≲ mη and a range of kinetic mixing parameters reaching as low as κ ~ 10⁻⁵. The corresponding MeV-scale dark matter scenarios motivated by an explanation of the galactic 511 keV line are thus strongly constrained.
The big brown bat's perceptual dimension of target range
NASA Astrophysics Data System (ADS)
Simmons, James A.
2005-09-01
Big brown bats determine the distance to targets from echo delay, but information actually is entered onto the bat's psychological delay scale from two sources. The first is the target-ranging system itself, from the time that elapses between single-spike neural responses evoked by the broadcast and similar responses evoked by echoes at different delays. These responses register the FM sweeps of broadcasts or echoes, and the associated system of neural delay lines and coincidence detectors cross correlates the spectrograms along the time axis. The second source is the echo spectrum, which relates to shape expressed as range profile. The target-ranging system extracts this by fanning out to encompass parallel representations of many possible notch frequencies and notch widths in echoes. Bats perceive delay separations of 5-30 μs and have a resolution limit of about 2 μs, but interference amplifies small delay separations by transposing them into large changes in notch frequency, so only perception of intervals smaller than 5 μs is surprising. Experiments with phase-shifted echoes show that the psychological time scale can represent two different delays originating entirely in the time domain when they are at least as close together as 10 μs. [Work supported by NIH and ONR.]
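The delay-estimation principle described here, correlating the outgoing broadcast against the returning echo, can be illustrated in a few lines. This is a deliberately simplified, purely time-domain sketch with an invented FM sweep, sampling rate and delay; it is not a model of the bat's spectrogram-correlation receiver:

```python
# Toy cross-correlation delay estimate for a downward FM sweep and its echo.
import numpy as np

fs = 500_000                                   # 500 kHz sampling (assumed)
t = np.arange(0, 0.002, 1 / fs)
broadcast = np.sin(2 * np.pi * (80_000 - 20e6 * t) * t)  # downward FM sweep

delay_true = 350e-6                            # 350 us round-trip delay
shift = int(delay_true * fs)
echo = np.zeros_like(broadcast)
echo[shift:] = 0.3 * broadcast[:-shift]        # attenuated, delayed copy
echo += 0.01 * np.random.randn(echo.size)      # receiver noise

xcorr = np.correlate(echo, broadcast, mode="full")
lag = xcorr.argmax() - (broadcast.size - 1)    # peak lag in samples
print(f"estimated delay: {lag / fs * 1e6:.1f} us")  # ~350 us
```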
Selecting Reliable and Robust Freshwater Macroalgae for Biomass Applications
Lawton, Rebecca J.; de Nys, Rocky; Paul, Nicholas A.
2013-01-01
Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weight: dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential for biomass applications. PMID:23717561
Systematic identification of proteins that elicit drug side effects
Kuhn, Michael; Al Banchaabouchi, Mumna; Campillos, Monica; Jensen, Lars Juhl; Gross, Cornelius; Gavin, Anne-Claude; Bork, Peer
2013-01-01
Side effect similarities of drugs have recently been employed to predict new drug targets, and networks of side effects and targets have been used to better understand the mechanism of action of drugs. Here, we report a large-scale analysis to systematically predict and characterize proteins that cause drug side effects. We integrated phenotypic data obtained during clinical trials with known drug–target relations to identify overrepresented protein–side effect combinations. Using independent data, we confirm that most of these overrepresentations point to proteins which, when perturbed, cause side effects. Of 1428 side effects studied, 732 were predicted to be predominantly caused by individual proteins, at least 137 of them backed by existing pharmacological or phenotypic data. We prove this concept in vivo by confirming our prediction that activation of the serotonin 7 receptor (HTR7) is responsible for hyperesthesia in mice, which, in turn, can be prevented by a drug that selectively inhibits HTR7. Taken together, we show that a large fraction of complex drug side effects are mediated by individual proteins and create a reference for such relations. PMID:23632385
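The search for overrepresented protein-side effect combinations is, at its core, an enrichment test over drug counts. A hedged sketch of one standard way to run such a test (a one-sided Fisher's exact test per pair; the counts below are invented and the study's actual statistics may differ):

```python
# Enrichment of a (protein, side effect) pair via a 2x2 Fisher's exact test.
from scipy.stats import fisher_exact

def enrichment(n_both, n_target_only, n_se_only, n_neither):
    """Rows: drugs hitting the target / not; columns: with side effect / not."""
    table = [[n_both, n_target_only],
             [n_se_only, n_neither]]
    odds, p = fisher_exact(table, alternative="greater")
    return odds, p

# Example: 40 of 50 drugs hitting protein P cause effect E,
# versus 100 of 900 drugs that do not hit P.
print(enrichment(40, 10, 100, 800))
```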
Towne, Danli L; Nicholl, Emily E; Comess, Kenneth M; Galasinski, Scott C; Hajduk, Philip J; Abraham, Vivek C
2012-09-01
Efficient elucidation of the biological mechanism of action of novel compounds remains a major bottleneck in the drug discovery process. To address this need in the area of oncology, we report the development of a multiparametric high-content screening assay panel at the level of single cells to dramatically accelerate understanding the mechanism of action of cell growth-inhibiting compounds on a large scale. Our approach is based on measuring 10 established end points associated with mitochondrial apoptosis, cell cycle disruption, DNA damage, and cellular morphological changes in the same experiment, across three multiparametric assays. The data from all of the measurements taken together are expected to help increase our current understanding of target protein functions, constrain the list of possible targets for compounds identified using phenotypic screens, and identify off-target effects. We have also developed novel data visualization and phenotypic classification approaches for detailed interpretation of individual compound effects and navigation of large collections of multiparametric cellular responses. We expect this general approach to be valuable for drug discovery across multiple therapeutic areas.
Dumas, Pascal; Jimenez, Haizea; Peignon, Christophe; Wantiez, Laurent; Adjeroud, Mehdi
2013-01-01
No-take marine reserves are one of the oldest and most versatile tools used across the Pacific for the conservation of reef resources, in particular for invertebrates traditionally targeted by local fishers. Assessing their actual efficiency is still a challenge in complex ecosystems such as coral reefs, where reserve effects are likely to be obscured by high levels of environmental variability. The goal of this study was to investigate the potential interference of small-scale habitat structure on the efficiency of reserves. The spatial distribution of widely harvested macroinvertebrates was surveyed in a large set of protected vs. unprotected stations from eleven reefs located in New Caledonia. Abundance, density and individual size data were collected along random, small-scale (20×1 m) transects. Fine habitat typology was derived with a quantitative photographic method using 17 local habitat variables. Marine reserves substantially augmented the local density, size structure and biomass of the target species. Density of Trochus niloticus and Tridacna maxima doubled globally inside the reserve network; average size was greater by 10 to 20% for T. niloticus. We demonstrated that the apparent success of protection could be obscured by marked variations in population structure occurring over short distances, resulting from small-scale heterogeneity in the reef habitat. The efficiency of reserves appeared to be modulated by the availability of suitable habitats at the decimetric scale (“microhabitats”) for the considered sessile/low-mobile macroinvertebrate species. Incorporating microhabitat distribution could significantly enhance the efficiency of habitat surrogacy, a valuable approach in the case of conservation targets focusing on endangered or emblematic macroinvertebrate or relatively sedentary fish species. PMID:23554965
Multiscale modeling methods in biomechanics.
Bhattacharya, Pinaki; Viceconti, Marco
2017-05-01
More and more frequently, computational biomechanics deals with problems where the portion of physical reality to be modeled spans over such a large range of spatial and temporal dimensions that it is impossible to represent it as a single space-time continuum. We are forced to consider multiple space-time continua, each representing the phenomenon of interest at a characteristic space-time scale. Multiscale models describe a complex process across multiple scales, and account for how quantities transform as we move from one scale to another. This review offers a set of definitions for this emerging field, and provides a brief summary of the most recent developments on multiscale modeling in biomechanics. Of all possible perspectives, we chose that of the modeling intent, which vastly affects the nature and the structure of each research activity. To this purpose we organized all papers reviewed into three categories: 'causal confirmation,' where multiscale models are used as materializations of the causation theories; 'predictive accuracy,' where multiscale modeling is aimed at improving the predictive accuracy; and 'determination of effect,' where multiscale modeling is used to model how a change at one scale manifests as an effect at another, radically different space-time scale. Consistent with how the volume of computational biomechanics research is distributed across application targets, we extensively reviewed papers targeting the musculoskeletal and the cardiovascular systems, and covered only a few exemplary papers targeting other organ systems. The review shows a research subdomain still in its infancy, where causal confirmation papers remain the most common. WIREs Syst Biol Med 2017, 9:e1375. doi: 10.1002/wsbm.1375 For further resources related to this article, please visit the WIREs website. © 2017 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.
Will nanotechnology influence targeted cancer therapy?
Grimm, Jan; Scheinberg, David A.
2011-01-01
The rapid development of techniques that enable synthesis (and manipulation) of matter on the nanometer scale, as well as the development of new nano-materials, will play a large role in disease diagnosis and treatment, specifically in targeted cancer therapy. Targeted nanocarriers are an intriguing means to selectively deliver high concentrations of cytotoxic agents or imaging labels directly to the cancer site. Often solubility issues and an unfavorable biodistribution can result in a suboptimal response of novel agents even though they are very potent. New nanoparticulate formulations allow simultaneous imaging and therapy (“theranostics”), which can provide a realistic means for the clinical implementation of such otherwise suboptimal formulations. In this review we will not attempt to provide a complete overview of the rapidly enlarging field of nanotechnology in cancer; rather, we will present properties specific to nanoparticles, and examples of their uses, which demonstrate their importance for targeted cancer therapy. PMID:21356476
Paguio, R. R.; Smith, G. E.; Taylor, J. L.; ...
2017-12-04
Z-Beamlet (ZBL) experiments conducted at the PECOS test facility at Sandia National Laboratories (SNL) investigated the nonlinear processes in laser-plasma interaction (laser-plasma instabilities, LPI) that complicate the deposition of laser energy through enhanced absorption, backscatter, filamentation and beam-spray, and that can occur in large-scale laser-heated gas cell targets. These targets and experiments were designed to provide better insight into the physics of the laser preheat stage of the Magnetized Liner Inertial Fusion (MagLIF) scheme being tested on the SNL Z-machine. The experiments aim to understand the tradeoffs between laser spot size, laser pulse shape, laser entrance hole (LEH) window thickness, and fuel density for laser preheat. Gas cell target design evolution and fabrication adaptations to accommodate the evolving experiment and scientific requirements are also described in this paper.
Lead (Pb) Hohlraum: Target for Inertial Fusion Energy
Ross, J. S.; Amendt, P.; Atherton, L. J.; Dunne, M.; Glenzer, S. H.; Lindl, J. D.; Meeker, D.; Moses, E. I.; Nikroo, A.; Wallace, R.
2013-01-01
Recent progress towards demonstrating inertial confinement fusion (ICF) ignition at the National Ignition Facility (NIF) has sparked wide interest in Laser Inertial Fusion Energy (LIFE) for carbon-free large-scale power generation. A LIFE-based fleet of power plants promises clean energy generation with no greenhouse gas emissions and a virtually limitless, widely available thermonuclear fuel source. For the LIFE concept to be viable, target costs must be minimized while the target material efficiency or x-ray albedo is optimized. Current ICF targets on the NIF utilize a gold or depleted uranium cylindrical radiation cavity (hohlraum) with a plastic capsule at the center that contains the deuterium and tritium fuel. Here we show a direct comparison of gold and lead hohlraums in efficiently ablating deuterium-filled plastic capsules with soft x rays. We report on lead hohlraum performance that is indistinguishable from gold, yet costing only a small fraction. PMID:23486285
Large-size porous ZnO flakes with superior gas-sensing performance
NASA Astrophysics Data System (ADS)
Wen, Wei; Wu, Jin-Ming; Wang, Yu-De
2012-06-01
A simple top-down route is developed to fabricate large-size porous ZnO flakes via solution combustion synthesis followed by a subsequent calcination in air; the route is template-free and can easily be scaled up to an industrial level. The resulting porous ZnO flakes, which are tens to hundreds of micrometers across and tens of nanometers in thickness, exhibit a high response for detecting acetone and ethanol, because the unique two-dimensional architecture effectively shortens the gas diffusion distance and provides highly accessible open channels and active surfaces for the target gas.
Envisaging bacteria as phage targets
Abedon, Stephen T.
2011-01-01
It can be difficult to appreciate just how small bacteria and phages are or how large, in comparison, the volumes that they occupy. A single milliliter, for example, can represent to a phage what would be, with proper scaling, an “ocean” to you and me. Here I illustrate, using more easily visualized macroscopic examples, the difficulties that a phage, as a randomly diffusing particle, can have in locating bacteria to infect. I conclude by restating the truism that the rate of phage adsorption to a given target bacterium is a function of phage density, that is, titer, in combination with the degree of bacterial susceptibility to adsorption by an encountering phage. PMID:23616932
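The closing truism can be made concrete with a back-of-the-envelope calculation. Assuming a commonly cited order of magnitude for the phage adsorption-rate constant (k on the order of 2.5e-9 ml/min per phage; an assumed value, not one given in the essay), the mean waiting time for a given bacterium to be found scales inversely with titer:

```python
# Back-of-the-envelope waiting times for a single bacterium to be adsorbed.
k = 2.5e-9                         # ml/min per phage (assumed magnitude)
for titer in (1e4, 1e6, 1e8):      # phage particles per ml
    t_mean = 1.0 / (k * titer)     # mean wait in minutes
    print(f"titer {titer:.0e}/ml -> mean wait {t_mean / 60:.1f} h")
```

At low titers the expected wait stretches to weeks, which is precisely the "ocean" problem the essay illustrates.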
Detergent Lysis of Animal Tissues for Immunoprecipitation.
DeCaprio, James; Kohl, Thomas O
2017-12-01
This protocol details protein extraction from mouse tissues for immunoprecipitation purposes and has been applied to large-scale immunoprecipitations of target proteins from various tissues for the identification of associated proteins by mass spectrometry. The key factors in performing a successful immunoprecipitation relate directly to the abundance of the target protein in a particular tissue type and to whether the embryonic, newborn, or adult mouse-derived tissues contain fibrous and other insoluble material. Several tissue types, including lung and liver as well as carcinomas, contain significant amounts of fibrous tissue that can interfere with an immunoprecipitation. © 2017 Cold Spring Harbor Laboratory Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashkin, Hannah J.; Bell, Eric B.; Choi, Yejin
People around the globe respond to major real world events through social media. To study targeted public sentiments across many languages and geographic locations, we introduce multilingual connotation frames: an extension of the English connotation frames of Rashkin et al. (2016) with 10 additional European languages, focusing on the implied sentiments among event participants engaged in a frame. As a case study, we present a large-scale analysis of targeted public sentiments using 1.2 million multilingual connotation frames extracted from Twitter. We rely on connotation frames to build models that forecast country-specific connotation dynamics – perspective change over time towards salient entities and events. Our results demonstrate that connotation dynamics can be accurately predicted up to half a week in advance.
Arrays of probes for positional sequencing by hybridization
Cantor, Charles R [Boston, MA; Przetakiewicz, Marek [East Boston, MA; Smith, Cassandra L [Boston, MA; Sano, Takeshi [Waltham, MA
2008-01-15
This invention is directed to methods and reagents useful for sequencing nucleic acid targets utilizing sequencing by hybridization technology comprising probes, arrays of probes and methods whereby sequence information is obtained rapidly and efficiently in discrete packages. That information can be used for the detection, identification, purification and complete or partial sequencing of a particular target nucleic acid. When coupled with a ligation step, these methods can be performed under a single set of hybridization conditions. The invention also relates to the replication of probe arrays and methods for making and replicating arrays of probes which are useful for the large scale manufacture of diagnostic aids used to screen biological samples for specific target sequences. Arrays created using PCR technology may comprise probes with 5'- and/or 3'-overhangs.
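As a rough illustration of the matching step at the heart of sequencing by hybridization, the sketch below slides a window along a target strand and reports positions where the window is the reverse complement of an immobilized probe. The probe set and target are toy examples; real assays involve hybridization thermodynamics, overhangs and ligation rather than exact string matching:

```python
# Toy positional matching of a target against an immobilized probe array.

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def hybridizing_probes(target, probes, k=8):
    """Return (position, probe) pairs where a k-mer of the target is the
    reverse complement of a probe in the array."""
    index = set(probes)
    hits = []
    for pos in range(len(target) - k + 1):
        candidate = reverse_complement(target[pos:pos + k])
        if candidate in index:
            hits.append((pos, candidate))
    return hits

probes = ["TTACGGCA", "GGCATTAC"]
target = "ATGCCGTAACATG"          # contains the complement of TTACGGCA
print(hybridizing_probes(target, probes))   # [(1, 'TTACGGCA')]
```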
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
Large eddy simulation of turbine wakes using higher-order methods
NASA Astrophysics Data System (ADS)
Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer
2017-11-01
Large eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation over a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy-viscosity sub-filter-scale models such as the classic or dynamic Smagorinsky models. The results from the simulations are compared to experimental data for a diameter-based Reynolds number ReD = 1,000,000, and both the time-averaged streamwise velocity and the turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions are made for the amount of numerical dissipation required by our approach for the particular case of horizontal-axis turbine wakes.
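The idea of confining extra dissipation to the smallest resolved scales is easiest to see in Fourier space, where the viscous term of a periodic 1D problem is -νk²û. The sketch below is a conceptual stand-in, not the paper's higher-order finite-difference implementation; the cutoff fraction and boost factor are invented parameters:

```python
# Conceptual targeted dissipation: boost the effective viscosity only
# beyond a cutoff wavenumber, mimicking a sub-filter-scale model.
import numpy as np

def viscous_term(u, nu=1e-4, boost=50.0, cutoff_frac=0.7, L=2 * np.pi):
    n = u.size
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers
    kc = cutoff_frac * np.abs(k).max()
    nu_eff = nu * np.where(np.abs(k) > kc, boost, 1.0)  # extra dissipation
    return np.real(np.fft.ifft(-nu_eff * k ** 2 * np.fft.fft(u)))

u = np.sin(np.arange(256) * 2 * np.pi / 256)       # large-scale mode:
print(viscous_term(u)[:4])                         # untouched by the boost
```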
A large scale membrane-binding protein conformational change that initiates at small length scales
NASA Astrophysics Data System (ADS)
Grandpre, Trevor; Andorf, Matthew; Chakravarthy, Srinivas; Lamb, Robert; Poor, Taylor; Landahl, Eric
2013-03-01
The fusion (F) protein of parainfluenza virus 5 (PIV5) is a membrane-bound, homotrimeric glycoprotein located on the surface of PIV5 viral envelopes. Upon being triggered by the receptor-binding protein (HN), F undergoes a greater than 100 Å ATP-independent refolding event. This refolding event results in the insertion of a hydrophobic fusion peptide into the membrane of the target cell, followed by desolvation and a subsequent fusion event as the two membranes are brought together. Isothermal calorimetry and hydrophobic dye incorporation experiments indicate that the soluble construct of the F protein undergoes a conformational rearrangement at around 55 °C. We present the results of an initial time-resolved small-angle X-ray scattering (TR-SAXS) study of this large-scale, entropically driven conformational change using a temperature jump. Although the measured radius of gyration of this protein changes on a 110-second timescale, we find that the X-ray scattering intensity at higher angles (corresponding to smaller length scales in the protein) changes nearly an order of magnitude faster. We believe this may be a signature of entropically driven conformational change.
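For context, the radius of gyration in a SAXS experiment is typically extracted with a Guinier fit, ln I(q) = ln I(0) - (Rg²/3)q², valid for q·Rg ≲ 1.3. A minimal sketch on synthetic data (the Rg value and noise level are invented, not the paper's measurements):

```python
# Guinier analysis: Rg from the slope of ln I versus q^2 at low q.
import numpy as np

rg_true = 45.0                            # angstroms (assumed)
q = np.linspace(0.005, 0.03, 50)          # 1/angstrom, within q*Rg < 1.3
I = 1e5 * np.exp(-(q * rg_true) ** 2 / 3) \
    * (1 + 0.01 * np.random.randn(q.size))  # 1% multiplicative noise

slope, intercept = np.polyfit(q ** 2, np.log(I), 1)
rg_est = np.sqrt(-3 * slope)
print(f"Rg ~ {rg_est:.1f} angstroms")     # ~45
```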
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- or large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
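Protein-level FDR control of the kind ProteinInferencer performs is commonly estimated with a target-decoy strategy: rank proteins by score and find the lowest score at which #decoys/#targets stays at or below the desired rate. A minimal sketch of that idea with invented scores (the tool's actual scoring model is more elaborate):

```python
# Target-decoy protein FDR: estimated FDR at a cutoff = decoys / targets.

def protein_fdr_threshold(proteins, fdr=0.01):
    """proteins: list of (score, is_decoy). Returns the lowest score cutoff
    at which the running decoy/target ratio stays within the FDR limit."""
    ranked = sorted(proteins, key=lambda p: p[0], reverse=True)
    targets = decoys = 0
    cutoff = None
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= fdr:
            cutoff = score
    return cutoff

hits = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (5.0, True)]
print(protein_fdr_threshold(hits, fdr=0.05))   # 8.7
```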
2013-01-01
Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability of establishing bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (<0.1 log units RMSE difference and <0.1 difference in MCC), while errors for individual proteins were in some cases found to be larger than those resulting from descriptor set differences (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to small improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, underlining that choosing an appropriate descriptor set is of fundamental importance for bioactivity modeling, from both the ligand and the protein side. PMID:24059743
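The two figures of merit used throughout this benchmark are straightforward to compute; a small sketch with invented predictions (regression errors in log units for RMSE, binary class labels for MCC):

```python
# RMSE for regression benchmarks, MCC for classification benchmarks.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmse([6.1, 7.2], [6.0, 7.5]))              # ~0.22 log units
print(matthews_corrcoef([1, 0, 1, 1], [1, 0, 0, 1]))  # ~0.58
```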
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunc, Vlastimil; Duty, Chad E.; Lindahl, John M.
2017-08-01
In this work, ORNL and Techmer investigated and screened different high temperature thermoplastic reinforced materials to fabricate composite molds for autoclave processes using Additive Manufacturing (AM) techniques. This project directly led to the development and commercial release of two printable, high temperature composite materials available through Techmer PM. These new materials are targeted for high temperature tooling made via large scale additive manufacturing.
Links between behavioral factors and inflammation
O’Connor, Mary-Frances; Irwin, Michael R.
2010-01-01
This review focuses on those biobehavioral factors that show robust associations with markers of inflammation, including discussion of the following variables: diet, smoking, coffee, alcohol, exercise and sleep disruption. Each of these variables has been assessed in large-scale epidemiological studies, and many in clinical and experimental studies as well. Treatment strategies that target biobehavioral factors have the potential to complement and add to the benefit of anti-inflammatory medicines. PMID:20130566
Zhu, Yun Guang; Jia, Chuankun; Yang, Jing; Pan, Feng; Huang, Qizhao; Wang, Qing
2015-06-11
A redox flow lithium-oxygen battery (RFLOB) using soluble redox catalysts was demonstrated with good performance for large-scale energy storage. The new device enables the reversible formation and decomposition of Li2O2 via redox targeting reactions in a gas diffusion tank, spatially separated from the electrode, which obviates the passivation and pore clogging of the cathode.
Modelling short pulse, high intensity laser plasma interactions
NASA Astrophysics Data System (ADS)
Evans, R. G.
2006-06-01
Modelling the interaction of ultra-intense laser pulses with solid targets is made difficult by the large range of length and time scales involved in the transport of relativistic electrons. An implicit hybrid PIC-fluid model using the commercial code LSP (marketed by MRC, Albuquerque, New Mexico, USA) reveals a variety of complex phenomena which seem to be borne out in experiments and some existing theories.
Functional annotation of HOT regions in the human genome: implications for human disease and cancer
Li, Hao; Chen, Hebing; Liu, Feng; Ren, Chao; Wang, Shengqi; Bo, Xiaochen; Shu, Wenjie
2015-01-01
Advances in genome-wide association studies (GWAS) and large-scale sequencing studies have resulted in an impressive and growing list of disease- and trait-associated genetic variants. Most studies have emphasised the discovery of genetic variation in coding sequences, however, the noncoding regulatory effects responsible for human disease and cancer biology have been substantially understudied. To better characterise the cis-regulatory effects of noncoding variation, we performed a comprehensive analysis of the genetic variants in HOT (high-occupancy target) regions, which are considered to be one of the most intriguing findings of recent large-scale sequencing studies. We observed that GWAS variants that map to HOT regions undergo a substantial net decrease and illustrate development-specific localisation during haematopoiesis. Additionally, genetic risk variants are disproportionally enriched in HOT regions compared with LOT (low-occupancy target) regions in both disease-relevant and cancer cells. Importantly, this enrichment is biased toward disease- or cancer-specific cell types. Furthermore, we observed that cancer cells generally acquire cancer-specific HOT regions at oncogenes through diverse mechanisms of cancer pathogenesis. Collectively, our findings demonstrate the key roles of HOT regions in human disease and cancer and represent a critical step toward further understanding disease biology, diagnosis, and therapy. PMID:26113264
TALEs from a Spring – Superelasticity of TAL Effector Protein Structures
Flechsig, Holger
2014-01-01
Transcription activator-like effectors (TALEs) are DNA-related proteins that recognise and bind specific target sequences to manipulate gene expression. Recently determined crystal structures show that their common architecture reveals a superhelical overall structure that may undergo drastic conformational changes. To establish a link between structure and dynamics in TALE proteins we have employed coarse-grained elastic-network modelling of currently available structural data and implemented a force-probe setup that allowed us to investigate their mechanical behaviour in computer experiments. Based on the measured force-extension curves we conclude that TALEs exhibit superelastic dynamical properties allowing for large-scale global conformational changes along their helical axis, which represents the soft direction in such proteins. For moderate external forcing the TALE models behave like linear springs, obeying Hooke's law, and the investigated structures can be characterised and compared by a corresponding spring constant. We show that conformational flexibility underlying the large-scale motions is not homogeneously distributed over the TALE structure, but instead soft spot residues around which strain is accumulated and which turn out to represent key agents in the transmission of conformational motions are identified. They correspond to the RVD loop residues that have been experimentally determined to play an eminent role in the binding process of target DNA. PMID:25313859
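In the Hookean regime the abstract describes, the spring constant is simply the slope of the force-extension curve. A toy sketch of that extraction on synthetic data (the stiffness value and units are invented, not taken from the paper's elastic-network models):

```python
# Fit an effective spring constant from a force-extension curve (F = k x).
import numpy as np

k_true = 0.05                           # nN/nm, assumed stiffness
ext = np.linspace(0.0, 8.0, 40)         # nm, moderate forcing only
force = k_true * ext + 0.001 * np.random.randn(ext.size)  # noisy "probe"

k_est, _ = np.polyfit(ext, force, 1)    # linear fit: slope is k
print(f"spring constant ~ {k_est:.3f} nN/nm")
```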
Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images
NASA Astrophysics Data System (ADS)
Yao, Shoukui; Qin, Xiaojuan
2018-02-01
Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. How to recognize ships with such fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMMs-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
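The two-step pipeline (Hu-moment features, then per-class GMMs) maps directly onto standard library calls. A hedged sketch with OpenCV and scikit-learn, where the training data, class labels and component count are placeholders rather than the paper's setup:

```python
# Sketch: Hu-moment features + per-class Gaussian mixture models.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def hu_features(image):
    """7 log-scaled Hu moments of a single-channel ship chip."""
    m = cv2.moments(image.astype(np.float32))
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train(chips_by_class, n_components=2):
    """Fit one GMM per ship class on its Hu-moment features."""
    return {label: GaussianMixture(n_components).fit(
                np.array([hu_features(c) for c in chips]))
            for label, chips in chips_by_class.items()}

def classify(models, chip):
    """Assign the chip to the class whose GMM gives the highest likelihood."""
    x = hu_features(chip).reshape(1, -1)
    return max(models, key=lambda lbl: models[lbl].score(x))

# Toy usage with random "chips" standing in for real infrared ship images.
rng = np.random.default_rng(0)
data = {"cargo": [rng.random((32, 32)) for _ in range(20)],
        "tanker": [rng.random((32, 32)) for _ in range(20)]}
models = train(data)
print(classify(models, data["cargo"][0]))
```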
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains and (II) fine-tuning the pre-learned base knowledge towards tasks with a considerably smaller data scale are extremely important. Many existing transfer learning techniques are supervised approaches, among which deep learning has demonstrated the power of learning domain-transferrable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding labels can be very limited, and an unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of the learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
Konijnendijk, Nellie; Shikano, Takahito; Daneels, Dorien; Volckaert, Filip A M; Raeymaekers, Joost A M
2015-09-01
Local adaptation is often obvious when gene flow is impeded, such as observed at large spatial scales and across strong ecological contrasts. However, it becomes less certain at small scales such as between adjacent populations or across weak ecological contrasts, when gene flow is strong. While studies on genomic adaptation tend to focus on the former, less is known about the genomic targets of natural selection in the latter situation. In this study, we investigate genomic adaptation in populations of the three-spined stickleback Gasterosteus aculeatus L. across a small-scale ecological transition with salinities ranging from brackish to fresh. Adaptation to salinity has been repeatedly demonstrated in this species. A genome scan based on 87 microsatellite markers revealed only a few signatures of selection, likely owing to the constraints that homogenizing gene flow puts on adaptive divergence. However, the detected loci appear repeatedly as targets of selection in similar studies of genomic adaptation in the three-spined stickleback. We conclude that the signature of genomic selection in the face of strong gene flow is weak, yet detectable. We argue that the range of studies of genomic divergence should be extended to include more systems characterized by limited geographical and ecological isolation, which is often a realistic setting in nature.
High-energy density nonaqueous all redox flow lithium battery enabled with a polymeric membrane
Jia, Chuankun; Pan, Feng; Zhu, Yun Guang; Huang, Qizhao; Lu, Li; Wang, Qing
2015-01-01
Redox flow batteries (RFBs) are considered one of the most promising large-scale energy storage technologies. However, conventional RFBs suffer from low energy density due to the low solubility of the active materials in electrolyte. On the basis of the redox targeting reactions of battery materials, the redox flow lithium battery (RFLB) demonstrated in this report presents a disruptive approach to drastically enhancing the energy density of flow batteries. With LiFePO4 and TiO2 as the cathodic and anodic Li storage materials, respectively, the tank energy density of RFLB could reach ~500 watt-hours per liter (50% porosity), which is 10 times higher than that of a vanadium redox flow battery. The cell exhibits good electrochemical performance under a prolonged cycling test. Our prototype RFLB full cell paves the way toward the development of a new generation of flow batteries for large-scale energy storage. PMID:26702440
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar offers high resolution, strong anti-interference ability, and good concealment, so it has become a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements; the estimation shows that the echo power is greater than the detector's response power. Second, a large-scale LRCS measurement system is designed according to this demonstration and estimation. The system mainly consists of laser shaping, a beam emitting device, a laser echo receiving device, and an integrated control device. Finally, the scattering cross section of a target is simulated and tested with the designed measurement system. The simulation results essentially agree with the test results, confirming the correctness of the system.
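The echo-power estimation step can be illustrated with a back-of-the-envelope monostatic link budget. The formula below (beam spreading, a 4π-convention cross section, aperture capture) and all parameter values are assumptions for illustration, not the authors' actual design figures:

```python
# Rough monostatic laser-radar echo-power estimate.
import math

def echo_power(P_t, theta_t, R, sigma, A_r, T_atm=1.0, eta=1.0):
    """
    P_t     : transmitted power [W]
    theta_t : full transmit beam divergence [rad]
    R       : range to target [m]
    sigma   : laser radar cross section [m^2] (4*pi convention)
    A_r     : receiver aperture area [m^2]
    T_atm   : one-way atmospheric transmission
    eta     : total system efficiency
    """
    # power density at the target: beam spread over a spot of radius theta*R/2
    S = 4.0 * P_t * T_atm / (math.pi * theta_t**2 * R**2)
    # scattered intensity sigma/(4*pi) of that, collected over A_r at range R
    return eta * T_atm * S * sigma / (4.0 * math.pi) * A_r / R**2

# Example: 10 W, 1 mrad beam, 1 km range, 1 m^2 LRCS, 10 cm aperture radius
P_r = echo_power(10.0, 1e-3, 1e3, 1.0, math.pi * 0.1**2, T_atm=0.9, eta=0.5)
print(f"estimated echo power: {P_r:.3e} W")
```

The estimate would then be compared against the detector's noise-equivalent power to confirm detectability, as in the demonstration step above.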
From Wake Steering to Flow Control
Fleming, Paul A.; Annoni, Jennifer; Churchfield, Matthew J.; ...
2017-11-22
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Tools for phospho- and glycoproteomics of plasma membranes.
Wiśniewski, Jacek R
2011-07-01
Analysis of plasma membrane proteins and their posttranslational modifications is considered important for the identification of disease markers and targets for drug treatment. Due to their insolubility in water, studying plasma membrane proteins using mass spectrometry was difficult for a long time. Recent technological developments in sample preparation, together with important improvements in mass spectrometric analysis, have facilitated the analysis of these proteins and their posttranslational modifications. Large-scale proteomic analyses now allow the identification of thousands of membrane proteins from minute amounts of sample. Optimized protocols for affinity enrichment of phosphorylated and glycosylated peptides have set new dimensions in the depth of characterization of these posttranslational modifications of plasma membrane proteins. Here, I summarize recent advances in proteomic technology for the characterization of cell surface proteins and their modifications. The focus is on approaches allowing large-scale mapping rather than analytical methods suitable for studying individual proteins or non-complex mixtures.
Listening to the Deep: live monitoring of ocean noise and cetacean acoustic signals.
André, M; van der Schaar, M; Zaugg, S; Houégnigan, L; Sánchez, A M; Castell, J V
2011-01-01
The development and broad use of passive acoustic monitoring (PAM) techniques can help assess the large-scale influence of artificial noise on marine organisms and ecosystems, and deep-sea observatories have the potential to play a key role in understanding these recent acoustic changes. LIDO (Listening to the Deep Ocean Environment) is an international project allowing the real-time long-term monitoring of marine ambient noise as well as marine mammal sounds at cabled and standalone observatories. Here, we present the overall development of the project and the use of PAM techniques to provide the scientific community with real-time data at large spatial and temporal scales. Special attention is given to the extraction and identification of high-frequency cetacean echolocation signals, given the relevance of detecting target species, e.g. beaked whales, in mitigation processes, e.g. during military exercises.
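As a rough illustration of the click-extraction step, a band-limited energy detector of the kind commonly used for odontocete echolocation clicks can be sketched as follows; the band limits and threshold factor are illustrative assumptions, not LIDO's actual algorithms:

```python
# Simple high-frequency click detector for a PAM stream.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def detect_clicks(x, fs, band=(20e3, 60e3), k=8.0):
    """Return sample indices where the band-limited envelope exceeds
    k times its median (a crude robust estimate of the noise floor)."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    y = sosfiltfilt(sos, x)              # isolate the click band
    env = np.abs(hilbert(y))             # instantaneous amplitude envelope
    return np.flatnonzero(env > k * np.median(env))

# Usage: fs = 192000; click_samples = detect_clicks(samples, fs)
```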
Control of fluxes in metabolic networks
Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu
2016-01-01
Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. PMID:27197218
On the relationship between human search strategies, conspicuity, and search performance
NASA Astrophysics Data System (ADS)
Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander
2005-05-01
We determined the relationship between search performance with a limited field of view (FOV) and several scanning- and scene parameters in human observer experiments. The observers (38 trained army scouts) searched through a large search sector for a target (a camouflaged person) on a heath. From trial to trial the target appeared at a different location. With a joystick the observers scanned through a panoramic image (displayed on a PC-monitor) while the scan path was registered. Four conditions were run differing in sensor type (visual or thermal infrared) and window size (large or small). In conditions with a small window size the zoom option could be used. Detection performance was highly dependent on zoom factor and deteriorated when scan speed increased beyond a threshold value. Moreover, the distribution of scan speeds scales with the threshold speed. This indicates that the observers are aware of their limitations and choose a (near) optimal search strategy. We found no correlation between the fraction of detected targets and overall search time for the individual observers, indicating that both are independent measures of individual search performance. Search performance (fraction detected, total search time, time in view for detection) was found to be strongly related to target conspicuity. Moreover, we found the same relationship between search performance and conspicuity for visual and thermal targets. This indicates that search performance can be predicted directly by conspicuity regardless of the sensor type.
Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu
2017-01-23
Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying a keystone transform over the whole received echo; then, the relationships among the unknown high-order RCM, the nonlinear spatial variances of the Doppler parameters, and the speed of the mover are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
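The range-walk correction step can be illustrated with a minimal keystone transform sketch; the interpolation scheme is an illustrative choice, and the paper's full NLCS processing chain is not reproduced here:

```python
# Keystone transform: per range-frequency bin, rescale the slow-time axis by
# fc/(fc + f) so the linear range walk decouples from the unknown velocity.
import numpy as np

def keystone_transform(echo_rf, fc, f_r, t_m):
    """
    echo_rf : 2D complex array, range-frequency bins x slow-time samples
              (echo already FFT'd along fast time)
    fc      : carrier frequency [Hz]
    f_r     : range-frequency axis [Hz], one entry per row
    t_m     : slow-time axis [s], one entry per column
    """
    out = np.empty_like(echo_rf)
    for i, f in enumerate(f_r):
        t_scaled = fc / (fc + f) * t_m   # virtual slow-time samples
        out[i] = (np.interp(t_scaled, t_m, echo_rf[i].real) +
                  1j * np.interp(t_scaled, t_m, echo_rf[i].imag))
    return out
```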
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
NASA Astrophysics Data System (ADS)
Wei, Gang; Zhang, Wei
2013-06-01
The deformation and fracture behavior of a steel projectile impacting a ceramic target is an interesting research topic. The deformation and failure behavior of projectile and target under normal impact was investigated experimentally at different velocities. Lab-scale ballistic tests of AD95 ceramic targets of 20 mm thickness against 38CrSi steel projectiles of 7.62 mm diameter and two different hardnesses were conducted at velocities ranging from 100 to 1000 m/s. Experimental results show that, with increasing impact velocity, the deformation and fracture modes of the soft projectiles were mushrooming, shear cracking, petalling, and fragmentation (with large fragments in small numbers), while the hard projectiles showed three modes: mushrooming, shear cracking, and fragmentation (with small fragments in large numbers). All projectiles rebounded after impact, but the target failure modes changed with velocity. At low velocity, only radial cracks were found; circumferential cracks appeared as the velocity increased; a ceramic cone occurred once the velocity reached 400 m/s and above, manifesting in two forms: front surface intact at lower velocity and perforated at higher velocity. The higher the velocity, the smaller and more uniformly distributed the fragments. The difference in ceramic target damage after impact by the two kinds of projectiles of different hardness at the same velocity is not obvious. National Natural Science Foundation of China (No. 11072072).
Willett, Francis R.; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Pandarinath, Chethan; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Hochberg, Leigh R.; Kirsch, Robert F.; Ajiboye, A. Bolu
2017-01-01
Objective Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts’ law: MT = a + b log2(D/R) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts’ law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Approach Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts’ law. Main Results We found that movement times were better described by the equation MT = a + bD + cR^-2, which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user’s motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. Significance The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts’ law-like relationship to iBCI movements may require nonlinear decoding strategies. PMID:28177925
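Both movement-time models are linear in their parameters, so each can be fit by ordinary least squares; a minimal sketch, assuming the data arrays hold one (D, R, MT) triple per trial:

```python
# Least-squares fits of the two movement-time models discussed above:
# MT = a + b*log2(D/R)  versus  MT = a + b*D + c*R^-2.
import numpy as np

def fit_fitts(D, R, MT):
    """Classic Fitts' law: MT = a + b*log2(D/R). Returns [a, b]."""
    A = np.column_stack([np.ones_like(D), np.log2(D / R)])
    coef, *_ = np.linalg.lstsq(A, MT, rcond=None)
    return coef

def fit_distance_radius(D, R, MT):
    """Alternative model found above for iBCI data: MT = a + b*D + c*R^-2.
    Returns [a, b, c]."""
    A = np.column_stack([np.ones_like(D), D, R**-2.0])
    coef, *_ = np.linalg.lstsq(A, MT, rcond=None)
    return coef
```

Comparing residuals of the two fits on held-out trials would reproduce the model comparison the study reports.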
Forrest, John K; Lansky, Alexandra J; Meller, Stephanie M; Hou, Liming; Sood, Poornima; Applegate, Robert J; Wang, John C; Skelding, Kimberly A; Shah, Aakar; Kereiakes, Dean J; Sudhir, Krishnankutty; Cristea, Ecaterina; Yaqub, Manejeh; Stone, Gregg W
2013-06-01
The aim of this study was to determine whether patients from the Clinical Evaluation of the XIENCE V Everolimus Eluting Coronary Stent System in the Treatment of Patients With de Novo Native Coronary Artery Lesions (SPIRIT) IV trial who underwent percutaneous coronary intervention, who had target lesions with jailed side branches, had improved clinical outcomes when treated with the XIENCE V versus Taxus Express(2) drug-eluting stent. In the SPIRIT III randomized trial, patients with target lesions with jailed side branches after XIENCE V compared with Taxus Express(2) implantation had lower 2-year rates of major adverse cardiac events. The SPIRIT IV trial represents a larger more diverse patient population compared with SPIRIT III. In the large-scale, prospective, multicenter, randomized SPIRIT IV trial, 3,687 patients who underwent coronary stenting with up to 3 de novo native coronary artery lesions were randomized 2:1 to receive XIENCE V versus Taxus Express(2) stents. Two-year clinical outcomes of patients with or without jailed side branches after stenting were compared. A jailed side branch was defined as any side branch >1.0 mm in diameter within the target segment being stented, excluding bifurcations deemed to require treatment. Of the 3,687 patients in SPIRIT IV, a total of 1,426 had side branches that were jailed during angioplasty of the target lesion. Patients with jailed side branches after XIENCE V compared with Taxus Express(2) implantation had significantly lower 2-year rates of target lesion failure (6.5% vs 11.9%, p = 0.001), major adverse cardiac events (6.6% vs 12.2%, p = 0.0008), ischemia-driven target vessel revascularization (4.1% vs 7.9%, p = 0.004), and stent thrombosis (0.6% vs 2.8%, p = 0.001). In conclusion, patients with jailed side branches after stenting with XIENCE V compared to Taxus Express(2) devices had superior clinical outcomes at 2 years in the large-scale randomized SPIRIT IV trial.
NASA Astrophysics Data System (ADS)
Micijevic, E.; Haque, M. O.
2015-12-01
In satellite remote sensing, Landsat sensors are recognized for providing well-calibrated satellite images for over four decades. This image data set provides an important contribution to detection and temporal analysis of land changes. Landsat 8 (L8), the latest satellite of the Landsat series, was designed to continue its legacy as well as to embrace advanced technology and satisfy the demand of the broader scientific community. Sentinel 2A (S2A), a European satellite launched in June 2015, is designed to provide data continuity with Landsat- and SPOT-like satellites. The S2A MSI sensor is equipped with spectral bands similar to L8 OLI and includes some additional ones. Compared to L8 OLI, the green and near-infrared MSI bands have narrower bandwidths, whereas coastal-aerosol (CA) and cirrus have larger bandwidths. The blue and red MSI bands cover higher wavelengths than the matching OLI bands. Although the spectral band differences are not large, their combination with the spectral signature of a studied target can largely affect the Top Of Atmosphere (TOA) reflectance seen by the sensors. This study investigates the effect of spectral band differences between the S2A MSI and L8 OLI sensors. The differences in spectral bands between sensors can be assessed by calculating Spectral Band Adjustment Factors (SBAF). For radiometric calibration purposes, the SBAFs for the calibration test site are used to bring the two sensors to the same radiometric scale. However, the SBAFs are target dependent and different sensors calibrated to the same radiometric scale will (correctly!) measure different reflectance for the same target. Thus, when multiple sensors are used to study a given target, the sensor responses need to be adjusted using SBAFs specific to that target. Comparison of the SBAFs for S2A MSI and L8 OLI based on various vegetation spectral profiles revealed variations in radiometric responses as high as 15%. Depending on the target under study, these differences could be even higher.
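A minimal sketch of an SBAF computation under the usual definition (the ratio of RSR-weighted band-average reflectances of the target spectrum); the wavelength grids and profiles are placeholders, not the actual MSI or OLI response curves:

```python
# SBAF: band-average a target reflectance spectrum under each sensor's
# relative spectral response (RSR), then take the ratio.
import numpy as np

def band_average(wl, refl, rsr_wl, rsr):
    """RSR-weighted mean reflectance over one band."""
    r = np.interp(rsr_wl, wl, refl)      # resample spectrum onto the RSR grid
    return np.trapz(r * rsr, rsr_wl) / np.trapz(rsr, rsr_wl)

def sbaf(wl, refl, rsr_wl_a, rsr_a, rsr_wl_b, rsr_b):
    """Factor converting sensor B's measurement into sensor A's band space,
    valid only for this particular target spectrum."""
    return (band_average(wl, refl, rsr_wl_a, rsr_a) /
            band_average(wl, refl, rsr_wl_b, rsr_b))

# rho_A ~= sbaf(...) * rho_B for the given target spectrum
```

Because the factor depends on the target spectrum, recomputing it per vegetation profile reproduces the target dependence emphasized above.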
Simultaneous non-contiguous deletions using large synthetic DNA and site-specific recombinases
Krishnakumar, Radha; Grose, Carissa; Haft, Daniel H.; Zaveri, Jayshree; Alperovich, Nina; Gibson, Daniel G.; Merryman, Chuck; Glass, John I.
2014-01-01
Toward achieving rapid and large scale genome modification directly in a target organism, we have developed a new genome engineering strategy that uses a combination of bioinformatics aided design, large synthetic DNA and site-specific recombinases. Using Cre recombinase we swapped a target 126-kb segment of the Escherichia coli genome with a 72-kb synthetic DNA cassette, thereby effectively eliminating over 54 kb of genomic DNA from three non-contiguous regions in a single recombination event. We observed complete replacement of the native sequence with the modified synthetic sequence through the action of the Cre recombinase and no competition from homologous recombination. Because of the versatility and high-efficiency of the Cre-lox system, this method can be used in any organism where this system is functional as well as adapted to use with other highly precise genome engineering systems. Compared to present-day iterative approaches in genome engineering, we anticipate this method will greatly speed up the creation of reduced, modularized and optimized genomes through the integration of deletion analyses data, transcriptomics, synthetic biology and site-specific recombination. PMID:24914053
Wang, W J
2016-07-06
There is a large population at high risk for diabetes in China, and there has been a dramatic increase in the incidence of diabetes in the country over the past 30 years. Interventions targeting the individual risk factors of diabetes, such as an unhealthy diet, lack of physical activity, overweight, and obesity, can effectively prevent the disease. Evaluation of related knowledge, attitudes, and behaviors before and after an intervention using appropriate scales can measure population demands and the effectiveness of interventions. Scientific rigor and practicability are basic requirements of scale development. The theoretical basis and measurement items of a scale should be consistent with the theory of behavior change and should measure the content of interventions in a standardized and detailed manner to achieve good validity, reliability, and acceptability. The scale of knowledge, attitude, and behavior of lifestyle intervention in a diabetes high-risk population is a tool for demand evaluation and effect evaluation of lifestyle interventions that has good validity and reliability. Established by the National Center for Chronic and Noncommunicable Disease Control and Prevention, its use can help to reduce the Chinese population at high risk for diabetes through targeted and scientifically sound lifestyle interventions. Future development of intervention evaluation scales for use in high-risk populations should consider new factors and the characteristics of different populations, to develop new scales, modify or simplify existing ones, and extend the measurement dimensions to barriers and supporting environments for behavior change.
NASA Astrophysics Data System (ADS)
van Heijst, Tristan C. F.; Philippens, Mariëlle E. P.; Charaghvandi, Ramona K.; den Hartogh, Mariska D.; Lagendijk, Jan J. W.; Desirée van den Bongard, H. J. G.; van Asselen, Bram
2016-02-01
In early-stage breast-cancer patients, accelerated partial-breast irradiation (APBI) techniques and hypofractionation are increasingly implemented after breast-conserving surgery (BCS). For a safe and effective radiation therapy (RT), the influence of intra-fraction motion during dose delivery becomes more important as associated fraction durations increase and targets become smaller. Current image-guidance techniques are insufficient to characterize local target movement at high temporal and spatial resolution over extended durations. Magnetic resonance imaging (MRI) can provide high soft-tissue contrast, allow fast imaging, and acquire images during longer periods. The goal of this study was to quantify intra-fraction motion using MRI scans from 21 breast-cancer patients, before and after BCS, in supine RT position, on two time scales. High-temporal 2-dimensional (2D) MRI scans (cine-MRI), acquired every 0.3 s during 2 min, and three 3D MRI scans, acquired over 20 min, were performed. The tumor (bed) and whole breast were delineated on 3D scans and delineations were transferred to the cine-MRI series. Consecutive scans were rigidly registered and delineations were transformed accordingly. Motion on the sub-second time scale (derived from cine-MRI) was generally regular and limited to a median of 2 mm. Infrequently, large deviations were observed, induced by deep inspiration, but these were temporary. Movement on the multi-minute scale (derived from 3D MRI) varied more, although medians were restricted to 2.2 mm or lower. Large whole-body displacements (up to 14 mm over 19 min) were sparsely observed. The impact of motion on standard RT techniques is likely small. However, in novel hypofractionated APBI techniques, whole-body shifts may affect adequate RT delivery, given the increasing fraction durations and smaller targets. Motion management may thus be required. For this, on-line MRI guidance could be provided by a hybrid MRI/RT modality, such as the University Medical Center Utrecht MRI linear accelerator.
A semi-automated image analysis procedure for in situ plankton imaging systems.
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. Compared to images from laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike images acquired under laboratory controlled conditions or in clear waters, where the target objects are often the majority class and the classification can be treated as a multi-class classification problem, here we customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%) and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
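A condensed sketch of the segmentation and feature steps described above, assuming OpenCV and scikit-image; all parameters (block size, patch size, area filter) are illustrative, not the values tuned for ZOOVIS:

```python
# MSER for large gelatinous targets, adaptive thresholding for small ones,
# then HOG features feeding a linear SVM for the hierarchical classification.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def segment(gray):
    """Candidate bounding boxes (x, y, w, h) from both segmentation paths."""
    mser = cv2.MSER_create()                      # large, stable blobs
    _, boxes = mser.detectRegions(gray)
    binary = cv2.adaptiveThreshold(               # small organisms (copepods)
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 51, 5)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    small = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] > 30]  # drop pixel noise
    return [tuple(b) for b in boxes] + small

def hog_feature(gray, box, size=(64, 64)):
    x, y, w, h = box
    patch = cv2.resize(gray[y:y + h, x:x + w], size)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# First-level group classifier; group-specific classifiers follow the same form.
# clf = LinearSVC().fit(np.vstack(train_features), train_labels)
```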
Smith, Catherine M; Downs, Sara H; Mitchell, Andy; Hayward, Andrew C; Fry, Hannah; Le Comber, Steven C
2015-01-01
Bovine tuberculosis is a disease of historical importance to human health in the UK that remains a major animal health and economic issue. Control of the disease in cattle is complicated by the presence of a reservoir species, the Eurasian badger. In spite of uncertainty in the degree to which cattle disease results from transmission from badgers, and opposition from environmental groups, culling of badgers has been licenced in two large areas in England. Methods to limit culls to smaller areas that target badgers infected with TB whilst minimising the number of uninfected badgers culled are therefore of considerable interest. Here, we use historical data from a large-scale field trial of badger culling to assess two alternative hypothetical methods of targeting TB-infected badgers based on the distribution of cattle TB incidents: (i) a simple circular 'ring cull'; and (ii) geographic profiling, a novel technique for spatial targeting of infectious disease control that predicts the locations of sources of infection based on the distribution of linked cases. Our results showed that both methods required coverage of very large areas to ensure a substantial proportion of infected badgers were removed, and would result in many uninfected badgers being culled. Geographic profiling, which accounts for clustering of infections in badger and cattle populations, produced a small but non-significant increase in the proportion of setts with TB-infected compared to uninfected badgers included in a cull. It also provided no overall improvement at targeting setts with infected badgers compared to the ring cull. Cattle TB incidents in this study were therefore insufficiently clustered around TB-infected badger setts to design an efficient spatially targeted cull; and this analysis provided no evidence to support a move towards spatially targeted badger culling policies for bovine TB control.
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for the structure alignment-based docking applications. The results showed that structural areas corresponding to the cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher-accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for the large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for the docking approaches, including high-throughput applications to modeled structures.
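Interface extraction at a given atom-distance cutoff can be sketched with a KD-tree; the 12 Å default mirrors the optimum reported above, while the input format (plain coordinate arrays plus per-atom residue ids) is an assumption for illustration:

```python
# Residues of two chains with any cross-chain atom pair within a cutoff.
import numpy as np
from scipy.spatial import cKDTree

def interface_residues(xyz_a, res_a, xyz_b, res_b, cutoff=12.0):
    """
    xyz_a, xyz_b : (N, 3) atom coordinates of chains A and B
    res_a, res_b : length-N residue ids aligned with the coordinate rows
    Returns the sorted interface residue ids of each chain.
    """
    pairs = cKDTree(xyz_a).query_ball_tree(cKDTree(xyz_b), r=cutoff)
    ia, ib = set(), set()
    for i, js in enumerate(pairs):
        if js:                      # atom i of A is near at least one B atom
            ia.add(res_a[i])
            ib.update(res_b[j] for j in js)
    return sorted(ia), sorted(ib)
```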
Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min
2015-06-01
The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies.
Winkel, Lenny H. E.; Trang, Pham Thi Kim; Lan, Vi Mai; Stengel, Caroline; Amini, Manouchehr; Ha, Nguyen Thi; Viet, Pham Hung; Berg, Michael
2011-01-01
Arsenic contamination of shallow groundwater is among the biggest health threats in the developing world. Targeting uncontaminated deep aquifers is a popular mitigation option although its long-term impact remains unknown. Here we present the alarming results of a large-scale groundwater survey covering the entire Red River Delta and a unique probability model based on three-dimensional Quaternary geology. Our unprecedented dataset reveals that ∼7 million delta inhabitants use groundwater contaminated with toxic elements, including manganese, selenium, and barium. Depth-resolved probabilities and arsenic concentrations indicate drawdown of arsenic-enriched waters from Holocene aquifers to naturally uncontaminated Pleistocene aquifers as a result of > 100 years of groundwater abstraction. Vertical arsenic migration induced by large-scale pumping from deep aquifers has been discussed to occur elsewhere, but has never been shown to occur at the scale seen here. The present situation in the Red River Delta is a warning for other As-affected regions where groundwater is extensively pumped from uncontaminated aquifers underlying high arsenic aquifers or zones. PMID:21245347
Budde, Kristin S; Barron, Daniel S; Fox, Peter T
2014-12-01
Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state).
Carbonell, Alberto; Fahlgren, Noah; Mitchell, Skyler; ...
2015-05-20
Artificial microRNAs (amiRNAs) are used for selective gene silencing in plants. However, current methods to produce amiRNA constructs for silencing transcripts in monocot species are not suitable for simple, cost-effective and large-scale synthesis. Here, a series of expression vectors based on the Oryza sativa MIR390 (OsMIR390) precursor was developed for high-throughput cloning and high expression of amiRNAs in monocots. Four different amiRNA sequences designed to specifically target endogenous genes and expressed from OsMIR390-based vectors were validated in transgenic Brachypodium distachyon plants. Surprisingly, amiRNAs accumulated to higher levels and were processed more accurately when expressed from chimeric OsMIR390-based precursors that include distal stem-loop sequences from Arabidopsis thaliana MIR390a (AtMIR390a). In all cases, transgenic plants displayed the predicted phenotypes induced by target gene repression, and accumulated high levels of amiRNAs and low levels of the corresponding target transcripts. Genome-wide transcriptome profiling combined with 5'-RLM-RACE analysis in transgenic plants confirmed that the amiRNAs were highly specific. Significance statement: a series of amiRNA vectors based on the OsMIR390 precursor was developed for simple, cost-effective and large-scale synthesis of amiRNA constructs to silence genes in monocots. Unexpectedly, chimeric OsMIR390-based precursors including AtMIR390a distal stem-loop sequences produced high levels of effective and specific amiRNAs in transgenic Brachypodium distachyon plants.
Piot, Bram; Navin, Deepa; Krishnan, Nattu; Bhardwaj, Ashish; Sharma, Vivek; Marjara, Pritpal
2010-01-01
Objectives This study reports on the results of a large-scale targeted condom social marketing campaign in and around areas where female sex workers are present. The paper also describes the method that was used for the routine monitoring of condom availability in these sites. Methods The lot quality assurance sampling (LQAS) method was used for the assessment of the geographical coverage and quality of coverage of condoms in target areas in four states and along selected national highways in India, as part of Avahan, the India AIDS initiative. Results A significant general increase in condom availability was observed in the intervention area between 2005 and 2008. High coverage rates were gradually achieved through an extensive network of pharmacies and particularly of non-traditional outlets, whereas traditional outlets were instrumental in providing large volumes of condoms. Conclusion LQAS is seen as a valuable tool for the routine monitoring of the geographical coverage and of the quality of delivery systems of condoms and of health products and services in general. With a relatively small sample size, easy data collection procedures and simple analytical methods, it was possible to inform decision-makers regularly on progress towards coverage targets. PMID:20167732
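The LQAS decision rule mentioned above reduces to a binomial calculation: sample n outlets in a lot (area) and accept the lot if at least d stock the product. A sketch of the rule's operating characteristics, with n = 19 and d = 15 as illustrative values rather than the Avahan protocol's:

```python
# LQAS operating characteristics: probability a lot with true coverage p
# is classified as acceptable under an (n, d) sampling plan.
from scipy.stats import binom

def accepts(n_positive, d=15):
    """Lot passes if the count of outlets stocking condoms reaches d."""
    return n_positive >= d

def prob_accept(p_coverage, n=19, d=15):
    """P(X >= d) with X ~ Binomial(n, p_coverage)."""
    return binom.sf(d - 1, n, p_coverage)

for p in (0.5, 0.7, 0.8, 0.9):
    print(f"true coverage {p:.0%}: P(accept) = {prob_accept(p):.2f}")
```

Tabulating these acceptance probabilities is how the small fixed sample size can still discriminate high-coverage from low-coverage areas.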
Assessing the harms of cannabis cultivation in Belgium.
Paoli, Letizia; Decorte, Tom; Kersten, Loes
2015-03-01
Since the 1990s, a shift from the importation of foreign cannabis to domestic cultivation has taken place in Belgium, as it has in many other countries. This shift has prompted Belgian policy-making bodies to prioritize the repression of cannabis cultivation. Against this background, the article aims to systematically map and assess for the first time ever the harms associated with cannabis cultivation, covering the whole spectrum of growers. This study is based on a web survey primarily targeting small-scale growers (N=1293) and on three interconnected sets of qualitative data on large-scale growers and traffickers (34 closed criminal proceedings, interviews with 32 criminal justice experts, and with 17 large-scale cannabis growers and three traffickers). The study relied on Greenfield and Paoli's (2013) harm assessment framework to identify the harms associated with cannabis cultivation and to assess the incidence, severity and causes of such harms. Cannabis cultivation has become endemic in Belgium. Despite that, it generates, for Belgium, limited harms of medium-low or medium priority. Large-scale growers tend to produce more harms than the small-scale ones. Virtually all the harms associated with cannabis cultivation are the result of the current criminalizing policies. Given the spread of cannabis cultivation and Belgium's position in Europe, reducing the supply of cannabis does not appear to be a realistic policy objective. Given the limited harms generated, there is scarce scientific justification to prioritize cannabis cultivation in Belgian law enforcement strategies. As most harms are generated by large-scale growers, it is this category of cultivator, if any, which should be the focus of law enforcement repression. Given the policy origin of most harms, policy-makers should seek to develop policies likely to reduce such harms. At the same time, further research is needed to comparatively assess the harms associated with cannabis cultivation (and trafficking) with those arising from use.
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
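A single cell of such a parameter grid can be reproduced with open tools; the sketch below uses RDKit as a stand-in for the Canvas package (Canvas itself is proprietary) to show how a small bit space folds distinct atom environments together and inflates Tanimoto similarity:

```python
# Radial (Morgan/ECFP-like) fingerprints at two bit-space sizes, compared
# with Tanimoto similarity; the molecules are arbitrary examples.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
probe = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")       # paracetamol

for nbits in (1024, 16384):
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=nbits)
           for m in (query, probe)]
    sim = DataStructs.TanimotoSimilarity(fps[0], fps[1])
    # smaller bit spaces cause more collisions, which tends to raise
    # apparent similarity and degrade screening enrichment
    print(f"{nbits:>6} bits: Tanimoto = {sim:.3f}")
```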
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muchero, Wellington; Labbe, Jessy L; Priya, Ranjan
2014-01-01
To date, Populus ranks among a few plant species with a complete genome sequence and other highly developed genomic resources. With the first genome sequence among all tree species, Populus has been adopted as a suitable model organism for genomic studies in trees. However, far from being just a model species, Populus is a key renewable economic resource that plays a significant role in providing raw materials for the biofuel and pulp and paper industries. Therefore, aside from leading frontiers of basic tree molecular biology and ecological research, Populus leads frontiers in addressing global economic challenges related to fuel and fiber production. The latter fact suggests that research aimed at improving quality and quantity of Populus as a raw material will likely drive the pursuit of more targeted and deeper research in order to unlock the economic potential tied in molecular biology processes that drive this tree species. Advances in genome sequence-driven technologies, such as resequencing individual genotypes, which in turn facilitates large scale SNP discovery and identification of large scale polymorphisms, are key determinants of future success in these initiatives. In this treatise we discuss implications of genome sequence-enabled technologies on Populus genomic and genetic studies of complex and specialized traits.
Bonilha, Leonardo; Tabesh, Ali; Dabbs, Kevin; Hsu, David A.; Stafstrom, Carl E.; Hermann, Bruce P.; Lin, Jack J.
2014-01-01
Recent neuroimaging and behavioral studies have revealed that children with new onset epilepsy already exhibit brain structural abnormalities and cognitive impairment. How the organization of large-scale brain structural networks is altered near the time of seizure onset and whether network changes are related to cognitive performances remain unclear. Recent studies also suggest that regional brain volume covariance reflects synchronized brain developmental changes. Here, we test the hypothesis that epilepsy during early-life is associated with abnormalities in brain network organization and cognition. We used graph theory to study structural brain networks based on regional volume covariance in 39 children with new-onset seizures and 28 healthy controls. Children with new-onset epilepsy showed a suboptimal topological structural organization with enhanced network segregation and reduced global integration compared to controls. At the regional level, structural reorganization was evident with redistributed nodes from the posterior to more anterior head regions. The epileptic brain network was more vulnerable to targeted but not random attacks. Finally, a subgroup of children with epilepsy, namely those with lower IQ and poorer executive function, had a reduced balance between network segregation and integration. Taken together, the findings suggest that the neurodevelopmental impact of new onset childhood epilepsies alters large-scale brain networks, resulting in greater vulnerability to network failure and cognitive impairment. PMID:24453089
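The covariance-network construction and attack analysis described above can be sketched with NumPy and NetworkX; the edge-density threshold and attack procedure are generic illustrations, not the study's exact pipeline:

```python
# Structural covariance network from regional volumes, plus a hub-targeted
# attack curve measured by global efficiency.
import numpy as np
import networkx as nx

def covariance_network(volumes, density=0.15):
    """volumes: subjects x regions matrix; keep the strongest correlations
    so that the resulting graph has roughly the requested edge density."""
    corr = np.corrcoef(volumes.T)
    np.fill_diagonal(corr, 0.0)
    n_edges = int(density * corr.shape[0] * (corr.shape[0] - 1) / 2)
    thresh = np.sort(np.abs(corr), axis=None)[-2 * n_edges]  # symmetric matrix
    return nx.from_numpy_array((np.abs(corr) >= thresh).astype(int))

def attack_curve(G, targeted=True, seed=0):
    """Global efficiency as nodes are removed by degree (targeted attack)
    or uniformly at random (random failure)."""
    rng = np.random.default_rng(seed)
    H, eff = G.copy(), []
    while H.number_of_nodes() > 1:
        eff.append(nx.global_efficiency(H))
        if targeted:
            node = max(H.degree, key=lambda kv: kv[1])[0]  # current hub
        else:
            node = rng.choice(list(H.nodes))
        H.remove_node(node)
    return eff
```

Comparing patient and control attack curves is one way to quantify the greater vulnerability to network failure reported above.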
Internationalization Measures in Large Scale Research Projects
NASA Astrophysics Data System (ADS)
Soeding, Emanuel; Smith, Nancy
2017-04-01
Large-scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment, including institutional partnerships, research marketing, a welcome culture, support for science mobility, and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful if interfaced effectively. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers, and international partners to work together, exchange information, and improve processes, in order to recruit, support, and retain the brightest heads for a project.
NASA Astrophysics Data System (ADS)
Guo, Jie; Zhu, Chang'an
2016-01-01
The development of optics and computer technologies enables the application of vision-based techniques that use digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and introduces no additional mass. In this study, a high-speed camera system is developed to perform the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can complete one displacement extraction within 1 ms without having to install any pre-designed target panel on the structures in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.
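A minimal sketch of markerless displacement extraction with pyramidal Lucas-Kanade tracking, using OpenCV's standard tracker in place of the authors' modified inverse compositional algorithm; the pixel-to-millimetre scale factor is assumed to come from a separate calibration step:

```python
# Track natural feature points across video frames and average their
# image-plane displacement to obtain a structural displacement signal.
import cv2
import numpy as np

def track_displacement(video_path, scale_mm_per_px=1.0):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # strong corners on the structure serve as natural "targets"
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=10,
                                  qualityLevel=0.01, minDistance=10)
    origin, traj = pts.copy(), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        disp = (pts - origin)[status.ravel() == 1]   # keep tracked points
        traj.append(disp.mean(axis=0).ravel() * scale_mm_per_px)
        prev = gray
    return np.array(traj)   # per-frame (dx, dy) displacement in mm
```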
NASA Technical Reports Server (NTRS)
Sullivan, Steven J.
2014-01-01
"Rocket University" is an exciting new initiative at Kennedy Space Center led by NASA's Engineering and Technology Directorate. This hands-on experience has been established to develop, refine & maintain targeted flight engineering skills to enable the Agency and KSC strategic goals. Through "RocketU", KSC is developing a nimble, rapid flight engineering life cycle systems knowledge base. Ongoing activities in RocketU develop and test new technologies and potential customer systems through small scale vehicles, build and maintain flight experience through balloon and small-scale rocket missions, and enable a revolving fresh perspective of engineers with hands on expertise back into the large scale NASA programs, providing a more experienced multi-disciplined set of systems engineers. This overview will define the Program, highlight aspects of the training curriculum, and identify recent accomplishments and activities.
Performance of resonant radar target identification algorithms using intra-class weighting functions
NASA Astrophysics Data System (ADS)
Mustafa, A.
The use of calibrated resonant-region radar cross section (RCS) measurements of targets for the classification of large aircraft is discussed. Errors in the RCS estimate of full-scale aircraft flying over an ocean, introduced by ionospheric variability and sea conditions, were studied. The Weighted Target Representative (WTR) classification algorithm was developed, implemented, tested, and compared with the nearest neighbor (NN) algorithm. The WTR algorithm has a low sensitivity to uncertainty in the aspect angle of the unknown target returns. In addition, this algorithm is based on a new catalog of representative data, which reduces the storage requirements and increases the computational efficiency of the classification system compared to the NN algorithm. Experiments were designed to study and evaluate the characteristics of the WTR and NN algorithms, investigate the classifiability of targets, and study the behavior of the number of misclassifications as a function of the target backscatter features. The classification results and statistics are shown in the form of performance curves, performance tables, and confusion tables.
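A minimal sketch of the nearest-neighbor baseline referred to above: each unknown RCS feature vector is assigned the class of its closest catalog entry. The catalog values and class names below are illustrative placeholders; the WTR approach would replace the raw catalog with weighted representative signatures to cut storage and aspect-angle sensitivity.

```python
import numpy as np

def nn_classify(unknown, catalog, labels):
    """Assign the label of the nearest catalog signature (Euclidean distance)."""
    distances = np.linalg.norm(catalog - unknown, axis=1)
    return labels[int(np.argmin(distances))]

# Hypothetical catalog: each row is a resonant-region RCS feature vector.
catalog = np.array([[1.0, 0.2, 0.5],
                    [0.9, 0.8, 0.1],
                    [0.2, 0.4, 0.9]])
labels = ["aircraft A", "aircraft B", "aircraft C"]

print(nn_classify(np.array([0.95, 0.75, 0.15]), catalog, labels))  # aircraft B
```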
A saliency-based approach to detection of infrared target
NASA Astrophysics Data System (ADS)
Chen, Yanfei; Sang, Nong; Dan, Zhiping
2013-10-01
Automatic target detection in infrared images is an active research field in national defense technology. We propose a new saliency-based infrared target detection model, based on the fact that human attention is directed towards the relevant target in order to interpret the most promising information. For a given image, the convolution of the image's log amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector in the frequency domain. At the same time, extracted orientation and shape features are combined into a saliency map in the spatial domain. Our proposed model locates salient targets based on a final saliency map, generated by integrating the saliency maps from the frequency and spatial domains. Finally, the size of each salient target is obtained by maximizing the entropy of the final saliency map. Experimental results show that the proposed model can highlight both small and large salient regions in infrared images, as well as suppress repeated distractors in cluttered images. In addition, its detection efficiency is significantly improved.
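The frequency-domain part of the model can be sketched directly from the description above: smooth the log amplitude spectrum with a low-pass Gaussian, recombine it with the original phase, and invert. This is a minimal sketch under that reading; the kernel scales and the final normalization are illustrative assumptions, and the spatial-domain feature maps are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_saliency(image, spectrum_sigma=3.0, output_sigma=2.5):
    """Saliency from a low-pass-filtered log amplitude spectrum."""
    f = np.fft.fft2(image.astype(float))
    log_amp = np.log1p(np.abs(f))            # log amplitude spectrum
    phase = np.angle(f)
    smoothed = gaussian_filter(log_amp, sigma=spectrum_sigma)  # low-pass kernel
    # Recombine the smoothed amplitude with the original phase and invert.
    back = np.fft.ifft2(np.expm1(smoothed) * np.exp(1j * phase))
    sal = gaussian_filter(np.abs(back) ** 2, sigma=output_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```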
Measurement of Two-Plasmon-Decay Dependence on Plasma Density Scale Length
NASA Astrophysics Data System (ADS)
Haberberger, D.
2013-10-01
An accurate understanding of the plasma scale-length (L_q) conditions near quarter-critical density is important in quantifying the hot electrons generated by the two-plasmon-decay (TPD) instability in long-scale-length plasmas. A novel target platform was developed to vary the density scale length, and an innovative diagnostic was implemented to measure the density profiles above 10²¹ cm⁻³, where TPD is expected to have the largest growth. A series of experiments was performed using the four UV (351-nm) beams on OMEGA EP that varied L_q by changing the radius of curvature of the target while maintaining a constant I_q/T_q. The fraction of laser energy converted to hot electrons (f_hot) was observed to increase rapidly from 0.005% to 1% as the plasma scale length was increased from 130 μm to 300 μm, corresponding to target diameters of 0.4 mm to 8 mm. A new diagnostic was developed based on refractometry using angular spectral filters to overcome the large phase accumulation in standard interferometric techniques. The angular filter refractometer measures the refraction angles of a 10-ps, 263-nm probe laser after it propagates through the plasma. An angular spectral filter is placed in the Fourier plane of the probe beam, where the refraction angles of the rays are mapped to space. The edges of the filter are present in the image plane and represent contours of constant refraction angle. These contours are used to infer the phase of the probe beam, which is then used to calculate the plasma density profile. In long-scale-length plasmas, the diagnostic currently measures plasma densities from ~10¹⁹ cm⁻³ to ~2 × 10²¹ cm⁻³. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944. In collaboration with D. H. Edgell, S. X. Hu, S. Ivancic, R. Boni, C. Dorrer, and D. H. Froula (Laboratory for Laser Energetics, U. of Rochester).
Calculation of the Frequency Distribution of the Energy Deposition in DNA Volumes by Heavy Ions
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2012-01-01
Radiation quality effects are largely determined by energy deposition in small volumes of characteristic sizes less than 10 nm, representative of short segments of DNA, the DNA nucleosome, or molecules initiating oxidative stress in the nucleus, mitochondria, or extracellular matrix. On this scale, qualitatively distinct types of molecular damage are possible for high linear energy transfer (LET) radiation such as heavy ions compared to low-LET radiation. Unique types of DNA lesions or oxidative damage are the likely outcome of the energy deposition. The frequency distribution of energy imparted to 1-20 nm targets per unit dose or particle fluence is a useful descriptor and can be evaluated as a function of impact parameter from an ion's track. In this work, 1-Gy irradiation of a 5-μm cubic volume by 1) 450 ¹H⁺ ions, 300 MeV; 2) 10 ¹²C⁶⁺ ions, 290 MeV/amu; and 3) ⁵⁶Fe²⁶⁺ ions, 1000 MeV/amu was simulated with the Monte Carlo code RITRACKS. Cylindrical targets are generated in the irradiated volume with random orientation. The frequency distribution curves of the energy deposited in the targets are obtained. For small targets (i.e., <25 nm in size), the probability of an ion hitting a target is very small; therefore a large number of tracks and targets, as well as a large number of histories, are necessary to obtain statistically significant results. This simulation is very time-consuming and is difficult to perform with the original version of RITRACKS. Consequently, RITRACKS was adapted to use multiple CPUs on a workstation or on a computer cluster. To validate the simulation results, similar calculations were performed using targets with fixed position and orientation, for which experimental data are available [5]. Since the probability of single- and double-strand breaks in DNA as a function of energy deposited is well known, the results can be used to estimate the yield of DSBs, and can be extended to include other targeted or non-targeted effects.
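Because the histories are statistically independent, the parallelization described above is naturally expressed as farming histories out to worker processes and merging their tallies. The sketch below shows that pattern with Python's multiprocessing; the "history" function is a placeholder that draws toy hit counts and energies, not real track-structure physics.

```python
import numpy as np
from multiprocessing import Pool

def run_history(seed):
    """One independent Monte Carlo history: energies deposited in hit targets."""
    rng = np.random.default_rng(seed)
    n_hits = rng.poisson(2)                  # placeholder: targets hit this history
    return rng.exponential(50.0, n_hits)     # placeholder: deposited energies (eV)

if __name__ == "__main__":
    with Pool() as pool:                     # one worker per available CPU
        per_history = pool.map(run_history, range(10_000))
    deposits = np.concatenate(per_history)
    # Frequency distribution of energy imparted per target.
    counts, edges = np.histogram(deposits, bins=100, range=(0.0, 500.0))
    print(counts.sum(), edges[:3])
```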
Desoximetasone 0.25% Spray for the Relief of Scaling in Adults With Plaque Psoriasis.
Keegan, Brian Robert
2015-08-01
Data from two Phase 3, double-blind, randomized, vehicle-controlled parallel studies were evaluated to determine the efficacy and safety of twice daily desoximetasone 0.25% spray for the treatment of plaque psoriasis. In addition to global disease assessments, scaling assessments were performed at baseline and at weeks 1, 2, and 4. To qualify for inclusion, subjects were required to have a clinical diagnosis of stable plaque psoriasis involving ≥10% of the body surface area (BSA), a combined target lesion severity score (TLSS) of ≥7 for the target lesion, a plaque elevation score of ≥3 (moderate) for the target lesion, and a Physician Global Assessment (PGA) score of 3 (moderate) or 4 (severe) at baseline for the overall disease severity. At the baseline visit, the mean proportions of BSA affected by psoriasis were 17% (range 10% to 86%) in the desoximetasone 0.25% spray group and 16% (range 10% to 70%) in the vehicle spray group. Approximately 90% of the patients in each group had moderate to very severe scaling at baseline. Desoximetasone 0.25% spray was effective with significant improvements in overall severity and was well tolerated, with dryness, irritation, and pruritus at the application site being the only reported adverse events occurring in >1% of patients, each of which occurred in less than 3% of patients. As a large proportion of psoriasis patients (94%) have reported being bothered by scaling, the relief of scaling was examined in these studies. At week 1, 69.7% of patients on desoximetasone 0.25% spray had scaling that was considered clear / almost clear / mild compared with 48.3% for those on vehicle spray (P = .0027). By week 4, the proportion of patients with clear / almost clear / mild scaling had risen to 83.9% in the desoximetasone 0.25% spray group (P < .0001). After four weeks of treatment, 66.4% of patients in the topical corticosteroid group had an overall improvement of at least two grades of disease severity. This demonstrates that desoximetasone 0.25% spray provided fast and effective relief of scaling in patients with plaque psoriasis affecting 10% to 86% of their BSA.
NASA Astrophysics Data System (ADS)
Law, Jane; Quick, Matthew
2013-01-01
This paper adopts a Bayesian spatial modeling approach to investigate the distribution of young offender residences in York Region, Southern Ontario, Canada, at the census dissemination area level. Few geographic studies have analyzed offender (as opposed to offense) data at a large map scale (i.e., using a relatively small areal unit of analysis) to minimize aggregation effects. Providing context is social disorganization theory, which hypothesizes that areas with economic deprivation, high population turnover, and high ethnic heterogeneity exhibit social disorganization and are expected to have more young offenders. Non-spatial and spatial Poisson models indicate that the spatial methods are superior to non-spatial models with respect to model fit, and that the index of ethnic heterogeneity, residential mobility (one-year moving rate), and percentage of residents receiving government transfer payments are, in that order, the most significant explanatory variables related to young offender location. These findings provide overwhelming support for social disorganization theory as it applies to offender location in York Region, Ontario. Decomposing the estimated risk map to target areas where the prevalence of young offenders can or cannot be explained by social disorganization is helpful for dealing with juvenile offenders in the region. The results prompt discussion of geographically targeted police services and of young offender placement with respect to risk of recidivism. We discuss possible reasons for differences and similarities between previous findings (from studies that analyzed offense data and/or were conducted at a smaller map scale) and ours, the limitations of our study, and practical outcomes of this research from a law enforcement perspective.
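The non-spatial baseline in such a comparison is a plain Poisson regression of offender counts on the area-level covariates; the spatial variants add structured (e.g., conditional autoregressive) random effects on top of it. Below is a minimal sketch of that baseline only, with synthetic stand-in data and illustrative variable names.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                    # census dissemination areas (synthetic)
X = np.column_stack([
    rng.normal(size=n),                    # ethnic heterogeneity index
    rng.normal(size=n),                    # one-year moving rate
    rng.normal(size=n),                    # % receiving transfer payments
])
# Synthetic counts generated from an assumed log-linear model.
y = rng.poisson(np.exp(0.1 + X @ np.array([0.3, 0.2, 0.4])))

poisson_fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
```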
Henriksson, Greger; Hagman, Olle; Andréasson, Håkan
2011-01-01
Policy measures that reduce or replace road traffic can improve environmental conditions in most large cities. In Stockholm, a congestion charge was introduced during a test period in 2006. This full-scale trial met its targets, reducing traffic crossing the inner-city segment during rush hours by 20%. Emissions of carbon dioxide and particles were also substantially reduced. This study, based on in-depth interviews with 40 inhabitants, analyses how and why new travel habits emerged. The results show that particular, sometimes unexpected, features of everyday life (habits, resources, opportunities, values, etc.) were crucial for the adjustment of travel behaviour in response to the policy instrument. One example was that those accustomed to mixing different modes of transport on a daily basis adapted their travel in the targeted way more easily. On a more general level, the results revealed that the policy measure could actually tip the scales for an individual towards trying out a new behaviour. PMID:21909301
Mapping nonlinear receptive field structure in primate retina at single cone resolution
Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam
2015-01-01
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with the known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
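A hierarchical subunit model of this kind is typically a two-stage cascade: cone signals are pooled linearly within each subunit, passed through a rectifying nonlinearity, and then summed by the ganglion cell. The sketch below is a generic toy version of that cascade, not the paper's fitted model; the pooling weights, the rectifier, and the output nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cones, n_subunits, n_frames = 12, 4, 1000
stim = rng.normal(size=(n_frames, n_cones))       # per-cone stimulus intensities

# Each subunit (bipolar-like unit) pools a disjoint trio of cones.
pooling = np.zeros((n_subunits, n_cones))
for s in range(n_subunits):
    pooling[s, 3 * s: 3 * s + 3] = rng.uniform(0.5, 1.0, 3)

subunit_drive = stim @ pooling.T                  # linear stage within subunits
rectified = np.maximum(subunit_drive, 0.0)        # subunit nonlinearity (rectifier)
rate = np.exp(0.5 * rectified.sum(axis=1) - 2.0)  # ganglion-cell output nonlinearity
spikes = rng.poisson(rate)                        # Poisson spiking response
```

A purely linear model would collapse the two stages into a single weighted sum over cones, which is exactly what fails for stimuli targeted to cones within versus across subunits.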
Pushing CHARA to its Limit: A Pathway Toward 80X80 Pixel Images of Stellar Surfaces
NASA Astrophysics Data System (ADS)
Norris, Ryan
2018-04-01
Imagine a future with 80x80 pixel images of stellar surfaces. With a maximum baseline of 330 m, the CHARA Array is already capable of achieving 0.5 mas resolution, sufficient for imaging the red supergiant Betelgeuse (d = 42.3 mas) at such a scale. However, several issues have hampered attempts to image the largest stars at CHARA, including a lack of baselines shorter than 34 m and instrument sensitivities unable to measure the faintest fringes. Here we discuss what is needed to achieve imaging of large stars at CHARA. We will present suggestions for future telescope placement, describing the advantages of a short baseline, while also considering the needs of other imaging targets that might benefit from additional baselines. We will also present developments in image reconstruction methods that can improve the resolution of images today, albeit of smaller targets and at a lesser scale. Of course, there will be example images, created using simulated OIFITS data and state-of-the-art reconstruction techniques!
Measuring Energy Scaling of Laser Driven Magnetic Fields
NASA Astrophysics Data System (ADS)
Williams, Jackson; Goyon, Clement; Mariscal, Derek; Pollock, Brad; Patankar, Siddharth; Moody, John
2016-10-01
Laser-driven magnetic fields are of interest for particle confinement, fast ignition, and ICF platforms as an alternative to pulsed-power systems, achieving many times higher fields. A comprehensive model describing the mechanism responsible for creating and maintaining magnetic fields from laser-driven coils has not yet been established. Understanding the scaling of key experimental parameters such as spatial and temporal uniformity and duration is necessary to implement coil targets in practical applications, yet these measurements prove difficult because of the highly transient nature of the fields. We report on direct voltage measurements of laser-driven coil targets in which the laser energy spans more than four orders of magnitude. Results suggest that at low energies, laser-driven coils can be modeled as an electric circuit; however, at higher energies plasma effects dominate, and a simple circuit treatment is insufficient to describe all observed phenomena. The favorable scaling with laser power and pulse duration, observed in the present study and in others at kilojoule energies, has positive implications for sustained, large magnetic fields for applications on the NIF. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Kimura, Takeshi; Morimoto, Takeshi; Nakagawa, Yoshihisa; Kawai, Kazuya; Miyazaki, Shunichi; Muramatsu, Toshiya; Shiode, Nobuo; Namura, Masanobu; Sone, Takahito; Oshima, Shigeru; Nishikawa, Hideo; Hiasa, Yoshikazu; Hayashi, Yasuhiko; Nobuyoshi, Masakiyo; Mitsudo, Kazuaki
2012-01-31
There is a scarcity of long-term data from large-scale drug-eluting stent registries with a large enough sample to evaluate low-frequency events such as stent thrombosis (ST). Five-year outcomes were evaluated in 12 812 consecutive patients undergoing sirolimus-eluting stent (SES) implantation in the j-Cypher registry. Cumulative incidence of definite ST was low (30 day, 0.3%; 1 year, 0.6%; and 5 years, 1.6%). However, late and very late ST continued to occur without attenuation up to 5 years after sirolimus-eluting stent implantation (0.26%/y). Cumulative incidence of target lesion revascularization within the first year was low (7.3%). However, late target lesion revascularization beyond 1 year also continued to occur without attenuation up to 5 years (2.2%/y). Independent risk factors of ST were completely different according to the timing of ST onset, suggesting the presence of different pathophysiological mechanisms of ST according to the timing of ST onset: acute coronary syndrome and target of proximal left anterior descending coronary artery for early ST; side-branch stenting, diabetes mellitus, and end-stage renal disease with or without hemodialysis for late ST; and current smoking and total stent length >28 mm for very late ST. Independent risk factors of late target lesion revascularization beyond 1 year were generally similar to those risk factors identified for early target lesion revascularization. Late adverse events such as very late ST and late target lesion revascularization are continuous hazards, lasting at least up to 5 years after implantation of the first-generation drug-eluting stents (sirolimus-eluting stents), which should be the targets for developing improved coronary stents.
LiveBench-1: continuous benchmarking of protein structure prediction servers.
Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L
2001-02-01
We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.
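The "ideally combined consensus" can be read as counting a target correct when at least one server produces a correct assignment. A toy sketch of that bookkeeping, with a random stand-in matrix instead of the actual LiveBench results:

```python
import numpy as np

rng = np.random.default_rng(0)
# correct[s, t] = True if server s produced a correct model for target t.
# Random placeholder data: six servers, 100 targets.
correct = rng.random((6, 100)) < 0.5

per_server = correct.mean(axis=1)        # fraction correct for each server
consensus = correct.any(axis=0).mean()   # correct if ANY server succeeds
print(per_server.round(2), round(consensus, 2))
```

Targets where exactly one server is correct (correct.sum(axis=0) == 1) correspond to the cases the abstract highlights as the benefit of consulting more than one server.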
Synaptic electronics: materials, devices and applications.
Kuzum, Duygu; Yu, Shimeng; Wong, H-S Philip
2013-09-27
In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.
2014-08-01
…chemical warfare nerve agents (CWNA). Enzymes identified in these screens should be capable of catalytically neutralizing the target agent under… soluble form. 4. Large-scale production of selected enzyme candidates, and their kinetic, structural and pharmacological evaluation… employed, with an enzyme protein concentration of 0.5-2 mM in the assay cuvette, the activity measured was indistinguishable from the rate of…
The MiniCLEAN Dark Matter Experiment
NASA Astrophysics Data System (ADS)
Schnee, Richard; DEAP/CLEAN Collaboration
2011-10-01
The MiniCLEAN dark matter experiment exploits a single-phase liquid argon (LAr) detector, instrumented with photomultiplier tubes submerged in the cryogen with nearly 4π coverage of a 500 kg target (150 kg fiducial) mass. The high light yield and large difference in singlet/triplet scintillation time profiles in LAr provide effective defense against radioactive backgrounds through pulse-shape discrimination and event position reconstruction. The detector is also designed for a liquid neon target which, in the event of a positive signal in LAr, will enable an independent verification of backgrounds and provide a unique test of the expected A² dependence of the WIMP interaction rate. The conceptually simple design can be scaled to target masses in excess of 10 tons in a relatively straightforward and economic manner. The experimental technique and current status of MiniCLEAN will be summarized.
A simple large-scale synthesis of mesoporous In2O3 for gas sensing applications
NASA Astrophysics Data System (ADS)
Zhang, Su; Song, Peng; Yan, Huihui; Yang, Zhongxi; Wang, Qi
2016-08-01
In this paper, large-scale mesoporous In2O3 nanostructures were synthesized in high yield by a facile Lewis-acid-catalyzed furfural alcohol resin (FAR) template route. Their morphology and structure were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), differential thermal and thermogravimetric analysis (DSC-TG), and the Brunauer-Emmett-Teller (BET) approach. The as-obtained In2O3 nanostructures possess an excellent mesoporous network structure, which increases the contact area with gases and favors gas adsorption-desorption on the In2O3 surface. The In2O3 particles and pores were both about 15 nm in size and very uniform. In gas-sensing measurements with target gases, the sensor based on the mesoporous In2O3 nanostructures showed a good response, short response-recovery times, and good selectivity and stability towards ethanol. These properties are attributed to the large specific surface area of the mesoporous structure. This synthetic method could serve as a new design concept for functional mesoporous nanomaterials and for their mass production.
Shargie, Estifanos Biru; Ngondi, Jeremiah; Graves, Patricia M.; Getachew, Asefaw; Hwang, Jimee; Gebre, Teshome; Mosher, Aryc W.; Ceccato, Pietro; Endeshaw, Tekola; Jima, Daddi; Tadesse, Zerihun; Tenaw, Eskindir; Reithinger, Richard; Emerson, Paul M.; Richards, Frank O.; Ghebreyesus, Tedros Adhanom
2010-01-01
Following recent large scale-up of malaria control interventions in Ethiopia, this study aimed to compare ownership and use of long-lasting insecticidal nets (LLIN), and the change in malaria prevalence using two population-based household surveys in three regions of the country. Each survey used multistage cluster random sampling with 25 households per cluster. Household net ownership tripled from 19.6% in 2006 to 68.4% in 2007, with mean LLIN per household increasing from 0.3 to 1.2. Net use overall more than doubled from 15.3% to 34.5%, but in households owning LLIN, use declined from 71.7% to 48.3%. Parasitemia declined from 4.1% to 0.4%. Large scale-up of net ownership over a short period of time was possible. However, a large increase in net ownership was not necessarily mirrored directly by increased net use. Better targeting of nets to malaria-risk areas and sustained behavioural change communication are needed to increase and maintain net use. PMID:20936103
Land grabbing: a preliminary quantification of economic impacts on rural livelihoods.
Davis, Kyle F; D'Odorico, Paolo; Rulli, Maria Cristina
2014-01-01
Global demands on agricultural land are increasing due to population growth, dietary changes and the use of biofuels. Their effect on food security is to reduce humans' ability to cope with the uncertainties of global climate change. In light of the 2008 food crisis, to secure reliable future access to sufficient agricultural land, many nations and corporations have begun purchasing large tracts of land in the global South, a phenomenon deemed "land grabbing" by popular media. Because land investors frequently export crops without providing adequate employment, this represents an effective income loss for local communities. We study 28 countries targeted by large-scale land acquisitions [comprising 87% of reported cases and 27 million hectares (ha)] and estimate the effects of such investments on local communities' incomes. We find that this phenomenon can potentially affect the incomes of ~12 million people globally, with implications for food security, poverty levels and urbanization. While it is important to note that our study incorporates a number of assumptions and limitations, it provides a much-needed initial quantification of the economic impacts of large-scale land acquisitions on rural livelihoods.
Status and Prospects for Indirect Dark Matter Searches with the Fermi Large Area Telescope
NASA Astrophysics Data System (ADS)
Charles, Eric; Fermi-LAT Collaboration
2014-01-01
During the first five years of operation of the Fermi Large Area Telescope (LAT) the LAT collaboration has performed numerous searches for signatures of Dark Matter interactions in both gamma-ray and cosmic-ray data. These searches feature many different target types, including dwarf spheroidal galaxies, galaxy clusters, the Milky Way halo and inner Galaxy and unassociated LAT sources. They make use of a variety of techniques, and have been performed in both the spatial and spectral domains, as well as via less conventional strategies such as examining the potential Dark Matter contribution to both large scale and small scale anisotropies. To date no clear gamma-ray or cosmic-ray signal from dark matter annihilation or decay has been observed, and the deepest current limits for annihilation exclude many Dark Matter particle models with the canonical thermal relic cross section and masses up to 30 GeV. In this contribution we will briefly review the status of each of the searches by the LAT collaboration. We will also discuss the limiting factors for the various search strategies and examine the prospects for the future.
The future of management: The NASA paradigm
NASA Technical Reports Server (NTRS)
Harris, Philip R.
1992-01-01
Prototypes of 21st-century management, especially for large-scale enterprises, may well be found within the aerospace industry. The space era inaugurated a number of projects of such scope and magnitude that another type of management had to be created to ensure successful achievement. The challenges will be not just technological and managerial, but also human and cultural in dimension. Futurists, students of management, and those concerned with technological administration would do well to review the literature of emerging space management for its wider implications. NASA offers a paradigm, or demonstrated model, of future trends in the field of management at large. More research is needed on issues of leadership for Earth-based projects in space and for space-based programs with managers there. Large-scale technical enterprises, such as those undertaken in space, require a new form of management, and this needs to be recognized. NASA and other responsible agencies are urged to study excellence in space macromanagement, including the necessary multidisciplinary skills. Two recommended targets are the application of general living systems theory and macromanagement concepts for space stations in the 1990s.
Cross-lingual neighborhood effects in generalized lexical decision and natural reading.
Dirix, Nicolas; Cop, Uschi; Drieghe, Denis; Duyck, Wouter
2017-06-01
The present study assessed intra- and cross-lingual neighborhood effects, using both a generalized lexical decision task and an analysis of a large-scale bilingual eye-tracking corpus (Cop, Dirix, Drieghe, & Duyck, 2016). Using new neighborhood density and frequency measures, the generalized lexical decision task yielded an inhibitory cross-lingual neighborhood density effect on reading times of second language words, replicating van Heuven, Dijkstra, and Grainger (1998). Reaction times for native language words were not influenced by neighborhood density or frequency, but error rates showed cross-lingual neighborhood effects depending on target word frequency. The large-scale eye movement corpus confirmed effects of cross-lingual neighborhood on natural reading, even though participants were reading a novel in a unilingual context. Especially second language reading, and to a lesser extent native language reading, was influenced by lexical candidates from the nontarget language, although these effects in natural reading were largely facilitatory. These results offer strong and direct support for bilingual word recognition models that assume language-independent lexical access.
RAAS inhibitors and cardiovascular protection in large scale trials.
von Lueder, Thomas G; Krum, Henry
2013-04-01
Hypertension, coronary artery disease and heart failure affect over half of the adult population in most Western societies, and are prime causes of CV morbidity and mortality. With the ever-increasing worldwide prevalence of CV disease due to ageing and the "diabetes" pandemic, guideline groups have recognized the importance of achieving cardioprotection in affected individuals as well as in those at risk of future CV events. The renin-angiotensin-aldosterone system (RAAS) is the most important system controlling blood pressure (BP) and cardiovascular and renal function in man. As our understanding of the crucial role of RAAS in the pathogenesis of most, if not all, CV disease has expanded over the past decades, so has the development of drugs targeting its individual components. Angiotensin-converting enzyme inhibitors (ACEi), Ang-II receptor blockers (ARB), and mineralocorticoid receptor antagonists (MRA) have been evaluated in large clinical trials for their potential to mediate cardioprotection, singly or in combination. Direct renin inhibitors are currently under scrutiny, as are novel dual-acting RAAS-blocking agents. Herein, we review the evidence generated from large-scale clinical trials of cardioprotection achieved through RAAS blockade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keshner, M. S.; Arya, R.
2004-10-01
Hewlett Packard has created a design for a ''Solar City'' factory that will process 30 million sq. meters of glass panels per year and produce 2.1-3.6 GW of solar panels per year--100x the volume of a typical thin-film solar panel manufacturer in 2004. We have shown that with a reasonable selection of materials, and conservative assumptions, this ''Solar City'' can produce solar panels and hit the price target of $1.00 per peak watt (6.5x-8.5x lower than prices in 2004) as the total price for a complete and installed rooftop (or ground-mounted) solar energy system. This breakthrough in the price of solar energy comes without the need for any significant new invention. It comes entirely from the manufacturing scale of a large plant and the cost savings inherent in operating at such a large manufacturing scale. We expect that further optimizations from these simple designs will lead to further improvements in cost. The manufacturing process and cost depend on the choice of the active layer that converts sunlight into electricity. The efficiency with which sunlight is converted into electricity can range from 7% to 15%. This parameter has a large effect on the overall price per watt. There are other impacts as well, and we have attempted to capture them without creating undue distractions. Our primary purpose is to demonstrate the impact of large-scale manufacturing. This impact is largely independent of the choice of active layer. It is not our purpose to compare the pros and cons of various types of active layers. Significant improvements in cost per watt can also come from scientific advances in active layers that lead to higher efficiency. But, again, our focus is on manufacturing gains and not on potential advances in the basic technology.
Small-scale fisheries bycatch jeopardizes endangered Pacific loggerhead turtles.
Peckham, S Hoyt; Maldonado Diaz, David; Walli, Andreas; Ruiz, Georgita; Crowder, Larry B; Nichols, Wallace J
2007-10-17
Although bycatch of industrial-scale fisheries can cause declines in migratory megafauna including seabirds, marine mammals, and sea turtles, the impacts of small-scale fisheries have been largely overlooked. Small-scale fisheries occur in coastal waters worldwide, employing over 99% of the world's 51 million fishers. New telemetry data reveal that migratory megafauna frequent coastal habitats well within the range of small-scale fisheries, potentially producing high bycatch. These fisheries occur primarily in developing nations, and their documentation and management are limited or non-existent, precluding evaluation of their impacts on non-target megafauna. The 30 North Pacific loggerhead turtles that we satellite-tracked from 1996-2005 ranged oceanwide, but juveniles spent 70% of their time at a high-use area coincident with small-scale fisheries in Baja California Sur, Mexico (BCS). We assessed loggerhead bycatch mortality in this area by partnering with local fishers to 1) observe the two small-scale fleets that operated closest to the high-use area and 2) conduct shoreline surveys for discarded carcasses. Minimum annual bycatch mortality in just these two fleets at the high-use area exceeded 1000 loggerheads year⁻¹, rivaling that of oceanwide industrial-scale fisheries and threatening the persistence of this critically endangered population. As a result of fisher participation in this study and a bycatch awareness campaign, a consortium of local fishers and other citizens is working to eliminate bycatch and to establish a national loggerhead refuge. Because ubiquitous small-scale fisheries overlap with newly documented high-use areas in coastal waters worldwide, our case study suggests that small-scale fisheries may be among the greatest current threats to non-target megafauna. Future research is urgently needed to quantify small-scale fisheries bycatch worldwide. Localizing coastal high-use areas and mitigating bycatch in partnership with small-scale fishers may provide a crucial solution toward ensuring the persistence of vulnerable megafauna.
Large-scale 3D simulations of ICF and HEDP targets
NASA Astrophysics Data System (ADS)
Marinak, Michael M.
2000-10-01
The radiation hydrodynamics code HYDRA continues to be developed and applied to 3D simulations of a variety of targets for both inertial confinement fusion (ICF) and high-energy-density physics. Several packages have been added, enabling this code to perform ICF target simulations with accuracy similar to that of the two-dimensional codes in long historical use. These include a laser ray-trace and deposition package, a heavy-ion deposition package, implicit Monte Carlo photonics, and non-LTE opacities derived from XSN or the linearized response matrix approach (R. More, T. Kato, Phys. Rev. Lett. 81, 814 (1998); S. Libby, F. Graziani, R. More, T. Kato, Proceedings of the 13th International Conference on Laser Interactions and Related Plasma Phenomena, AIP, New York, 1997). LTE opacities can also be calculated online for arbitrary mixtures by combining tabular values generated by different opacity codes. Thermonuclear burn, charged-particle transport, neutron energy deposition, electron-ion coupling and conduction, and multigroup radiation diffusion packages are also installed. HYDRA can employ ALE hydrodynamics; a number of grid motion algorithms are available. Multi-material flows are resolved using material interface reconstruction. Results from large-scale simulations run on up to 1680 processors, using a combination of massively parallel processing and symmetric multiprocessing, will be described. A large-solid-angle simulation of Rayleigh-Taylor instability growth in a NIF ignition capsule has simultaneously resolved the full spectrum of the most dangerous modes that grow from surface roughness. Simulations of a NIF hohlraum illuminated with the initial 96-beam configuration have also been performed. The effect of the hohlraum's 3D intrinsic drive asymmetry on the capsule implosion will be considered. We will also discuss results from a Nova experiment in which a copper sphere is crushed by a planar shock. Several interacting hydrodynamic instabilities, including the Widnall instability, cause breakup of the resulting vortex ring.
An HTRF® Assay for the Protein Kinase ATM.
Adams, Phillip; Clark, Jonathan; Hawdon, Simon; Hill, Jennifer; Plater, Andrew
2017-01-01
Ataxia telangiectasia mutated (ATM) is a serine/threonine kinase that plays a key role in the regulation of DNA damage pathways and checkpoint arrest. In recent years, there has been growing interest in ATM as a therapeutic target due to its association with cancer cell survival following genotoxic stress such as radio- and chemotherapy. Large-scale targeted drug screening campaigns have been hampered, however, by technical issues associated with producing sufficient quantities of purified ATM and by the lack of a suitable high-throughput assay. Using purified, functionally active recombinant ATM and one of its physiological substrates, p53, we have developed an in vitro FRET-based activity assay that is suitable for high-throughput drug screening.
Perfect quantum multiple-unicast network coding protocol
NASA Astrophysics Data System (ADS)
Li, Dan-Dan; Gao, Fei; Qin, Su-Juan; Wen, Qiao-Yan
2018-01-01
In order to realize long-distance and large-scale quantum communication, it is natural to utilize quantum repeaters. For a general quantum multiple-unicast network, however, it has remained unclear how to complete communication tasks perfectly with fewer resources such as registers. In this paper, we solve this problem. By applying quantum repeaters to the multiple-unicast communication problem, we give encoding-decoding schemes for source nodes, internal nodes, and target nodes, respectively. Source-target node pairs share EPR pairs by using our encoding-decoding schemes over the quantum multiple-unicast network, and quantum communication can then be accomplished perfectly via teleportation. Compared with existing schemes, our schemes reduce resource consumption and realize long-distance transmission of quantum information.
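Once a source-target pair shares an EPR pair, the final transfer step is standard teleportation. The numpy sketch below simulates just that textbook primitive (it is not the paper's network-coding protocol): a Bell measurement at the source, two classical bits sent to the target, and Pauli corrections there.

```python
import numpy as np

rng = np.random.default_rng(7)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def apply_1q(gate, qubit, state, n=3):
    """Apply a one-qubit gate; qubit 0 is the most significant bit."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == qubit else I)
    return full @ state

def cnot(control, target, state, n=3):
    """Flip the target bit of each basis state whose control bit is 1."""
    new = state.copy()
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
            new[int("".join(map(str, bits)), 2)] = state[idx]
    return new

# Message qubit a|0> + b|1> on qubit 0; EPR pair on qubits 1 (source) and 2 (target).
a, b = 0.6, 0.8
epr = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
state = np.kron(np.array([a, b]), epr).astype(complex)

# Bell measurement at the source: CNOT(0 -> 1), then H on qubit 0, then measure.
state = apply_1q(H, 0, cnot(0, 1, state))
probs = np.abs(state) ** 2
p = np.array([probs[2 * k] + probs[2 * k + 1] for k in range(4)])
k = rng.choice(4, p=p / p.sum())          # the two classical bits, encoded as k
m0, m1 = k >> 1, k & 1
psi = state[2 * k: 2 * k + 2]             # collapsed state of the target qubit
psi = psi / np.linalg.norm(psi)

# Pauli corrections at the target node recover the message qubit.
if m1:
    psi = X @ psi
if m0:
    psi = Z @ psi
print(np.round(psi.real, 3))              # [0.6, 0.8] up to a global phase
```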
Nano/biosensors based on large-area graphene
NASA Astrophysics Data System (ADS)
Ducos, Pedro Jose
Two-dimensional materials have properties that make them ideal for applications in chemical and biomolecular sensing. Their high surface-to-volume ratio implies that all atoms are exposed to the environment, in contrast to three-dimensional materials, where most atoms are shielded from interactions inside the bulk. Graphene additionally has an extremely high carrier mobility, even at ambient temperature and pressure, which makes it ideal as a transduction device. The work presented in this thesis describes large-scale fabrication of Graphene Field Effect Transistors (GFETs), their physical and chemical characterization, and their application as biomolecular sensors. Initially, work was focused on developing an easily scalable fabrication process. A large-area graphene growth, transfer, and photolithography process was developed that scaled production from a few devices per transfer on a chip to over a thousand devices per transfer on a full wafer. Two approaches to biomolecule sensing were then investigated: nanoparticles and chemical linkers. Gold and platinum nanoparticles were used as intermediary agents to immobilize a biomolecule. First, gold nanoparticles were monodispersed and functionalized with thiolated probe DNA to yield DNA biosensors with a detection limit of 1 nM and high specificity against noncomplementary DNA. Second, devices were modified with platinum nanoparticles and functionalized with thiolated, genetically engineered scFv HER3 antibodies to realize a HER3 biosensor. These sensors retain the high affinity of the scFv fragment and show a detection limit of 300 pM. We then demonstrate covalent and non-covalent chemical linkers between graphene and antibodies. The chemical linker 1-pyrenebutanoic acid succinimidyl ester (pyrene) stacks on graphene through a completely non-covalent van der Waals interaction. The linker 4-azido-2,3,5,6-tetrafluorobenzoic acid succinimidyl ester (azide) is a photoactivated perfluorophenyl azide that covalently binds to graphene. A comparison using genetically engineered scFv HER3 antibodies shows detection limits of 10 nM and 100 pM for the pyrene and azide linkers, respectively. Finally, we use the azide linker to demonstrate large-scale fabrication of a multiplexed array for Lyme disease: two target proteins of the Lyme disease bacterium (Borrelia burgdorferi) are detected simultaneously in a mixture by confining the antibodies corresponding to each target to different regions of the chip, and we show that the concentrations of the two targets can be differentiated.
Advanced Distillation Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena Fanelli; Ravi Arora; Annalee Tonkovich
2010-03-24
The Advanced Distillation project was concluded on December 31, 2009. This U.S. Department of Energy (DOE) funded project was completed successfully and within budget during a timeline approved by DOE project managers, which included a one-year extension to the initial ending date. The subject technology, Microchannel Process Technology (MPT) distillation, was expected to provide both capital and operating cost savings compared to conventional distillation technology. With efforts from Velocys and its project partners, MPT distillation was successfully demonstrated at laboratory scale and its energy savings potential was calculated. While many objectives established at the beginning of the project were met, the project was only partially successful. At its conclusion, it appears that MPT distillation is not a good fit for the targeted separation of ethane and ethylene in large-scale ethylene production facilities, as greater advantages were seen for smaller-scale distillations. Early in the project, work involved flowsheet analyses to discern the economic viability of ethane-ethylene MPT distillation and develop strategies for maximizing its impact on the economics of the process. This study confirmed that, through modification of standard operating processes, MPT can enable net energy savings in excess of 20%. This advantage was used by ABB Lumus to determine the potential impact of MPT distillation on the ethane-ethylene market. The study indicated that a substantial market exists if the energy savings could be realized and if the installed capital cost of MPT distillation were on par with or less than conventional technology. Unfortunately, it was determined that the large number of MPT distillation units needed to perform ethane-ethylene separation for world-scale ethylene facilities makes the targeted separation a poor fit for the technology at the current state of manufacturing costs. Over the course of the project, distillation experiments were performed with the targeted mixture, ethane-ethylene, as well as with analogous low-relative-volatility systems: cyclohexane-hexane and cyclopentane-pentane. Devices and test stands were specifically designed for these efforts. Development progressed from experiments and models considering sections of a full-scale device to the design, fabrication, and operation of a single-channel distillation unit with integrated heat transfer. Throughout the project, analytical and numerical models and Computational Fluid Dynamics (CFD) simulations were validated with experiments in the process of developing this platform technology. Experimental trials demonstrated steady and controllable distillation for a variety of process conditions. Values of Height-to-an-Equivalent-Theoretical-Plate (HETP) ranging from less than 0.5 inch to a few inches were experimentally proven, demonstrating a ten-fold performance enhancement relative to conventional distillation. This improvement, while substantial, is not sufficient for MPT distillation to displace very large-scale distillation trains. Fortunately, parallel efforts in the area of business development have yielded other applications for MPT distillation, including smaller-scale separations that benefit from the flowsheet flexibility offered by the technology. Talks with multiple potential partners are underway. Their outcome will also help determine the path ahead for MPT distillation.
NASA Astrophysics Data System (ADS)
Tóthmérész, Béla; Mitchley, Jonathan; Jongepierová, Ivana; Baasch, Annett; Fajmon, Karel; Kirmer, Anita; Prach, Karel; Řehounková, Klára; Tischew, Sabine; Twiston-Davies, Grace; Dutoit, Thierry; Buisson, Elise; Jeunatre, Renaud; Valkó, Orsolya; Deák, Balázs; Török, Péter
2017-04-01
To sustain human well-being and quality of life, it is essential to develop and support green infrastructure (a strategically planned network of natural and semi-natural areas with other environmental features, designed and managed to deliver a wide range of ecosystem services). Developing and sustaining green infrastructure requires the conservation and restoration of biodiversity in natural and traditionally managed habitats. Species-rich landscapes in Europe have been maintained over centuries by various kinds of low-intensity use. Recently, they have suffered losses in extent and diversity due to land degradation through intensification or abandonment. Conservation of landscape-scale biodiversity requires the maintenance of species-rich habitats and the restoration of lost grasslands. We focus on landscape-level restoration studies spanning multiple sites over a wide geographical range (including the Czech Republic, France, Germany, Hungary, and the UK). From a Europe-wide perspective, we address four specific questions: (i) What were the aims and objectives of landscape-scale restoration? (ii) What results have been achieved? (iii) What are the costs of large-scale restoration? (iv) What policy tools are available for the restoration of landscape-scale biodiversity? We conclude that landscape-level restoration offers exciting new opportunities to reconnect long-disrupted ecological processes and to restore landscape connectivity. Generally, these measures make it possible to enhance biodiversity at the landscape scale. The development of policy tools to achieve restoration at the landscape scale is essential for meeting the ambitious targets of the Convention on Biological Diversity and the European Biodiversity Strategy for ecosystem restoration.
Mpc-scale diffuse radio emission in two massive cool-core clusters of galaxies
NASA Astrophysics Data System (ADS)
Sommer, Martin W.; Basu, Kaustuv; Intema, Huib; Pacaud, Florian; Bonafede, Annalisa; Babul, Arif; Bertoldi, Frank
2017-04-01
Radio haloes are diffuse synchrotron sources on scales of ˜1 Mpc that are found in merging clusters of galaxies and are believed to be powered by electrons re-accelerated by merger-driven turbulence. We present measurements of extended radio emission on similarly large scales in two clusters of galaxies hosting cool cores: Abell 2390 and Abell 2261. The analysis is based on interferometric imaging with the Karl G. Jansky Very Large Array, the Very Large Array, and the Giant Metrewave Radio Telescope. We present detailed radio images of the targets, subtract the compact emission components, and measure the spectral indices of the diffuse components. The radio emission in A2390 extends beyond a known sloshing-like brightness discontinuity and has a very steep in-band spectral slope at 1.5 GHz, similar to some known ultrasteep-spectrum radio haloes. The diffuse signal in A2261 is more extended than in A2390 but has lower luminosity. X-ray morphological indicators, derived from XMM-Newton X-ray data, place these clusters in the category of relaxed or regular systems, although some asymmetric features that may indicate past minor mergers are seen in the X-ray brightness images. If these two Mpc-scale radio sources are categorized as giant radio haloes, they call into question the common assumption that radio haloes occur exclusively in clusters undergoing violent merging activity, as well as commonly used criteria for distinguishing between radio haloes and minihaloes.
Forcey, Greg M.; Thogmartin, Wayne E.; Linz, George M.; McKann, Patrick C.
2014-01-01
Bird populations are influenced by many environmental factors at both large and small scales. Our study evaluated the influences of regional climate and land-use variables on the Northern Harrier (Circus cyaneus), Black Tern (Chlidonias niger), and Marsh Wren (Cistothorus palustris) in the prairie potholes of the upper Midwest of the United States. These species were chosen because their diverse habitat preferences represent the spectrum of habitat conditions present in the Prairie Potholes, ranging from open prairies to dense cattail marshes. We evaluated land-use covariates at three logarithmic spatial scales (1,000 ha, 10,000 ha, and 100,000 ha) and constructed models a priori using information from published habitat associations and climatic influences. The strongest influences on the abundance of each of the three species were the percentage of wetland area across all three spatial scales and precipitation in the year preceding that in which the bird surveys were conducted. Even among scales ranging over three orders of magnitude, the influence of spatial scale was small, as models with the same variables expressed at different scales were often in the best model subset. Examination of the effects of large-scale environmental variables on wetland birds elucidated relationships overlooked in many smaller-scale studies, such as the influences of climate and habitat variables at landscape scales. Given the spatial variation in the abundance of our focal species within the prairie potholes, our model predictions are especially useful for targeting locations, such as northeastern South Dakota and central North Dakota, where management and conservation efforts would be optimally beneficial. This modeling approach can also be applied to other species and geographic areas to focus landscape conservation efforts and subsequent small-scale studies, especially in constrained economic climates.
Impact decapitation from laboratory to basin scales
NASA Technical Reports Server (NTRS)
Schultz, P. H.; Gault, D. E.
1991-01-01
Although vertical hypervelocity impacts result in the annihilation (melting/vaporization) of the projectile, oblique impacts (less than 15 deg) fundamentally change the partitioning of energy, with fragments as large as 10 percent of the original projectile surviving. Laboratory experiments reveal that both ductile and brittle projectiles produce very similar results, where limiting disruption depends on stresses proportional to the vertical velocity component. Failure of the projectile at laboratory impact velocities (6 km/s) is largely controlled by stresses established before the projectile has penetrated a significant distance into the target. The planetary surface record exhibits numerous examples of oblique impacts with evidence for projectile failure and downrange sibling collisions.
Supporting Knowledge Transfer in IS Deployment Projects
NASA Astrophysics Data System (ADS)
Schönström, Mikael
Deploying new information systems is an expensive and complex task, and seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems, and technology is needed in a large-scale deployment of a corporate IS. Such deployments can therefore, to a large extent, be regarded as a knowledge management (KM) problem. An effective deployment requires that knowledge about the system be effectively transferred to the target organization (Ko et al. 2005).
Ribosomal Translocation: One Step Closer to the Molecular Mechanism
Shoji, Shinichiro; Walker, Sarah E.; Fredrick, Kurt
2010-01-01
Protein synthesis occurs in ribosomes, the targets of numerous antibiotics. How these large and complex machines read and move along mRNA has proven to be a challenging question. In this Review, we focus on translocation, the last step of the elongation cycle, in which movement of tRNA and mRNA is catalyzed by elongation factor G. Translocation entails large-scale movements of the tRNAs and conformational changes in the ribosome that require numerous tertiary contacts to be disrupted and reformed. We highlight recent progress toward elucidating the molecular basis of translocation and how various antibiotics influence tRNA–mRNA movement. PMID:19173642
NASA Astrophysics Data System (ADS)
Schumann, G.
2016-12-01
Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course, having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, for both event re-analysis and forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large-scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large-scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. Satellite flood maps were shown to complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the favorable case where model uncertainty is high, landscape topology is complex (i.e. an urbanized coastal area), and satellite flood maps are available (from SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. More often, however, model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, as in large rural inland river floodplains, so satellites add little value. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here, this was true for at least 60% of the many thousands of kilometers of simulated river flow length for which satellite flood maps existed. The next step of this project is to employ a "targeted observation" approach: an assimilation-based procedure that quantifies the impact observations have on model predictions, both locally and along the entire river system, when assimilated with the model at specific "overpass" locations.
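One hedged sketch of how a satellite flood map can score competing model runs and pick the "best possible" parameter set: the critical success index (CSI), a common binary-pattern skill measure, computed on toy wet/dry rasters. The arrays and the candidate parameter values are invented for illustration.

```python
# Score a 2-D flood simulation against a satellite-derived flood map using
# the critical success index; both inputs are boolean wet/dry rasters on a
# common grid.
import numpy as np

def critical_success_index(model_wet: np.ndarray, obs_wet: np.ndarray) -> float:
    """CSI = hits / (hits + misses + false alarms); 1.0 is a perfect match."""
    hits = np.sum(model_wet & obs_wet)
    misses = np.sum(~model_wet & obs_wet)
    false_alarms = np.sum(model_wet & ~obs_wet)
    return hits / float(hits + misses + false_alarms)

# Toy data: observed extent plus three simulated extents from different
# (hypothetical) roughness parameters; the middle one matches best.
rng = np.random.default_rng(0)
truth = rng.random((100, 100))
obs = truth > 0.6
sims = {p: (truth + rng.normal(scale=0.1, size=truth.shape) > p)
        for p in (0.5, 0.6, 0.7)}

best = max(sims, key=lambda p: critical_success_index(sims[p], obs))
print("best parameter (toy):", best)
```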
Smith, Kenneth J; Davy, Jeanette A; Rosenberg, Donald L
2010-04-01
This study examined alternative seven-, five-, and three-factor structures for the Academic Motivation Scale, with data from a large convenience sample of 2,078 students matriculating in various business courses at three AACSB-accredited regional comprehensive universities. In addition, the invariance of the scale's factor structure between male and female students and between undergraduate and Master of Business Administration (MBA) students was investigated. Finally, the internal consistency of the items loading on each of the seven AMS subscales was assessed, as was whether the correlations among the subscales supported a continuum of self-determination. Results for the full sample as well as the targeted subpopulations supported the seven-factor configuration of the scale, with adequate model fit achieved for all but the MBA student group. The data also generated acceptable internal consistency statistics for all of the subscales. However, in line with a number of previous studies, the correlations between subscales failed to fully support the scale's simplex structure as proposed by self-determination theory.
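For readers unfamiliar with the internal-consistency statistic referenced above, a minimal sketch of Cronbach's alpha over the items of one subscale follows; the item responses are simulated, not AMS data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix for one subscale.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Simulated subscale: 4 items driven by one shared latent trait plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(2078, 1))
subscale = latent + rng.normal(scale=0.8, size=(2078, 4))
print(round(cronbach_alpha(subscale), 2))  # typically ~0.7-0.8 for this setup
```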
Evaluation of target efficiencies for solid-liquid separation steps in biofuels production.
Kochergin, Vadim; Miller, Keith
2011-01-01
Development of liquid biofuels has entered a new phase of large-scale pilot demonstration. A number of plants that are in operation or under construction face the task of addressing the engineering challenges of creating a viable plant design, and of scaling up and optimizing various unit operations. It is well known that separation technologies account for 50-70% of both capital and operating costs. Additionally, reduction of environmental impact creates technological challenges that increase project cost without adding to the bottom line. Different technologies vary in terms of selection of unit operations; however, solid-liquid separations are likely to be a major contributor to the overall project cost. Despite the differences in pretreatment approaches, similar challenges arise for solid-liquid separation unit operations. A typical process for ethanol production from biomass includes several solid-liquid separation steps, depending on which particular stream is targeted for downstream processing. The nature of biomass-derived materials makes it either difficult or uneconomical to accomplish complete separation in a single step. Therefore, setting realistic efficiency targets for solid-liquid separations is an important task that influences overall process recovery and economics. Experimental data will be presented showing typical characteristics for pretreated cane bagasse at various stages of processing into cellulosic ethanol. Results of generic material balance calculations will be presented to illustrate the influence of separation target efficiencies on overall process recoveries and characteristics of waste streams.
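A toy material balance along the lines sketched in the abstract shows how per-step separation efficiencies compound into overall recovery; the step names and the 90-95% targets are illustrative assumptions, not values from the study.

```python
# Per-step liquor recovery targets for a hypothetical three-step train.
step_recovery = {
    "primary_press": 0.95,   # liquor recovered from pretreated bagasse
    "cake_wash":     0.92,
    "polish_filter": 0.90,
}

overall = 1.0
for step, eff in step_recovery.items():
    overall *= eff
    print(f"after {step:15s}: cumulative recovery = {overall:.1%}")

# Because recoveries multiply, relaxing any single step's target by 5 points
# cuts overall recovery by roughly 5% relative, which is why these targets
# drive process economics and waste-stream loading.
```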
Log-polar mapping-based scale space tracking with adaptive target response
NASA Astrophysics Data System (ADS)
Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing
2017-05-01
Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response. These trackers perform poorly when confronted with significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms scale variation of the target in Cartesian space into a shift along the logarithmic axis in log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian target response is computed from the response map and updated online with a learning-rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
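The core log-polar property the tracker exploits can be demonstrated in a few lines: a Cartesian scale change of s becomes a shift of log(s) along the log-radius axis, recoverable by 1-D correlation. The sketch below is a toy demonstration with synthetic blobs and nearest-neighbour sampling, not the authors' tracker.

```python
import numpy as np

def logpolar_profile(img, n_r=64, n_theta=90):
    """Average image values over angle -> 1-D profile in log-radius."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    log_r = np.linspace(0.0, np.log(min(cy, cx)), n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr = np.exp(log_r)[:, None]
    ys = np.clip((cy + rr * np.sin(theta)).round().astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(theta)).round().astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean(axis=1), log_r

# Target: a soft blob; "zoomed" target: the same blob 1.5x larger.
y, x = np.mgrid[:128, :128]
blob = lambda s: np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / (2 * s ** 2))
p1, log_r = logpolar_profile(blob(10.0))
p2, _ = logpolar_profile(blob(15.0))

# The shift along log-r that best aligns the profiles gives the scale.
corr = np.correlate(p2 - p2.mean(), p1 - p1.mean(), mode="full")
shift = (np.argmax(corr) - (len(p1) - 1)) * (log_r[1] - log_r[0])
print("estimated scale:", round(float(np.exp(shift)), 2))  # ~1.5
```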
Elastic and inelastic scattering for the 10B+58Ni system at near-barrier energies
NASA Astrophysics Data System (ADS)
Scarduelli, V.; Crema, E.; Guimarães, V.; Abriola, D.; Arazi, A.; de Barbará, E.; Capurro, O. A.; Cardona, M. A.; Gallardo, J.; Hojman, D.; Martí, G. V.; Pacheco, A. J.; Rodrígues, D.; Yang, Y. Y.; Deshmukh, N. N.; Paes, B.; Lubian, J.; Mendes Junior, D. R.; Morcelle, V.; Monteiro, D. S.
2017-11-01
Full angular distributions of 10B elastically and inelastically scattered by 58Ni have been measured at different energies around the Coulomb barrier. The elastic and inelastic scattering of 10B on a medium-mass target has been measured for the first time. The obtained angular distributions have been analyzed in terms of large-scale coupled reaction channel calculations, where several inelastic transitions of the projectile and the target, as well as the most relevant one- and two-step transfer reactions, have been included in the coupling matrix. The roles of spin reorientation, the spin-orbit interaction, and the large ground-state deformation of 10B in the reaction mechanism were also investigated. The real part of the interaction potential between projectile and target was represented by a parameter-free double-folding potential, whereas no imaginary potential at the surface was considered. In this sense, the theoretical calculations were parameter free, and their results were compared to experimental data to investigate the relative importance of the different reaction channels. A striking influence of the ground-state spin reorientation of the 10B nucleus was found, while all transfer reactions investigated contributed only minimally to the dynamics of the system. Finally, the large static deformation of 10B and the spin-orbit coupling can also play an important role in the system studied.
Shelton, Jeremy B; Barocas, Daniel A; Conway, Frances; Hart, Kathleen; Nelson, Kinloch; Richstone, Lee; Gonzalez, Ricardo R; Raman, Jay D; Scherr, Douglas S
2005-05-01
To estimate the incidence of prostate cancer among African-American men and Caribbean immigrants to the United States, to assess the applicability of large-scale prostate screening trials to a community screening program, and to recruit unscreened men. African-American and Caribbean-American men were targeted with a community-based prostate cancer screening program in Jamaica, New York. Serum prostate-specific antigen measurement and digital rectal examination were used to identify abnormal findings. The incidence of an abnormal screening examination was used to project the incidence of prostate cancer, which was compared with that in other reported trials. The projected incidence of prostate cancer among African-Americans and Caribbean-Americans older than 50 years was 8% and 7%, respectively, similar to that reported in other trials of African-American men. The projected incidence of prostate cancer in Caribbean-American men aged 40 to 49 years was 1%, the same as the high rate reported among Caribbean men. As in other trials, a family history of prostate cancer and age were strong predictors of abnormal findings. Of the recruited men older than 50 years, 58% had never been screened, compared with 42% nationally. Large population-based screening trials have identified ethnic groups at high risk of prostate cancer. This trial detected high rates of abnormal screening findings by targeting ethnicity. The incidence of an abnormal examination was high in Caribbean-American men younger than 50 years. Finally, this trial successfully recruited underscreened men.
How the amygdala affects emotional memory by altering brain network properties.
Hermans, Erno J; Battaglia, Francesco P; Atsak, Piray; de Voogd, Lycia D; Fernández, Guillén; Roozendaal, Benno
2014-07-01
The amygdala has long been known to play a key role in supporting memory for emotionally arousing experiences. For example, classical fear conditioning depends on neural plasticity within this anterior medial temporal lobe region. Beneficial effects of emotional arousal on memory, however, are not restricted to simple associative learning. Our recollection of emotional experiences often includes rich representations of, e.g., spatiotemporal context, visceral states, and stimulus-response associations. Critically, such memory features are known to depend heavily on regions elsewhere in the brain. These observations led to the modulation account of amygdala function, which postulates that amygdala activation enhances memory consolidation by facilitating neural plasticity and information storage processes in its target regions. Rodent work in past decades has identified the most important brain regions and neurochemical processes involved in these modulatory actions, and neuropsychological and neuroimaging work in humans has produced a large body of convergent data. Importantly, recent methodological developments make it increasingly realistic to monitor neural interactions underlying such modulatory effects as they unfold. For instance, functional connectivity network modeling in humans has demonstrated how information exchanges between the amygdala and specific target regions occur within the context of large-scale neural network interactions. Furthermore, electrophysiological and optogenetic techniques in rodents are beginning to make it possible to quantify and even manipulate such interactions with millisecond precision. In this paper we discuss how these developments will likely lead to an updated view of the amygdala as a critical nexus within large-scale networks supporting different aspects of memory processing for emotionally arousing experiences. Copyright © 2014 Elsevier Inc. All rights reserved.
Reliability of observer ratings in the assessment of personality disorders: a preliminary study.
Coolidge, F L; Burns, E M; Mooney, J A
1995-01-01
A 200-item, self-report personality disorder inventory (Coolidge Axis II Inventory; CATI) was administered to 52 married target subjects. Their spouses and a close friend completed a significant-other form about the targets. The mean correlation across all personality disorder scales was .51 for targets-spouses, .36 for targets-friends, and .41 for spouses-friends. Twenty-eight target-spouse correlations were significant and ranged from .99 to -.40. The mean correlation for the 13 individual personality disorder scales was .46 for targets-spouses and ranged from .63 for the histrionic scale to .27 for the paranoid scale. The results were interpreted as establishing a basis for significant-other assessment of personality disorders.
Farmer, Cristan A; Aman, Michael G
2010-01-01
Although often lacking "malice", aggression is fairly common in children with intellectual or developmental disability (I/DD). Despite this, there are no scales available that are appropriate for an in-depth analysis of aggressive behavior in this population. Such scales are needed for the study of aggressive behavior, which is a common target symptom in clinical trials. We assessed the reliability and validity of the Children's Scale of Hostility and Aggression: Reactive/Proactive (C-SHARP), a new aggression scale created for children with I/DD. Data are presented from a survey of 365 children with I/DD aged 3-21 years. Interrater reliability was very high for the Problem Scale, which characterizes type of aggression. Reliability was lower but largely acceptable for the Provocation Scale, which assesses motivation. Validity of the Problem Scale was supported by expected differences in children with autism, Down syndrome, comorbid disruptive behavior disorders (DBDs) and ADHD. The Provocation Scale, which categorizes behavior as proactive or reactive, showed expected differences in children with DBD, but was less effective in those with ADHD. The C-SHARP appears to have fundamentally sound psychometric characteristics, although more research is needed.
Enyeart, Peter J; Mohr, Georg; Ellington, Andrew D; Lambowitz, Alan M
2014-01-13
Mobile group II introns are bacterial retrotransposons that combine the activities of an autocatalytic intron RNA (a ribozyme) and an intron-encoded reverse transcriptase to insert site-specifically into DNA. They recognize DNA target sites largely by base pairing of sequences within the intron RNA and achieve high DNA target specificity by using the ribozyme active site to couple correct base pairing to RNA-catalyzed intron integration. Algorithms have been developed to program the DNA target site specificity of several mobile group II introns, allowing them to be made into 'targetrons.' Targetrons function for gene targeting in a wide variety of bacteria and typically integrate at efficiencies high enough to be screened easily by colony PCR, without the need for selectable markers. Targetrons have found wide application in microbiological research, enabling gene targeting and genetic engineering of bacteria that had been intractable to other methods. Recently, a thermostable targetron has been developed for use in bacterial thermophiles, and new methods have been developed for using targetrons to position recombinase recognition sites, enabling large-scale genome-editing operations, such as deletions, inversions, insertions, and 'cut-and-pastes' (that is, translocation of large DNA segments), in a wide range of bacteria at high efficiency. Using targetrons in eukaryotes presents challenges due to the difficulties of nuclear localization and sub-optimal magnesium concentrations, although supplementation with magnesium can increase integration efficiency, and directed evolution is being employed to overcome these barriers. Finally, spurred by new methods for expressing group II intron reverse transcriptases that yield large amounts of highly active protein, thermostable group II intron reverse transcriptases from bacterial thermophiles are being used as research tools for a variety of applications, including qRT-PCR and next-generation RNA sequencing (RNA-seq). The high processivity and fidelity of group II intron reverse transcriptases along with their novel template-switching activity, which can directly link RNA-seq adaptor sequences to cDNAs during reverse transcription, open new approaches for RNA-seq and the identification and profiling of non-coding RNAs, with potentially wide applications in research and biotechnology.
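To make the base-pairing idea concrete, here is a deliberately simplified scorer that slides a hypothetical EBS-like guide along a DNA strand and counts Watson-Crick matches. It ignores strand orientation, position weighting, and the protein-recognized positions that real targetron design algorithms handle; the sequences are invented.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def pairing_score(guide: str, dna_site: str) -> int:
    """Count positions where the DNA base is Watson-Crick complementary
    to the guide base (a crude proxy for intron-target pairing energy)."""
    return sum(COMPLEMENT[g] == d for g, d in zip(guide, dna_site))

guide = "AGGTCAT"                                # hypothetical EBS-like sequence
genome = "CCTCCAGTAGTTTCCAGTACGGATCCAGTA"        # toy target strand

# Best candidate insertion site = window with maximal complementarity.
best = max(range(len(genome) - len(guide) + 1),
           key=lambda i: pairing_score(guide, genome[i:i + len(guide)]))
window = genome[best:best + len(guide)]
print(best, window, pairing_score(guide, window))
```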
Liu, Xiaonan; Ding, Wentao; Jiang, Huifeng
2017-07-19
Plant natural products (PNPs) are widely used as pharmaceuticals, nutraceuticals, seasonings, pigments, etc., with enormous commercial value on the global market. However, most of these PNPs are still being extracted from plants. A resource-conserving and environment-friendly synthesis route for PNPs that utilizes microbial cell factories has attracted increasing attention since the 1940s. However, at present only a handful of PNPs are being produced by microbial cell factories at an industrial scale, and there are still many challenges to their large-scale application. One of the challenges is that most biosynthetic pathways of PNPs are still unknown, which largely limits the number of candidate PNPs for heterologous microbial production. Another challenge is that the metabolic fluxes toward the target products in microbial hosts are often hindered by poor precursor supply, low catalytic activity of enzymes, and obstructed product transport. Consequently, despite intensive studies on the metabolic engineering of microbial hosts, the fermentation costs of most heterologously produced PNPs are still too high for industrial-scale production. In this paper, we review several aspects of PNP production in microbial cell factories, including important design principles and recent progress in pathway mining and metabolic engineering. In addition, implemented cases of industrial-scale production of PNPs in microbial cell factories are also highlighted.
Vangelista, Silvia; Cinquanta, Eugenio; Martella, Christian; Alia, Mario; Longo, Massimo; Lamperti, Alessio; Mantovan, Roberto; Basset, Francesco Basso; Pezzoli, Fabio; Molle, Alessandro
2016-04-29
Large-scale integration of MoS2 in electronic devices requires the development of reliable and cost-effective deposition processes, leading to uniform MoS2 layers on a wafer scale. Here we report on the detailed study of the heterogeneous vapor-solid reaction between a pre-deposited molybdenum solid film and sulfur vapor, thus resulting in a controlled growth of MoS2 films onto SiO2/Si substrates with a tunable thickness and cm²-scale uniformity. Based on Raman spectroscopy and photoluminescence, we show that the degree of crystallinity in the MoS2 layers is dictated by the deposition temperature and thickness. In particular, the MoS2 structural disorder observed at low temperature (<750 °C) and low thickness (two layers) evolves to a more ordered crystalline structure at high temperature (1000 °C) and high thickness (four layers). From an atomic force microscopy investigation prior to and after sulfurization, this parametrical dependence is associated with the inherent granularity of the MoS2 nanosheet that is inherited by the pristine morphology of the pre-deposited Mo film. This work paves the way to a closer control of the synthesis of wafer-scale and atomically thin MoS2, potentially extendable to other transition metal dichalcogenides and hence targeting massive and high-volume production for electronic device manufacturing.
Lost in the city: revisiting Milgram's experiment in the age of social networks.
Szüle, János; Kondor, Dániel; Dobos, László; Csabai, István; Vattay, Gábor
2014-01-01
As more and more users access social network services from smart devices with GPS receivers, the available amount of geo-tagged information makes repeating classical experiments possible on global scales and with unprecedented precision. Inspired by the original experiments of Milgram, we simulated message routing within a representative sub-graph of the network of Twitter users with about 6 million geo-located nodes and 122 million edges. We picked pairs of users from two distant metropolitan areas and tried to find a route between them using local geographic information only; our method was to forward messages to the friend living closest to the target. We found that the examined network is navigable on large scales, but navigability breaks down at the city scale and the network becomes unnavigable at intra-city distances. This means that messages usually arrived in the close proximity of the target in only 3-6 steps, but only in about 20% of the cases was it possible to find a route all the way to the recipient, in spite of the network being connected. This phenomenon is supported by the distribution of link lengths: on larger scales the distribution behaves approximately as P(d) ≈ 1/d, which was found earlier by Kleinberg to allow efficient navigation, while on smaller scales a fractal structure becomes apparent. The intra-city correlation dimension of the network was found to be D2 = 1.25, less than the dimension D2 = 1.78 of the distribution of the population.
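The forwarding rule used in the simulation, pass the message to the friend geographically closest to the target, reduces to a few lines of greedy search. The sketch below uses a toy three-city graph and haversine distance, with a "stuck" condition standing in for the navigability breakdown described above.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def greedy_route(graph, coords, source, target, max_steps=50):
    path, node = [source], source
    for _ in range(max_steps):
        if node == target:
            return path, True
        nxt = min(graph[node],
                  key=lambda n: haversine_km(coords[n], coords[target]))
        if haversine_km(coords[nxt], coords[target]) >= \
           haversine_km(coords[node], coords[target]):
            return path, False   # no neighbour improves: routing is stuck
        path.append(nxt)
        node = nxt
    return path, False

coords = {"NY": (40.7, -74.0), "CHI": (41.9, -87.6), "LA": (34.1, -118.2)}
graph = {"NY": ["CHI"], "CHI": ["NY", "LA"], "LA": ["CHI"]}
print(greedy_route(graph, coords, "NY", "LA"))  # (['NY', 'CHI', 'LA'], True)
```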
Ghanakota, Phani; van Vlijmen, Herman; Sherman, Woody; Beuming, Thijs
2018-04-23
The ability to target protein-protein interactions (PPIs) with small molecule inhibitors offers great promise in expanding the druggable target space and addressing a broad range of untreated diseases. However, due to their nature and function of interacting with protein partners, PPI interfaces tend to extend over large surfaces without the typical pockets of enzymes and receptors. These features present unique challenges for small molecule inhibitor design. As such, determining whether a particular PPI of interest could be pursued with a small molecule discovery strategy requires an understanding of the characteristics of the PPI interface and whether it has hotspots that can be leveraged by small molecules to achieve desired potency. Here, we assess the ability of mixed-solvent molecular dynamics (MSMD) simulations to detect hotspots at PPI interfaces. MSMD simulations using three cosolvents (acetonitrile, isopropanol, and pyrimidine) were performed on a large test set of 21 PPI targets that have been experimentally validated by small molecule inhibitors. We compare MSMD, which includes explicit solvent and full protein flexibility, to a simpler approach that does not include dynamics or explicit solvent (SiteMap) and find that MSMD simulations reveal additional information about the characteristics of these targets and the ability of small molecules to inhibit the PPI interface. In the few cases where MSMD simulations did not detect hotspots, we explore the shortcomings of this technique and propose future improvements. Finally, using Interleukin-2 as an example, we highlight the advantage of the MSMD approach for detecting transient cryptic druggable pockets that exist at PPI interfaces.
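As a rough sketch of how hotspots can be read out of cosolvent simulations, the code below bins probe positions onto a 3-D grid and flags voxels whose occupancy z-score is high. The coordinates are simulated stand-ins for trajectory frames, and real MSMD analyses involve normalization and clustering details omitted here.

```python
import numpy as np

# Simulated probe-atom positions: diffuse bulk cosolvent plus one dense
# cluster standing in for a hotspot at an interface.
rng = np.random.default_rng(2)
bulk = rng.uniform(0.0, 60.0, size=(50_000, 3))
hotspot = rng.normal(loc=(30.0, 30.0, 30.0), scale=1.5, size=(2_000, 3))
probe_xyz = np.vstack([bulk, hotspot])            # Angstrom coordinates

# Occupancy on a 1 A voxel grid.
edges = [np.arange(0.0, 61.0, 1.0)] * 3
counts, _ = np.histogramdd(probe_xyz, bins=edges)

# Z-score each voxel against the grid-wide statistics; high z = candidate
# hotspot voxel worth inspecting on the protein surface.
z = (counts - counts.mean()) / counts.std()
ix = np.unravel_index(np.argmax(z), z.shape)
print("strongest voxel (A):", ix, "z =", round(float(z[ix]), 1))
```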
Growth of equilibrium structures built from a large number of distinct component types.
Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen
2014-09-14
We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.
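A back-of-envelope Boltzmann-weight reading of two of these findings, that the designed energy scale must be chosen appropriately and that a narrow designed-energy distribution maximizes fidelity, can be written directly. Here E_d is a designed bond depth (in units of kT), W_u is the summed Boltzmann weight of the undesigned alternatives, and all counts and energies are invented for illustration.

```python
import numpy as np

def p_correct(E_d, W_undesigned, kT=1.0):
    """P(correct component binds) for one designed bond of depth E_d
    competing with a total undesigned Boltzmann weight W_undesigned."""
    w_d = np.exp(E_d / kT)
    return w_d / (w_d + W_undesigned)

rng = np.random.default_rng(3)
N_types = 1000                                   # many distinct component types
W_u = np.exp(rng.normal(2.0, 0.5, size=N_types)).sum()  # weak undesigned bonds

# Same mean designed depth, narrow vs broad distribution across growth events:
for spread in (0.0, 3.0):
    E_d = rng.normal(12.0, spread, size=10_000)  # one designed bond per event
    print(f"spread={spread}: mean fidelity = {p_correct(E_d, W_u).mean():.3f}")
```

Because P(correct) is concave in E_d in the high-fidelity regime, spreading the designed energies around the same mean only lowers average fidelity, which is the toy analogue of the paper's "as narrow as possible" result.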
First shock tuning and backscatter measurements for large case-to-capsule ratio beryllium targets
NASA Astrophysics Data System (ADS)
Loomis, Eric; Yi, Austin; Kline, John; Kyrala, George; Simakov, Andrei; Wilson, Doug; Ralph, Joe; Dewald, Eduard; Strozzi, David; Celliers, Peter; Millot, Marius; Tommasini, Riccardo
2016-10-01
The current underperformance of target implosions on the National Ignition Facility (NIF) has necessitated scaling back from high convergence ratio to access regimes of reduced physics uncertainty. These regimes, we expect, are more predictable by existing radiation-hydrodynamics codes, giving us a better starting point for isolating key physics questions. One key question is the lack of a predictable in-flight and hot-spot shape due to the complex hohlraum radiation environment. To achieve more predictable, shape-tunable implosions we have designed and fielded a large, 4.2 case-to-capsule ratio (CCR) target at the NIF using 6.72 mm diameter Au hohlraums and 1.6 mm diameter Cu-doped Be capsules. Simulations show that at these dimensions, during a 10 ns 3-shock laser pulse reaching 270 eV hohlraum temperatures, the interactions between hohlraum and capsule plasmas, which at lower CCR impede beam propagation through artificial plasma stagnation, are reduced. In this talk we will present measurements of early-time drive symmetry using two-axis line-imaging velocimetry (VISAR) and streaked radiography measuring the velocity of the imploding shell, and their comparisons to post-shot calculations using the code HYDRA (Lawrence Livermore National Laboratory).
The salience network causally influences default mode network activity during moral reasoning
Wilson, Stephen M.; D’Esposito, Mark; Kayser, Andrew S.; Grossman, Scott N.; Poorzand, Pardis; Seeley, William W.; Miller, Bruce L.; Rankin, Katherine P.
2013-01-01
Large-scale brain networks are integral to the coordination of human behaviour, and their anatomy provides insights into the clinical presentation and progression of neurodegenerative illnesses such as Alzheimer’s disease, which targets the default mode network, and behavioural variant frontotemporal dementia, which targets a more anterior salience network. Although the default mode network is recruited when healthy subjects deliberate about ‘personal’ moral dilemmas, patients with Alzheimer’s disease give normal responses to these dilemmas whereas patients with behavioural variant frontotemporal dementia give abnormal responses to these dilemmas. We hypothesized that this apparent discrepancy between activation- and patient-based studies of moral reasoning might reflect a modulatory role for the salience network in regulating default mode network activation. Using functional magnetic resonance imaging to characterize network activity of patients with behavioural variant frontotemporal dementia and healthy control subjects, we present four converging lines of evidence supporting a causal influence from the salience network to the default mode network during moral reasoning. First, as previously reported, the default mode network is recruited when healthy subjects deliberate about ‘personal’ moral dilemmas, but patients with behavioural variant frontotemporal dementia, in whom atrophy targets the salience network, give abnormally utilitarian responses to these dilemmas. Second, patients with behavioural variant frontotemporal dementia have reduced recruitment of the default mode network compared with healthy control subjects when deliberating about these dilemmas. Third, a Granger causality analysis of functional neuroimaging data from healthy control subjects demonstrates directed functional connectivity from nodes of the salience network to nodes of the default mode network during moral reasoning. Fourth, this Granger causal influence is diminished in patients with behavioural variant frontotemporal dementia. These findings are consistent with a broader model in which the salience network modulates the activity of other large-scale networks, and suggest a revision to a previously proposed ‘dual-process’ account of moral reasoning. These findings also characterize network interactions underlying abnormal moral reasoning in frontotemporal dementia, which may serve as a model for the aberrant judgement and interpersonal behaviour observed in this disease and in other disorders of social function. More broadly, these findings link recent work on the dynamic interrelationships between large-scale brain networks to observable impairments in dementia syndromes, which may shed light on how diseases that target one network also alter the function of interrelated networks. PMID:23576128
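The third line of evidence rests on Granger causality, which can be sketched with standard tooling: simulate two ROI-like time series with a lagged salience-to-default-mode influence and test for it. This is a toy stand-in for the fMRI analysis, using statsmodels rather than the authors' pipeline.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated ROI signals: the DMN series depends on the SN series two
# samples back, so SN should Granger-cause DMN but not vice versa.
rng = np.random.default_rng(4)
n = 400
sn = rng.normal(size=n)                       # salience network signal
dmn = np.zeros(n)                             # default mode network signal
for t in range(2, n):
    dmn[t] = 0.5 * dmn[t - 1] + 0.4 * sn[t - 2] + rng.normal(scale=0.5)

# grangercausalitytests asks whether column 2 Granger-causes column 1.
res = grangercausalitytests(np.column_stack([dmn, sn]), maxlag=3,
                            verbose=False)
print("lag-2 F-test p-value:", res[2][0]["ssr_ftest"][1])  # ~0: causal lag found
```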
Tracing Multi-Scale Climate Change at Low Latitude from Glacier Shrinkage
NASA Astrophysics Data System (ADS)
Moelg, T.; Cullen, N. J.; Hardy, D. R.; Kaser, G.
2009-12-01
Significant shrinkage of glaciers on top of Africa's highest mountain (Kilimanjaro, 5895 m a.s.l.) has been observed between the late 19th century and the present. Multi-year data from our automatic weather station on the largest remaining slope glacier at 5873 m allow us to force and verify a process-based distributed glacier mass balance model. This generates insights into energy and mass fluxes at the glacier-atmosphere interface, their feedbacks, and how they are linked to atmospheric conditions. By means of numerical atmospheric modeling and global climate model simulations, we explore the linkages of the local climate in Kilimanjaro's summit zone to larger-scale climate dynamics - which suggests a causal connection between Indian Ocean dynamics, mesoscale mountain circulation, and glacier mass balance. Based on this knowledge, the verified mass balance model is used for backward modeling of the steady-state glacier extent observed in the 19th century, which yields the characteristics of local climate change between that time and the present (30-45% less precipitation, 0.1-0.3 hPa less water vapor pressure, 2-4 percentage units less cloud cover at present). Our multi-scale approach provides an important contribution, from a cryospheric viewpoint, to the understanding of how large-scale climate change propagates to the tropical free troposphere. Ongoing work in this context targets the millennium-scale relation between large-scale climate and glacier behavior (by downscaling precipitation), and the possible effects of regional anthropogenic activities (land use change) on glacier mass balance.
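The backward-modeling step amounts to solving for the climate perturbation that makes the net annual mass balance vanish at the observed 19th-century glacier extent. A drastically simplified, hypothetical version of that root-finding step, with an invented linear balance function in place of the full energy/mass-balance model:

```python
from scipy.optimize import brentq

def annual_balance(precip_scale, melt_mm=900.0, accum_mm=620.0):
    """Net specific balance (mm w.e./yr) under scaled precipitation.
    Invented placeholder: accumulation scales with precipitation,
    ablation is held fixed."""
    return precip_scale * accum_mm - melt_mm

# Steady state <=> annual_balance == 0; solve for the precipitation factor
# that sustains the 19th-century extent.
scale = brentq(annual_balance, 0.5, 3.0)
print(f"required 19th-century precipitation: {scale:.2f}x present")
print(f"i.e. present-day is ~{100 * (1 - 1 / scale):.0f}% drier")  # ~31%
```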
Field cage studies and progressive evaluation of genetically-engineered mosquitoes.
Facchinelli, Luca; Valerio, Laura; Ramsey, Janine M; Gould, Fred; Walsh, Rachael K; Bond, Guillermo; Robert, Michael A; Lloyd, Alun L; James, Anthony A; Alphey, Luke; Scott, Thomas W
2013-01-01
A genetically-engineered strain of the dengue mosquito vector Aedes aegypti, designated OX3604C, was evaluated in large outdoor cage trials for its potential to improve dengue prevention efforts by inducing population suppression. OX3604C is engineered with a repressible genetic construct that causes a female-specific flightless phenotype. Wild-type females that mate with homozygous OX3604C males will not produce reproductive female offspring. Weekly introductions of OX3604C males eliminated all three targeted Ae. aegypti populations after 10-20 weeks in a previous laboratory cage experiment. As part of the phased, progressive evaluation of this technology, we carried out an assessment in large outdoor field enclosures in dengue-endemic southern Mexico. OX3604C males were introduced weekly into field cages containing stable target populations, initially at 10:1 ratios. Statistically significant target population decreases were detected in 4 of 5 treatment cages after 17 weeks, but none of the treatment populations were eliminated. Mating competitiveness experiments, carried out to explore the discrepancy between lab and field cage results, revealed a maximum mating disadvantage of up to 59.1% for OX3604C males, which accounted for a significant part of the 97% fitness cost predicted by a mathematical model to be necessary to produce the field cage results. Our results indicate that OX3604C may not be effective in large-scale releases. A strain with the same transgene that is not encumbered by a large mating disadvantage, however, could have improved prospects for dengue prevention. Insights from large outdoor cage experiments may provide an important part of the progressive, stepwise evaluation of genetically-engineered mosquitoes.
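The suppression logic, and why a large mating disadvantage matters, can be caricatured in a discrete-generation model: females mated to released males leave no reproductive daughters, so population growth scales with the wild male share of matings. All parameter values below are illustrative, not the paper's fitted model.

```python
def simulate(weeks, ratio=10.0, c=1.0, growth=1.8, n0=100.0):
    """Toy weekly-release model. c = released-male mating competitiveness
    (1.0 = fully competitive); the weekly release is fixed at ratio * n0."""
    n = n0                                      # wild adult females
    for _ in range(weeks):
        wild_share = n / (n + c * ratio * n0)   # matings won by wild males
        n *= growth * wild_share                # only wild-sired daughters count
        if n < 1.0:
            return "eliminated"
    return f"{n:.0f} females remain"

print("fully competitive males:", simulate(20, c=1.0))   # rapid elimination
print("97% fitness cost:", simulate(20, c=0.03))         # population persists
```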
Private algorithms for the protected in social network search
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-01
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606
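A toy rendering of the paper's central idea, noise injected into the prioritization of protected individuals only: the sketch below runs a greedy graph search whose frontier priorities for protected nodes are perturbed with Laplace noise (generated as a difference of two exponentials). The graph, score function, and epsilon are invented for illustration.

```python
import heapq
import random

def noisy_targeted_search(graph, score, protected, seeds, budget, eps=0.5):
    """Examine nodes in (noisily) descending score order until `budget`
    members of the targeted subpopulation have been identified."""
    rng = random.Random(5)
    seen, found, frontier = set(seeds), [], []
    for s in seeds:
        heapq.heappush(frontier, (0.0, s))
    while frontier and len(found) < budget:
        _, node = heapq.heappop(frontier)
        if score(node) > 0.5:            # costly check, e.g. an investigation
            found.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                priority = score(nb)
                if nb in protected:      # privacy via noisy prioritization:
                    # Laplace(0, 1/eps) = difference of two Exp(eps) draws
                    priority += rng.expovariate(eps) - rng.expovariate(eps)
                heapq.heappush(frontier, (-priority, nb))
    return found

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(noisy_targeted_search(graph, lambda n: 0.9 if n == 3 else 0.1,
                            protected={1, 2}, seeds=[0], budget=2))  # [3]
```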
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Feng; Jaitly, Navdeep; Jayachandran, Hemalatha
2007-10-12
To identify phosphoproteins regulated by the phosphoprotein phosphatase (PPP) family of S/T phosphatases, we performed a large-scale characterization of changes in protein phosphorylation on extracts from HeLa cells treated with or without calyculin A, a potent PPP enzyme inhibitor. A label-free comparative phosphoproteomics approach using immobilized metal ion affinity chromatography and targeted tandem mass spectrometry was employed to discover and identify signatures based upon distinctive changes in abundance. Overall, 232 proteins were identified as either direct or indirect targets for PPP enzyme regulation. Most of the present identifications represent novel PPP enzyme targets at the level of both phosphorylation site and protein. These include phosphorylation sites within signaling proteins such as p120 Catenin, A Kinase Anchoring Protein 8, JunB, and Type II Phosphatidyl Inositol 4 Kinase. These data can be used to define underlying signaling pathways and events regulated by the PPP family of S/T phosphatases.