Sample records for query "typically requires multiple"

  1. Mapping of multiple parameter m-health scenarios to mobile WiMAX QoS variables.

    PubMed

    Alinejad, Ali; Philip, N; Istepanian, R S H

    2011-01-01

    Multiparameter m-health scenarios with bandwidth-demanding requirements will be one of the key applications in future 4G mobile communication systems. These applications will potentially require specific spectrum allocations with higher quality of service requirements. Furthermore, one of the key 4G technologies targeting m-health will be medical applications based on WiMAX systems. Hence, it is timely to evaluate such multiple parametric m-health scenarios over mobile WiMAX networks. In this paper, we address the preliminary performance analysis of a mobile WiMAX network for multiparametric telemedical scenarios. In particular, we map the medical QoS to typical WiMAX QoS parameters to optimise the performance of these parameters in a typical m-health scenario. Preliminary performance analyses of the proposed multiparametric scenarios are evaluated to provide essential information for future medical QoS requirements and constraints in these telemedical network environments.

  2. Children's Comprehension Monitoring of Multiple Situational Dimensions of a Narrative

    ERIC Educational Resources Information Center

    Wassenburg, Stephanie I.; Beker, Katinka; van den Broek, Paul; van der Schoot, Menno

    2015-01-01

    Narratives typically consist of information on multiple aspects of a situation. In order to successfully create a coherent representation of the described situation, readers are required to monitor all these situational dimensions during reading. However, little is known about whether these dimensions differ in the ease with which they can be…

  3. Estimating the optimal dynamic antipsychotic treatment regime: Evidence from the sequential multiple assignment randomized CATIE Schizophrenia Study

    PubMed Central

    Shortreed, Susan M.; Moodie, Erica E. M.

    2012-01-01

    Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy of treatment, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488

  4. Managing Multiple Tasks in Complex, Dynamic Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Sketchy planners are designed to achieve goals in realistically complex, time-pressured, and uncertain task environments. However, the ability to manage multiple, potentially interacting tasks in such environments requires extensions to the functionality these systems typically provide. This paper identifies a number of factors affecting how interacting tasks should be prioritized, interrupted, and resumed, and then describes a sketchy planner called APEX that takes account of these factors when managing multiple tasks.

  5. Multiple fracturing experiments: propellant and borehole considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuderman, J F

    1982-01-01

    The technology for multiple fracturing of a wellbore, using progressively burning propellants, is being developed to enhance natural gas recovery. Multiple fracturing appears especially attractive for stimulating naturally fractured reservoirs such as Devonian shales where it is expected to effectively intersect existing fractures and connect them to a wellbore. Previous experiments and modeling efforts defined pressure risetimes required for multiple fracturing as a function of borehole diameter, but identified only a weak dependence on peak pressure attained. Typically, from four to eight equally spaced major fractures occur as a function of pressure risetime and in situ stress orientation. The present experiments address propellant and rock response considerations required to achieve the desired pressure risetimes for reliable multiple fracturing.

  6. Visual Impairments in People with Severe and Profound Multiple Disabilities: An Inventory of Visual Functioning

    ERIC Educational Resources Information Center

    van den Broek, Ellen G. C.; Janssen, C. G. C.; van Ramshorst, T.; Deen, L.

    2006-01-01

    Background: The prevalence of visual impairments in people with severe and profound multiple disabilities (SPMD) is the subject of considerable debate and is difficult to assess. Methods: In a typical Dutch care organization, all clients with SPMD (n = 76) participated in the study and specific instruments adapted to these clients (requiring a…

  7. Achieving Airport Carbon Neutrality

    DOT National Transportation Integrated Search

    2016-03-01

    This report is a guide for airports that wish to reduce or eliminate greenhouse gas (GHG) emissions from existing buildings and operations. Reaching carbon neutrality typically requires the use of multiple mechanisms to first minimize energy consumpt...

  8. Evaluating the Effectiveness of the Stimulus Pairing Observation Procedure and Multiple Exemplar Instruction on Tact and Listener Responses in Children with Autism

    ERIC Educational Resources Information Center

    Byrne, Brittany L.; Rehfeldt, Ruth Anne; Aguirre, Angelica A.

    2014-01-01

    The stimulus pairing observation procedure (SPOP) combined with multiple exemplar instruction (MEI) has been shown to be effective with typically developing preschoolers in establishing the joint stimulus control required for the development of naming. The purpose of the current investigation was to evaluate the effectiveness and efficiency of the…

  9. Implementing bioinformatic workflows within the bioextract server

    USDA-ARS?s Scientific Manuscript database

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  10. Joining the Dots: The Challenge of Creating Coherent School Improvement

    ERIC Educational Resources Information Center

    Robinson, Viviane; Bendikson, Linda; McNaughton, Stuart; Wilson, Aaron; Zhu, Tong

    2017-01-01

    Background/Context: Sustained school improvement requires adequate organizational and instructional coherence, yet, in typical high schools, subject department organization, norms of teacher professional autonomy, and involvement in multiple initiatives present powerful obstacles to forging a coherent approach to improvement. This study examines…

  11. Inspiring Integration in College Students Reading Multiple Biology Texts

    ERIC Educational Resources Information Center

    Firetto, Carla

    2013-01-01

    Introductory biology courses typically present topics on related biological systems across separate chapters and lectures. A complete foundational understanding requires that students understand how these biological systems are related. Unfortunately, spontaneous generation of these connections is rare for novice learners. These experiments focus…

  12. Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines

    NASA Astrophysics Data System (ADS)

    Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath

    2017-04-01

    Cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users/applications. These transfer lines are composed of various critical sections, i.e. tee sections, elbows, flexible components, etc. The mechanical sustainability of these transfer lines under failure circumstances is a primary requirement for safe operation of the system and applications. The transfer lines need to be designed for multiple design constraints such as line layout, support locations, and space restrictions, and are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, loads from a leak in the insulation vacuum, etc. [1]. Analytical calculations and flexibility analysis using professional software were first performed for a typical transfer line without any flexible components, and the results were analysed for functional and mechanical load conditions; failure modes were identified along the critical sections. The same transfer line was then refurbished with flexible components and re-analysed for failure modes. The flexible components provide additional flexibility to the transfer-line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility-analysis software. The size and selection of the flexible components were optimized to meet the design requirements as per code.

  13. Bridging the gap between data analysis and data collection in FIA and forest monitoring globally: successes, research findings, and lessons learned from the Western US and Southeast Asia

    Treesearch

    Leif Mortenson

    2015-01-01

    Globally, national forest inventories (NFI) require a large work force typically consisting of multiple teams spread across multiple locations in order to successfully capture a given nation’s forest resources. This is true of the Forest Inventory and Analysis (FIA) program in the US and in many inventories in developing countries that are supported by USFS...

  14. Sub-Lexical Reading Intervention in a Student with Dyslexia and Asperger's Disorder

    ERIC Educational Resources Information Center

    Wright, Craig; Conlon, Elizabeth; Wright, Michalle; Dyck, Murray

    2011-01-01

    Dyslexia is a common presenting condition in clinic and educational settings. Unlike the homogenous groups used in randomised trials, educators typically manage children who have multiple developmental problems. Investigations are required into how these complex cases respond to treatment identified as efficacious by controlled trials. This study…

  15. Efficacy of silk channel injections with insecticides for management of Lepidopteran pests of sweet corn

    USDA-ARS?s Scientific Manuscript database

    The primary Lepidopteran pests of sweet corn in Georgia are the corn earworm, Helicoverpa zea (Boddie), and the fall armyworm, Spodoptera frugiperda (J.E. Smith). Control of these pests typically requires multiple insecticide applications from first silking until harvest, with commercial growers fre...

  16. Emergence of Intraverbal Responding Following Tact Instruction with Compound Stimuli

    ERIC Educational Resources Information Center

    Devine, Bailey; Carp, Charlotte L.; Hiett, Kiley A.; Petursdottir, Anna Ingeborg

    2016-01-01

    Effective intraverbal responding often requires control by multiple elements of a verbal stimulus. The purpose of this study was to examine the emergence of such intraverbal relations following tact instruction with compound stimuli and to analyze any resulting error patterns. Participants were seven typically developing children between 3 and…

  17. Exploring barriers to remaining physically active: a case report of a person with multiple sclerosis.

    PubMed

    Zalewski, Kathryn

    2007-03-01

    Physical therapy intervention for those with chronic disabling conditions typically follows an episode of care approach: therapists provide services when a decrement in functional performance occurs such that individuals require intervention to return to baseline performance. Attention to the psychosocial supports required for successful transition can be unintentionally minimized when the focus of an episode of care follows a change in physical function. The purpose of this case report is to present and discuss the challenges to successful community reintegration following physical therapy intervention with an emphasis on developing independent exercise habits in management of a person with multiple sclerosis. RW, presented in this case study, is a 52-year-old man diagnosed with progressive multiple sclerosis five years before self-referral to a pro bono physical therapy clinic.

  18. Evidence for Preserved Novel Word Learning in Down Syndrome Suggests Multiple Routes to Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Mosse, Emma K.; Jarrold, Christopher

    2011-01-01

    Purpose: Three studies investigated novel word learning, some requiring phonological production, each involving between 11 and 17 individuals with Down syndrome and between 15 and 24 typically developing individuals matched for receptive vocabulary. The effects of stimulus wordlikeness and incidental procedure-based memory demands were examined to…

  19. Contextual Richness and Word Learning: Context Enhances Comprehension but Retrieval Enhances Retention

    ERIC Educational Resources Information Center

    van den Broek, Gesa S. E.; Takashima, Atsuko; Segers, Eliane; Verhoeven, Ludo

    2018-01-01

    Learning new vocabulary from context typically requires multiple encounters during which word meaning can be retrieved from memory or inferred from context. We compared the effect of memory retrieval and context inferences on short- and long-term retention in three experiments. Participants studied novel words and then practiced the words either…

  20. Genetic variance partitioning and genome-wide prediction with allele dosage information in autotetraploid potato

    USDA-ARS?s Scientific Manuscript database

    Potato breeding cycles typically last 6-7 years because of the modest seed multiplication rate and large number of traits required of new varieties. Genomic selection has the potential to increase genetic gain per unit of time, through higher accuracy and/or a shorter cycle. Both possibilities were ...

  1. A successful trap design for capturing large terrestrial snakes

    Treesearch

    Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer

    2005-01-01

    Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...

  2. Real Clients, Real Management, Real Failure: The Risks and Rewards of Service Learning

    ERIC Educational Resources Information Center

    Cyphert, Dale

    2006-01-01

    There are multiple advantages to service-learning projects across the business curriculum, but in communication classes the author has found their biggest value to be authenticity. A "real-world" assignment requires the flexible, creative integration of communication skills in an environment where, "unlike exams and other typical university…

  3. What Gene-Environment Interactions Can Tell Us about Social Competence in Typical and Atypical Populations

    ERIC Educational Resources Information Center

    Iarocci, Grace; Yager, Jodi; Elfers, Theo

    2007-01-01

    Social competence is a complex human behaviour that is likely to involve a system of genes that interacts with a myriad of environmental risk and protective factors. The search for its genetic and environmental origins and influences is equally complex and will require a multidimensional conceptualization and multiple methods and levels of…

  4. Teaching Movable "Du": Guidelines for Developing Enrhythmic Reading Skills

    ERIC Educational Resources Information Center

    Dalby, Bruce

    2015-01-01

    Reading music notation with fluency is a complex skill requiring well-founded instruction by the music teacher and diligent practice on the part of the learner. The task is complicated by the fact that there are multiple ways to notate a given rhythm. Beginning music students typically have their first encounter with enrhythmic notation when they…

  5. Cross-beam coherence of infrasonic signals at local and regional ranges.

    PubMed

    Alberts, W C Kirkpatrick; Tenney, Stephen M

    2017-11-01

    Signals collected by infrasound arrays require continuous analysis by skilled personnel or by automatic algorithms in order to extract usable information. Typical pieces of information gained by analysis of infrasonic signals collected by multiple sensor arrays are arrival time, line of bearing, amplitude, and duration. These can all be used, often with significant accuracy, to locate sources. A very important part of this chain is associating collected signals across multiple arrays. Here, a pairwise, cross-beam coherence method of signal association is described that allows rapid signal association for high signal-to-noise ratio events captured by multiple infrasound arrays at ranges exceeding 150 km. Methods, test cases, and results are described.
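A pairwise association score of the kind this abstract describes can be sketched as the peak of a normalized cross-correlation between two beamformed signals. This is a simplified illustration, not the authors' algorithm; the function name and the synthetic signals are assumptions.

```python
import numpy as np

def max_normalized_xcorr(beam_a, beam_b):
    """Peak of the normalized cross-correlation between two beamformed
    signals; values near 1 suggest both arrays captured the same event."""
    a = (beam_a - beam_a.mean()) / (beam_a.std() * len(beam_a))
    b = (beam_b - beam_b.mean()) / beam_b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

# Synthetic example: the same transient seen at two arrays with a delay.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
pulse = np.exp(-((t - 0.5) ** 2) / 0.002)
beam1 = pulse + 0.05 * rng.standard_normal(t.size)
beam2 = np.roll(pulse, 40) + 0.05 * rng.standard_normal(t.size)
score = max_normalized_xcorr(beam1, beam2)  # near 1 for associated signals
```

In practice a threshold on such a score, applied pairwise across all array beams, would decide which detections to associate.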

  6. Developmental Profiles for Multiple Object Tracking and Spatial Memory: Typically Developing Preschoolers and People with Williams Syndrome

    ERIC Educational Resources Information Center

    O'Hearn, Kirsten; Hoffman, James E.; Landau, Barbara

    2010-01-01

    The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory…

  7. Estimating habitat value using forest inventory data: the fisher (Martes pennanti) in northwestern California

    Treesearch

    William J. Zielinski; Jeffrey R. Dunk; Andrew N. Gray

    2012-01-01

    Managing forests for multiple objectives requires balancing timber and vegetation management objectives with needs of sensitive species. Especially challenging is how to retain the habitat elements for species that are typically associated with late-seral forests. We develop a regionally specific, multivariate model describing habitat selection that can be used – when...

  8. Dead simple OWL design patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osumi-Sutherland, David; Courtot, Melanie; Balhoff, James P.

    Bio-ontologies typically require multiple axes of classification to support the needs of their users. Development of such ontologies can only be made scalable and sustainable by the use of inference to automate classification via consistent patterns of axiomatization. Many bio-ontologies originating in OBO or OWL follow this approach. These patterns need to be documented in a form that requires minimal expertise to understand and edit and that can be validated and applied using any of the various programmatic approaches to working with OWL ontologies. We describe a system, Dead Simple OWL Design Patterns (DOS-DPs), which fulfills these requirements, illustrating the system with examples from the Gene Ontology. In conclusion, the rapid adoption of DOS-DPs by multiple ontology development projects illustrates both the ease of use and the pressing need for the simple design pattern system we have developed.
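The core idea of applying a documented pattern consistently can be sketched as a small template expansion: one pattern with variable slots generates many uniform class definitions. The pattern format and term names below are illustrative only, not the actual DOS-DP schema.

```python
# A "design pattern" as a template with variable slots. Filling the
# slots consistently yields a label and an OWL-style class expression.
# This format and these term names are made up for illustration.
PATTERN = {
    "name": "regulation_of_process",
    "label": "regulation of {process}",
    "equivalent_to": "'regulation' and ('regulates' some '{process}')",
}

def apply_pattern(pattern, **slots):
    """Fill every slot of the pattern, yielding label and axiom text."""
    return {
        "label": pattern["label"].format(**slots),
        "equivalent_to": pattern["equivalent_to"].format(**slots),
    }

term = apply_pattern(PATTERN, process="apoptotic process")
print(term["label"])          # regulation of apoptotic process
```

Because every generated class follows the same template, a reasoner can classify all of them automatically, which is what makes the approach scalable.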

  9. Dead simple OWL design patterns

    DOE PAGES

    Osumi-Sutherland, David; Courtot, Melanie; Balhoff, James P.; ...

    2017-06-05

    Bio-ontologies typically require multiple axes of classification to support the needs of their users. Development of such ontologies can only be made scalable and sustainable by the use of inference to automate classification via consistent patterns of axiomatization. Many bio-ontologies originating in OBO or OWL follow this approach. These patterns need to be documented in a form that requires minimal expertise to understand and edit and that can be validated and applied using any of the various programmatic approaches to working with OWL ontologies. We describe a system, Dead Simple OWL Design Patterns (DOS-DPs), which fulfills these requirements, illustrating the system with examples from the Gene Ontology. In conclusion, the rapid adoption of DOS-DPs by multiple ontology development projects illustrates both the ease of use and the pressing need for the simple design pattern system we have developed.

  10. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
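The Pareto-optimal selection that PMOGO methods rely on can be illustrated with a minimal dominance filter over candidate models. The two objectives and the candidate values here are hypothetical stand-ins for, e.g., a data misfit and a regularization term.

```python
def pareto_front(points):
    """Return the subset of points not dominated by any other point.

    A point q dominates p when q is <= p in every objective and
    strictly < in at least one (minimization convention).
    """
    front = []
    for p in points:
        dominated = any(
            all(o <= v for o, v in zip(q, p)) and any(o < v for o, v in zip(q, p))
            for q in points if q is not p
        )
        if not dominated:
            front.append(p)
    return front

# Two objectives per candidate model, e.g. (data misfit, model roughness):
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (3.0, 4.0)]
front = pareto_front(candidates)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Returning the whole front, rather than one weighted-sum minimizer, is exactly what lets the interpreter assess the trade-off between objectives, as the abstract argues.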

  11. Integrated optics to improve resolution on multiple configuration

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei

    2015-04-01

    To reveal structure and improve imaging resolution, further technical requirements are proposed in some areas for the function and development of multiple-configuration systems. To break through the diffraction limit, smart structures are recommended as the most efficient and economical method for improving system performance, especially signal-to-noise ratio and resolution. Integrated optics were considered in the selection, together with a typical multiple configuration, using the method of simulation experiments. This methodology can change the traditional design concept and expand the application space. Our calculations, using the multiple-matrix transfer method together with the correlative algorithm and full calculations, show the expected beam shaping through the system; in particular, the experimental results support our argument and will be reported in the presentation.

  12. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex

    PubMed Central

    Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno

    2016-01-01

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. 
SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014

  13. Tests of cosmic ray radiography for power industry applications

    NASA Astrophysics Data System (ADS)

    Durham, J. M.; Guardincerri, E.; Morris, C. L.; Bacon, J.; Fabritius, J.; Fellows, S.; Poulson, D.; Plaud-Ramos, K.; Renshaw, J.

    2015-06-01

    In this report, we assess muon multiple scattering tomography as a non-destructive inspection technique in several typical areas of interest to the nuclear power industry, including monitoring concrete degradation, gate valve conditions, and pipe wall thickness. This work is motivated by the need for imaging methods that do not require the licensing, training, and safety controls of x-rays, and by the need to be able to penetrate considerable overburden to examine internal details of components that are otherwise inaccessible, with minimum impact on industrial operations. In some scenarios, we find that muon tomography may be an attractive alternative to more typical measurements.

  14. Tests of cosmic ray radiography for power industry applications

    DOE PAGES

    Durham, J. M.; Guardincerri, E.; Morris, C. L.; ...

    2015-06-30

    In this report, we assess muon multiple scattering tomography as a non-destructive inspection technique in several typical areas of interest to the nuclear power industry, including monitoring concrete degradation, gate valve conditions, and pipe wall thickness. This work is motivated by the need for imaging methods that do not require the licensing, training, and safety controls of x-rays, and by the need to be able to penetrate considerable overburden to examine internal details of components that are otherwise inaccessible, with minimum impact on industrial operations. In some instances, we find that muon tomography may be an attractive alternative to more typical measurements.

  15. Universal Common Communication Substrate (UCCS) Specification; Universal Common Communication Substrate (UCCS) Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Universal Common Communication Substrate (UCCS) is a low-level communication substrate that exposes high-performance communication primitives while providing network interoperability. It is intended to support multiple upper-layer protocols (ULPs) or programming models, including SHMEM, UPC, Titanium, Co-Array Fortran, Global Arrays, MPI, GASNet, and file I/O. It provides various communication operations, including one-sided and two-sided point-to-point, collectives, and remote atomic operations. In addition to operations for ULPs, it provides an out-of-band communication channel typically required to wire up communication libraries.

  16. Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections

    NASA Astrophysics Data System (ADS)

    Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.

    2009-03-01

    As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, electric discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM-notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well-characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the responses from EDM notches and grown fatigue cracks.
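An empirical transfer function between notch and crack responses can be sketched, in the simplest case, as a least-squares linear fit over paired amplitudes. The numbers below are made-up values for illustration, not USAF measurement data, and a real transfer function need not be linear.

```python
import numpy as np

# Hypothetical paired signal amplitudes (arbitrary units) from matched
# flaw sizes; real data would come from characterized specimens.
notch_amp = np.array([0.20, 0.35, 0.50, 0.70, 0.90])  # EDM-notch responses
crack_amp = np.array([0.12, 0.22, 0.33, 0.45, 0.60])  # fatigue-crack responses

# Fit a linear transfer function: crack = a * notch + b
a, b = np.polyfit(notch_amp, crack_amp, 1)

# Estimate the crack response expected for a 0.60 notch signal, i.e.
# translate a notch-based validation reading into crack terms.
estimated_crack = a * 0.60 + b
```

With such a mapping, an inspection validated against EDM-notch gages can state the equivalent fatigue-crack sensitivity it demonstrates.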

  17. A Comparison of Three IRT Approaches to Examinee Ability Change Modeling in a Single-Group Anchor Test Design

    ERIC Educational Resources Information Center

    Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim

    2014-01-01

    Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…

  18. Multiple mutant clones in blood rarely coexist

    NASA Astrophysics Data System (ADS)

    Dingli, David; Pacheco, Jorge M.; Traulsen, Arne

    2008-02-01

    Leukemias arise due to mutations in the genome of hematopoietic (blood) cells. Hematopoiesis has a multicompartment architecture, with cells exhibiting different rates of replication and differentiation. At the root of this process, one finds a small number of stem cells, and hence the description of the mutation-selection dynamics of blood cells calls for a stochastic approach. We use stochastic dynamics to investigate to what extent acquired hematopoietic disorders are associated with mutations of single or multiple genes within developing blood cells. Our analysis considers the appearance of mutations both in the stem cell compartment as well as in more committed compartments. We conclude that in the absence of genomic instability, acquired hematopoietic disorders due to mutations in multiple genes are most likely very rare events, as multiple mutations typically require much longer development times compared to those associated with a single mutation.
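    The "much longer development times" conclusion can be illustrated with a back-of-the-envelope binomial calculation. This is not the paper's multicompartment stochastic model; the division count N and mutation rate u below are hypothetical.

```python
# Illustrative calculation (not the paper's model): if each division
# independently introduces a given mutation with probability u, then after
# N divisions P(>=1 mutation) ~ N*u while P(>=2 mutations) ~ (N*u)^2 / 2,
# typically far rarer, consistent with multi-mutation disorders needing
# much longer development times.

from math import comb

def p_at_least_k_mutations(N, u, k):
    """P(X >= k) for X ~ Binomial(N, u), via the complement of the lower tail."""
    return 1.0 - sum(comb(N, j) * u**j * (1 - u)**(N - j) for j in range(k))

N, u = 10_000, 1e-6                     # hypothetical values
p1 = p_at_least_k_mutations(N, u, 1)    # roughly N*u, about 1e-2
p2 = p_at_least_k_mutations(N, u, 2)    # roughly (N*u)**2 / 2, about 5e-5
```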

  19. Optimizing Multiple Analyte Injections in Surface Plasmon Resonance Biosensors with Analytes having Different Refractive Index Increments

    PubMed Central

    Mehand, Massinissa Si; Srinivasan, Bala; De Crescenzo, Gregory

    2015-01-01

    Surface plasmon resonance-based biosensors have been successfully applied to the study of the interactions between macromolecules and small molecular weight compounds. In an effort to increase the throughput of these SPR-based experiments, we have already proposed to inject multiple compounds simultaneously over the same surface. When specifically applied to small molecular weight compounds, such a strategy would however require prior knowledge of the refractive index increment of each compound in order to correctly interpret the recorded signal. An additional experiment is typically required to obtain this information. In this manuscript, we show that through the introduction of an additional global parameter corresponding to the ratio of the saturating signals associated with each molecule, the kinetic parameters could be identified with similar confidence intervals without any other experimentation. PMID:26515024

  20. Single-shot secure quantum network coding on butterfly network with free public communication

    NASA Astrophysics Data System (ADS)

    Owari, Masaki; Kato, Go; Hayashi, Masahito

    2018-01-01

    Quantum network coding on the butterfly network has been studied as a typical example of a quantum multicast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple unicast setting under restricted eavesdropper power. This protocol certainly transmits quantum states when there is no attack. We also show secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through public classical communication. Our protocol does not require a verification process, which ensures single-shot security.

  1. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.

    PubMed

    Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno

    2016-08-10

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. 
Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. Copyright © 2016 the authors.

  2. Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.

    PubMed

    Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X

    2015-01-01

    Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
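    The trade-off being evaluated can be sketched with a toy budget model. Nothing below comes from the paper; the per-sample costs and accuracy curves are hypothetical and only illustrate why a faster learner can beat a nominally stronger one under a fixed learning-time limit.

```python
# Hedged sketch (hypothetical numbers): under a fixed learning budget, a
# cheaper-per-sample technique sees more training data, but it only wins
# if the extra data translates into higher accuracy on unseen samples.

import math

def samples_within_budget(budget, cost_per_sample):
    """How many training samples a technique can process in the budget."""
    return int(budget // cost_per_sample)

def accuracy(n_samples, ceiling, rate):
    """Toy learning curve: approaches `ceiling` as the sample count grows."""
    return ceiling * (1 - math.exp(-rate * n_samples))

BUDGET = 1000.0  # abstract time units
fast_n = samples_within_budget(BUDGET, cost_per_sample=1.0)   # 1000 samples
slow_n = samples_within_budget(BUDGET, cost_per_sample=20.0)  # 50 samples

fast_acc = accuracy(fast_n, ceiling=0.85, rate=0.01)  # weaker model, more data
slow_acc = accuracy(slow_n, ceiling=0.95, rate=0.01)  # stronger model, less data
```

    Here the cheaper learner saturates its (lower) ceiling within the budget, while the stronger learner never gets enough data, which is the regime the methodology is designed to detect.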

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiber, J.; Max-Planck-Institut für Quantenoptik Garching, Hans-Kopfermann-Str. 1, 85748 Garching bei München; Bolton, P. R.

    An overview of progress and typical yields from intense laser-plasma acceleration of ions is presented. The evolution of laser-driven ion acceleration at relativistic intensities ushers in prospects for improved functionality and diverse applications which can represent a varied assortment of ion beam requirements. This mandates the development of the integrated laser-driven ion accelerator system, the multiple components of which are described. Relevant high field laser-plasma science and design of controlled optimum pulsed laser irradiation on target are dominant single shot (pulse) considerations with aspects that are appropriate to the emerging petawatt era. The pulse energy scaling of maximum ion energies and typical differential spectra obtained over the past two decades provide guidance for continued advancement of laser-driven energetic ion sources and their meaningful applications.

  4. Beyond R0 Maximisation: On Pathogen Evolution and Environmental Dimensions.

    PubMed

    Lion, Sébastien; Metz, Johan A J

    2018-06-01

    A widespread tenet is that evolution of pathogens maximises their basic reproduction ratio, R0. The breakdown of this principle is typically discussed as an exception. Here, we argue that a radically different stance is needed, based on evolutionarily stable strategy (ESS) arguments that take account of the 'dimension of the environmental feedback loop'. The R0 maximisation paradigm requires this feedback loop to be one-dimensional, which notably excludes pathogen diversification. By contrast, almost all realistic ecological ingredients of host-pathogen interactions (density-dependent mortality, multiple infections, limited cross-immunity, multiple transmission routes, host heterogeneity, and spatial structure) will lead to multidimensional feedbacks. Copyright © 2018 Elsevier Ltd. All rights reserved.
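    For concreteness, in a textbook SIR-type model (an example not taken from this paper) with transmission rate β, recovery rate γ, and background host mortality μ, the basic reproduction ratio is

```latex
% Standard SIR-type example, not from the paper itself:
R_0 = \frac{\beta}{\gamma + \mu}
```

    In this simplest setting the environmental feedback is effectively one-dimensional (the density of susceptible hosts), which is precisely the situation in which R0 maximisation is valid; ingredients such as multiple infections or multiple transmission routes introduce additional feedback variables and break the principle.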

  5. Towards a Methodology for Identifying Program Constraints During Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo

    1997-01-01

    Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide, and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Some of the problems that exist in defining requirements in large-scale software projects include synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provides for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.

  6. Development of an EMCCD for LIDAR applications

    NASA Astrophysics Data System (ADS)

    De Monte, B.; Bell, R. T.

    2017-11-01

    A novel detector, incorporating e2v's EMCCD (L3VisionTM) [1] technology for use in LIDAR (Light Detection And Ranging) applications has been designed, manufactured and characterised. The most critical performance aspect was the requirement to collect charge from a 120μm square detection area for a 667ns temporal sampling window, with low crosstalk between successive samples, followed by signal readout with sub-electron effective noise. Additional requirements included low dark signal, high quantum efficiency at the 355nm laser wavelength and the ability to handle bright laser echoes, without corruption of the much fainter useful signals. The detector architecture used high speed charge binning to combine signal from each sampling window into a single charge packet. This was then passed through a multiplication register (EMCCD) operating with a typical gain of 100X to a conventional charge detection circuit. The detector achieved a typical quantum efficiency of 80% and a total noise in darkness of < 0.5 electrons rms. Development of the detector was supported by ESA.

  7. Space station microscopy: Beyond the box

    NASA Technical Reports Server (NTRS)

    Hunter, N. R.; Pierson, Duane L.; Mishra, S. K.

    1993-01-01

    Microscopy aboard Space Station Freedom poses many unique challenges for in-flight investigations. Disciplines such as material processing, plant and animal research, human research, environmental monitoring, health care, and biological processing have diverse microscope requirements. The typical microscope not only does not meet the comprehensive needs of these varied users, but also tends to require excessive crew time. To assess user requirements, a comprehensive survey was conducted among investigators with experiments requiring microscopy. The survey examined requirements such as light sources, objectives, stages, focusing systems, eye pieces, video accessories, etc. The results of this survey and the application of an Intelligent Microscope Imaging System (IMIS) may address these demands for efficient microscopy service in space. The proposed IMIS can accommodate multiple users with varied requirements, operate in several modes, reduce crew time needed for experiments, and take maximum advantage of the restrictive data/instruction transmission environment on Freedom.

  8. Quality specifications for articles of botanical origin from the United States Pharmacopeia.

    PubMed

    Ma, Cuiying; Oketch-Rabah, Hellen; Kim, Nam-Cheol; Monagas, Maria; Bzhelyansky, Anton; Sarma, Nandakumara; Giancaspro, Gabriel

    2018-06-01

    In order to define appropriate quality of botanical dietary supplements, botanical drugs, and herbal medicines, the United States Pharmacopeia (USP) and the Herbal Medicines Compendium (HMC) contain science-based quality standards that include multiple interrelated tests to provide a full quality characterization for each article in terms of its identity, purity, and content. To provide a comprehensive description of the pharmacopeial tests and requirements for articles of botanical origin in the aforementioned compendia. Selective chromatographic procedures, such as high-performance liquid chromatography (HPLC) and high-performance thin-layer chromatography (HPTLC), are used as Identification tests in pharmacopeial monographs to detect species substitution or other confounders. HPLC quantitative tests are typically used to determine the content of key constituents, i.e., the total or individual amount of plant secondary metabolites that are considered bioactive constituents or analytical marker compounds. Purity specifications are typically set to limit the content of contaminants such as toxic elements, pesticides, and fungal toxins. Additional requirements highlight the importance of naming, definition, use of reference materials, and packaging/storage conditions. Technical requirements for each section of the monographs were illustrated with specific examples. Tests were performed on authentic samples using pharmacopeial reference standards. The chromatographic analytical procedures were validated to provide characteristic profiles for the identity and/or accurate determination of the content of quality markers. The multiple tests included in each monograph complement each other to provide an appropriate pharmacopeial quality characterization for the botanicals used as herbal medicines and dietary supplements. 
The monographs provide detailed specifications for identity, content of bioactive constituents or quality markers, and limits of contaminants, adulterants, and potentially toxic substances. Additional requirements such as labeling and packaging further contribute to preserve the quality of these products. Compliance with pharmacopeial specifications should be required to ensure the reliability of botanical articles used for health care purposes. Copyright © 2018. Published by Elsevier GmbH.

  9. Flexible Architecture for FPGAs in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Clark, Duane I.; Lim, Chester N.

    2012-01-01

    Commonly, field-programmable gate arrays (FPGAs) being developed in cPCI embedded systems include the bus interface in the FPGA. This complicates the development because the interface is complicated and requires a lot of development time and FPGA resources. In addition, flight qualification requires a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA, or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. 
This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application specific integrated circuit (ASIC). An application programming interface (API) further reduces the development time needed to use the interface device in a system.

  10. Health Of Americans Who Must Work Longer To Reach Social Security Retirement Age.

    PubMed

    Choi, HwaJung; Schoeni, Robert F

    2017-10-01

    To receive full Social Security benefits, Americans born after 1937 must claim those benefits at an older age than earlier birth cohorts. Additionally, proposals to improve the fiscal position of Social Security typically include increasing the age at which workers can receive full benefits. Birth cohorts required to work longer are in worse health at ages 49-60, based on multiple measures of morbidity, than cohorts who could retire earlier. Project HOPE—The People-to-People Health Foundation, Inc.

  11. Elaboration on an Integrated Architecture and Requirement Practice: Prototyping with Quality Attribute Focus

    DTIC Science & Technology

    2013-05-01

    release level prototyping as: • The R&D prototype is typically funded by the organization, rather than the client. • The work is done in an R&D...performance) with hopes that this capability could be offered to multiple clients. The clustering prototype is developed in the organization's R&D...ICSE Conference 2013) [5] A. Martini, L. Pareto, and J. Bosch, “Enablers and inhibitors for speed with reuse,” Proceedings of the 16th Software

  12. De novo establishment of wild-type song culture in the zebra finch.

    PubMed

    Fehér, Olga; Wang, Haibin; Saar, Sigal; Mitra, Partha P; Tchernichovski, Ofer

    2009-05-28

    Culture is typically viewed as consisting of traits inherited epigenetically, through social learning. However, cultural diversity has species-typical constraints, presumably of genetic origin. A celebrated, if contentious, example is whether a universal grammar constrains syntactic diversity in human languages. Oscine songbirds exhibit song learning and provide biologically tractable models of culture: members of a species show individual variation in song and geographically separated groups have local song dialects. Different species exhibit distinct song cultures, suggestive of genetic constraints. Without such constraints, innovations and copying errors should cause unbounded variation over multiple generations or geographical distance, contrary to observations. Here we report an experiment designed to determine whether wild-type song culture might emerge over multiple generations in an isolated colony founded by isolates, and, if so, how this might happen and what type of social environment is required. Zebra finch isolates, unexposed to singing males during development, produce song with characteristics that differ from the wild-type song found in laboratory or natural colonies. In tutoring lineages starting from isolate founders, we quantified alterations in song across tutoring generations in two social environments: tutor-pupil pairs in sound-isolated chambers and an isolated semi-natural colony. In both settings, juveniles imitated the isolate tutors but changed certain characteristics of the songs. These alterations accumulated over learning generations. Consequently, songs evolved towards the wild-type in three to four generations. Thus, species-typical song culture can appear de novo. Our study has parallels with language change and evolution. 
In analogy to models in quantitative genetics, we model song culture as a multigenerational phenotype partly encoded genetically in an isolate founding population, influenced by environmental variables and taking multiple generations to emerge.

  13. Expression of short hairpin RNAs using the compact architecture of retroviral microRNA genes.

    PubMed

    Burke, James M; Kincaid, Rodney P; Aloisio, Francesca; Welch, Nicole; Sullivan, Christopher S

    2017-09-29

    Short hairpin RNAs (shRNAs) are effective in generating stable repression of gene expression. RNA polymerase III (RNAP III) type III promoters (U6 or H1) are typically used to drive shRNA expression. While useful for some knockdown applications, the robust expression of U6/H1-driven shRNAs can induce toxicity and generate heterogeneous small RNAs with undesirable off-target effects. Additionally, typical U6/H1 promoters encompass the majority of the ∼270 base pairs (bp) of vector space required for shRNA expression. This can limit the efficacy and/or number of delivery vector options, particularly when delivery of multiple gene/shRNA combinations is required. Here, we develop a compact shRNA (cshRNA) expression system based on retroviral microRNA (miRNA) gene architecture that uses RNAP III type II promoters. We demonstrate that cshRNAs coded from as little as 100 bps of total coding space can precisely generate small interfering RNAs (siRNAs) that are active in the RNA-induced silencing complex (RISC). We provide an algorithm with a user-friendly interface to design cshRNAs for desired target genes. This cshRNA expression system reduces the coding space required for shRNA expression by >2-fold as compared to the typical U6/H1 promoters, which may facilitate therapeutic RNAi applications where delivery vector space is limiting. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
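    The sense-loop-antisense layout that any shRNA expression cassette encodes can be sketched as follows. This is a toy construct, not the paper's cshRNA design algorithm; the target sequence and loop below are illustrative placeholders.

```python
# Toy illustration of shRNA hairpin layout (NOT the paper's cshRNA design
# algorithm): an shRNA encodes a sense strand, a loop, and the antisense
# (reverse-complement) strand, which fold back into a hairpin. The target
# sequence and loop here are hypothetical placeholders.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """DNA reverse complement."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def shrna_hairpin(target_sense, loop="TTCAAGAGA"):
    """Return a DNA-level hairpin: sense + loop + antisense."""
    return target_sense + loop + reverse_complement(target_sense)

target = "GGACTTACGGATTCGAGCTCA"   # hypothetical 21-nt target sequence
hp = shrna_hairpin(target)         # 21 + 9 + 21 = 51 bp of hairpin coding space
```

    The promoter dominates the remaining footprint, which is why replacing a ~270 bp U6/H1 cassette with a type II promoter arrangement can shrink total coding space by more than twofold, as the abstract notes.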

  14. A trajectory design method via target practice for air-breathing hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Kong, Xue; Yang, Ming; Ning, Guodong; Wang, Songyan; Chao, Tao

    2017-11-01

    Air-breathing hypersonic vehicles exhibit strong coupling between airframe aerodynamics and the scramjet engine, and are subject to multiple constraints, such as the permissible range and variation of dynamic pressure, airflow, and fuel. On the one hand, the maneuverability requirements of the vehicle must be balanced against stable scramjet operation; on the other, changes in altitude and velocity must be coordinated. After characterizing the vehicle's climbing capability, acceleration capability, and degree of aerodynamic-propulsion coupling, this paper proposes a rapid trajectory design method based on target practice. The method aims to reduce the coupling degree: it suppresses the aircraft-engine coupling in the navigation phase and satisfies the multiple constraints so as to preserve control margin and create good conditions for control implementation. Simulations show that the method can be applied to typical flight missions such as climbing, acceleration, or both.

  15. Miniature High-Force, Long-Stroke SMA Linear Actuators

    NASA Technical Reports Server (NTRS)

    Cummin, Mark A.; Donakowski, William; Cohen, Howard

    2008-01-01

    Improved long-stroke shape-memory-alloy (SMA) linear actuators are being developed to exert significantly higher forces and operate at higher activation temperatures than do prior SMA actuators. In these actuators, long linear strokes are achieved through the principle of displacement multiplication, according to which there are multiple stages, each intermediate stage being connected by straight SMA wire segments to the next stage so that relative motions of stages are additive toward the final stage, which is the output stage. Prior SMA actuators typically include polymer housings or shells, steel or aluminum stages, and polymer pads between successive stages of displacement-multiplication assemblies. Typical output forces of prior SMA actuators range from 10 to 20 N, and typical strokes range from 0.5 to 1.5 cm. An important disadvantage of prior SMA wire actuators is relatively low cycle speed, which is related to actuation temperature as follows: The SMA wires in prior SMA actuators are typically made of a durable nickel/titanium alloy that has a shape-memory activation temperature of 80 C. An SMA wire can be heated quickly from below to above its activation temperature to obtain a stroke in one direction, but must then be allowed to cool to somewhat below its activation temperature (typically, less than or equal to 60 C in the case of an activation temperature of 80 C) to obtain a stroke in the opposite direction (return stroke). At typical ambient temperatures, cooling times are of the order of several seconds. Cooling times thus limit cycle speeds. Wires made of SMA alloys having significantly higher activation temperatures [denoted ultra-high-temperature (UHT) SMA alloys] cool to the required lower return-stroke temperatures more rapidly, making it possible to increase cycle speeds. 
The present development is motivated by a need, in some applications (especially aeronautical and space-flight applications) for SMA actuators that exert higher forces, operate at greater cycle speeds, and have stronger housings that can withstand greater externally applied forces and impacts. The main novel features of the improved SMA actuators are the following: 1) The ends of the wires are anchored in compact crimps made from short steel tubes. Each wire end is inserted in a tube, the tube is flattened between planar jaws to make the tube grip the wire, the tube is compressed to a slight U-cross-section deformation to strengthen the grip, then the crimp is welded onto one of the actuator stages. The pull strength of a typical crimp is about 125 N -- comparable to the strength of the SMA wire and greater than the typical pull strengths of wire-end anchors in prior SMA actuators. Greater pull strength is one of the keys to achievement of higher actuation force; 2) For greater strength and resistance to impacts, housings are milled from aluminum instead of being made from polymers. Each housing is made from two pieces in a clamshell configuration. The pieces are anodized to reduce sliding friction; 3) Stages are made stronger (to bear greater compression loads without excessive flexing) by making them from steel sheets thicker than those used in prior SMA actuators. The stages contain recessed pockets to accommodate the crimps. Recessing the pockets helps to keep overall dimensions as small as possible; and, 4) UHT SMA wires are used to satisfy the higher-speed/higher-temperature requirement.
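    The displacement-multiplication principle described above reduces to simple arithmetic: because the relative motions of the stages are additive toward the output stage, the output stroke is roughly the per-stage stroke times the number of stages. The numbers below are illustrative, not measured values from these actuators.

```python
# Hedged sketch of displacement multiplication: stage motions add, so the
# output stroke is approximately per-stage stroke times stage count.
# The figures here are illustrative only.

def output_stroke(per_stage_stroke_cm, n_stages):
    """Total output stroke when relative stage motions are additive."""
    return per_stage_stroke_cm * n_stages

# e.g. 0.25 cm of SMA wire contraction per stage across 4 stages:
stroke = output_stroke(0.25, 4)   # falls in the 0.5-1.5 cm range quoted above
```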

  16. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546

  17. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
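    The building block shared by both joint-model and FCS imputation is a regression-based fill-in of missing values. The sketch below imputes a single missing outcome once from a toy dataset; proper multiple imputation would additionally draw the regression parameters from their posterior and repeat the fill-in several times to propagate uncertainty.

```python
# Minimal sketch of regression-based imputation (a single fill-in, not full
# multiple imputation): fit y on x using the complete cases, then predict
# the missing y. The toy data here are invented.

def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.0, None, 10.1]   # one missing outcome

obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
a, b = fit_linear([p[0] for p in obs], [p[1] for p in obs])
y_imputed = [a * xi + b if yi is None else yi for xi, yi in zip(x, y)]
```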

  18. SMEX-Lite Modular Solar Array Architecture

    NASA Technical Reports Server (NTRS)

    Lyons, John

    2002-01-01

    For the most part, Goddard solar arrays have been custom designs that are unique to each mission. The solar panel design has been frozen prior to issuing an RFP for their procurement. There have typically been 6-9 months between RFP release and contract award, followed by an additional 24 months for performance of the contract. For Small Explorer (SMEX) missions, with three years between mission definition and launch, this has been a significant problem. The SMEX solar panels have been sufficiently small that the contract performance period has been reduced to 12-15 months. The bulk of this time is used up in the final design definition and fabrication of flight solar cell assemblies. Even so, it has been virtually impossible to have the spacecraft design at a level of maturity sufficient to freeze the solar panel geometry and release the RFP in time to avoid schedule problems with integrating the solar panels to the spacecraft. With that in mind, the SMEX-Lite project team developed a modular architecture for the assembly of solar arrays to greatly reduce the cost and schedule associated with the development of a mission-specific solar array. In the modular architecture, solar cells are fabricated onto small substrate panels. This modular panel (approximately 8.5" x 17" in this case) becomes the building block for constructing solar arrays for multiple missions with varying power requirements and geometrical arrangements. The mechanical framework that holds these modules together as a solar array is the only mission-unique design, changing in size and shape as required for each mission. There are several advantages to this approach. First, the typical solar array development cycle requires a mission-unique design, procurement, and qualification including a custom qualification panel. 
With the modular architecture, a single qualification of the SMEX-Lite modules and the associated mechanical framework in a typical configuration provided a qualification by similarity to multiple missions. It then becomes possible to procure solar array modules in advance of mission definition and respond quickly and inexpensively to a selected mission's unique requirements. The solar array modular architecture allows the procurement of solar array modules before the array geometry has been frozen. This reduces the effect of procurement lead-time on the mission integration and test flow by as much as 50%. Second, by spreading the non-recurring costs over multiple missions, the cost per unit area is also reduced. In the case of the SMEX-Lite procurement, this reduction was by about one third of the cost per unit area compared to previous SMEX mission-unique procurements. Third, the modular architecture greatly facilitates the infusion of new solar cell technologies into flight programs as these technologies become available. New solar cell technologies need only be fabricated onto a standard-sized module to be incorporated into the next available mission. The modular solar array can be flown in a mixed configuration with some new and some standard cell technologies. Since each module has its own wiring terminals, the array can be arranged as desired electrically with little impact to cost and schedule. The solar array modular architecture does impose some additional constraints on systems and subsystem engineers. First, they must work with discrete solar array modules rather than size the array to fit exactly within an available envelope. The array area is constrained to an integer multiple of the module area. Second, the modular design is optimized for space radiation and thermal environments not greatly different from a typical SMEX LEO environment. 
For example, a mission with a highly elliptical orbit (e.g., Polar, SMEX/FAST) would require thicker coverglasses to protect the solar cells from the more intense radiation environment.

  19. Development of an EMCCD for lidar applications

    NASA Astrophysics Data System (ADS)

    De Monte, B.; Bell, R. T.

    2017-11-01

    A novel detector, incorporating e2v's L3 CCD (L3Vision™) [1] technology for use in LIDAR (Light Detection And Ranging) applications has been designed, manufactured and characterised. The most critical performance aspect was the requirement to collect charge from a 120μm square detection area for a 667ns temporal sampling window, with low crosstalk between successive samples, followed by signal readout with sub-electron effective noise. Additional requirements included low dark signal, high quantum efficiency at the 355nm laser wavelength and the ability to handle bright laser echoes, without corruption of the much fainter useful signals. The detector architecture used high speed charge binning to combine signal from each sampling window into a single charge packet. This was then passed through a multiplication register (Electron Multiplying Charge Coupled Device) operating with a typical gain of 100X to a conventional charge detection circuit. The detector achieved a typical quantum efficiency of 80% and a total noise in darkness of < 0.5 electrons rms. Development of the detector was supported by ESA (European Space Agency).

  20. Emergency Dose Estimation Using Optically Stimulated Luminescence from Human Tooth Enamel.

    PubMed

    Sholom, S; Dewitt, R; Simon, S L; Bouville, A; McKeever, S W S

    2011-09-01

    Human teeth were studied for potential use as emergency Optically Stimulated Luminescence (OSL) dosimeters. By using multiple-teeth samples in combination with a custom-built sensitive OSL reader, (60)Co-equivalent doses below 0.64 Gy were measured immediately after exposure with the lowest value being 27 mGy for the most sensitive sample. The variability of OSL sensitivity, from individual to individual using multiple-teeth samples, was determined to be 53%. X-ray and beta exposure were found to produce OSL curves with the same shape that differed from those due to ultraviolet (UV) exposure; as a result, correlation was observed between OSL signals after X-ray and beta exposure and was absent if compared to OSL signals after UV exposure. Fading of the OSL signal was "typical" for most teeth, with just a few incisors showing atypical behavior. Typical fading dependences were described by a bi-exponential decay function with "fast" (decay time of around 12 min) and "slow" (decay time of about 14 h) components. OSL detection limits, based on the techniques developed to date, were found to be satisfactory from the point of view of medical triage requirements if conducted within 24 hours of the exposure.
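    The bi-exponential fading model described above can be written down directly. The two decay times (~12 min and ~14 h) come from the abstract; the component amplitudes (0.3/0.7) below are illustrative assumptions, not fitted values:

```python
import math

def osl_fading(t_min, a_fast=0.3, a_slow=0.7,
               tau_fast=12.0, tau_slow=14.0 * 60.0):
    """Normalized OSL signal remaining t_min minutes after exposure,
    modelled as a bi-exponential decay: a 'fast' component with ~12 min
    decay time and a 'slow' component with ~14 h (both in minutes)."""
    return (a_fast * math.exp(-t_min / tau_fast)
            + a_slow * math.exp(-t_min / tau_slow))
```

    After an hour the fast component is essentially gone while the slow component has barely decayed, which is one reason measurements taken within 24 hours of exposure can still support triage decisions.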

  1. A new look at photometry of the Moon

    USGS Publications Warehouse

    Goguen, J.D.; Stone, T.C.; Kieffer, H.H.; Buratti, B.J.

    2010-01-01

    We use ROLO photometry (Kieffer, H.H., Stone, T.C. [2005]. Astron. J. 129, 2887-2901) to characterize the before and after full Moon radiance variation for a typical highlands site and a typical mare site. Focusing on the phase angle range <45°, we calculate the scattering matrix and solve the radiative transfer equation for I/F. The mean single-scattering albedo is ω̄ = 0.808, the asymmetry parameter is ⟨cos θ⟩ = 0.77 and the phase function is very strongly peaked in both the forward and backward scattering directions. The fit to the observations for the highland site is excellent and multiply scattered photons contribute 80% of I/F. We conclude that either model, roughness or multiple scattering, can match the observations, but that the strongly anisotropic phase functions of realistic particles require rigorous calculation of many orders of scattering or spurious photometric roughness estimates are guaranteed. Our multiple scattering calculation is the first to combine: (1) a regolith model matched to the measured particle size distribution and index of refraction of the lunar soil, (2) a rigorous calculation of the particle phase function and solution of the radiative transfer equation, and (3) application to lunar photometry with absolute radiance calibration. © 2010 Elsevier Inc.

  2. CNS cavernous haemangioma: "popcorn" in the brain and spinal cord.

    PubMed

    Hegde, A N; Mohan, S; Lim, C C T

    2012-04-01

    Cavernous haemangiomas (CH) are relatively uncommon non-shunting vascular malformations of the central nervous system and can present with seizures or with neurological deficits due to haemorrhage. Radiologists can often suggest the diagnosis of CH based on characteristic magnetic resonance imaging (MRI) features, thus avoiding further invasive procedures such as digital subtraction angiography or surgical biopsy. Although typical MRI appearance combined with the presence of multiple focal low signal lesions on T2*-weighted images or the presence of one or more developmental venous anomalies within the brain can improve diagnostic confidence, serial imaging studies are often required if a solitary CH presents at a time when the imaging appearances have not yet matured to the typical "popcorn" appearance. Copyright © 2011 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  3. Benchmarking of a motion sensing system for medical imaging and radiotherapy

    NASA Astrophysics Data System (ADS)

    Barnes, Peter J.; Baldock, Clive; Meikle, Steven R.; Fulton, Roger R.

    2008-10-01

    We have tested the performance of an Optotrak Certus system, which optically tracks multiple markers, in both position and time. To do this, we have developed custom code which enables a range of testing protocols, and make this code available to the community. We find that the Certus' positional accuracy is very high, around 20 µm at a distance of 2.8 m. In contrast, we find that its timing accuracy is typically no better than around 5-10% for typical data rates, whether one is using an ethernet connection or a dedicated SCSI link from the system to a host computer. However, with our code we are able to attach very accurate timestamps to the data frames, and in cases where regularly-spaced data are not an absolute requirement, this will be more than adequate.

  4. Documentation of a restart option for the U.S. Geological Survey coupled Groundwater and Surface-Water Flow (GSFLOW) model

    USGS Publications Warehouse

    Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.

    2015-10-02

    The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.
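    The economics of the restart option reduce to a simple pattern: pay for the spin-up once, persist the end state, and start every subsequent simulation from it. The sketch below uses a single toy linear-reservoir storage in place of the full coupled GSFLOW state; the variable names and the pickle-based persistence are illustrative assumptions, not GSFLOW's actual restart-file format:

```python
import os
import pickle
import tempfile

def spin_up(n_years):
    """Stand-in for a multi-year spin-up run: a toy groundwater storage
    evolving toward equilibrium (100.0 here) under constant forcing."""
    state = {"groundwater_storage": 0.0}
    for _ in range(n_years * 365):
        # daily recharge of 1.0 minus outflow proportional to storage
        state["groundwater_storage"] += 1.0 - state["groundwater_storage"] / 100.0
    return state

def save_restart(state, path):
    """Persist the end-of-spin-up state once..."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_restart(path):
    """...then start every forecast or calibration run from it,
    skipping the (slow) spin-up entirely."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

    For calibration, where hundreds of runs may start from the same day, the saved cost is the spin-up length multiplied by the number of runs.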

  5. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  6. Renoduodenal Fistula After Transcatheter Embolization of Renal Angiomyolipoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheth, Rahul A.; Feldman, Adam S.; Walker, T. Gregory, E-mail: tgwalker@partners.org

    Transcatheter embolization of renal angiomyolipomas is a routinely performed, nephron-sparing procedure with a favorable safety profile. Complications from this procedure are typically minor in severity, with postembolization syndrome the most common minor complication. Abscess formation is a recognized but uncommon major complication of this procedure and is presumably due to superinfection of the infarcted tissue after arterial embolization. In this case report, we describe the formation of a renoduodenal fistula after embolization of an angiomyolipoma, complicated by intracranial abscess formation and requiring multiple percutaneous drainage procedures and eventual partial nephrectomy.

  7. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design Using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Vavrina, Matthew

    2015-01-01

    The customer (scientist or project manager) most often does not want just one point solution to the mission design problem. Instead, an exploration of a multi-objective trade space is required. For a typical main-belt asteroid mission the customer might wish to see the trade space of launch date vs. flight time vs. deliverable mass, while varying the destination asteroid, planetary flybys, launch year, etcetera. To address this question we use a multi-objective discrete outer-loop which defines many single-objective real-valued inner-loop problems.

  8. Salang Hospital: Lack of Water and Power Severely Limits Hospital Services, and Major Construction Deficiencies Raise Safety Concerns

    DTIC Science & Technology

    2014-01-01

    November 27, 2013 SIGAR 14-31-IP/Salang Hospital Page 3 • Hospital staff stated they believe the hospital's septic tank is leaking. The staff told us that, to the best of their knowledge, no leach field [2] was built for the septic tank. • The statement of work required the contractor to provide... [2] A leach field is typically installed with a septic tank for subsurface disposal of liquid waste. Multiple perforated pipes buried underground

  9. Novel quad-band terahertz metamaterial absorber based on single pattern U-shaped resonator

    NASA Astrophysics Data System (ADS)

    Wang, Ben-Xin; Wang, Gui-Zhen

    2017-03-01

    A novel quad-band terahertz metamaterial absorber using four different modes of single pattern resonator is demonstrated. Four obvious frequencies with near-perfect absorption are realized. Near-field distributions of the four modes are provided to reveal the physical picture of the multiple-band absorption. Unlike most previous quad-band absorbers that typically require four or more patterns, the designed absorber has only one resonant structure, which is simpler than previous works. The presented quad-band absorber has potential applications in biological sensing, medical imaging, and material detection.

  10. Internet Protocol-Hybrid Opto-Electronic Ring Network (IP-HORNET): A Novel Internet Protocol-Over-Wavelength Division Multiplexing (IP-Over-WDM) Multiple-Access Metropolitan Area Network (MAN)

    DTIC Science & Technology

    2003-04-01

    usage times. End users may range from today's typical users, such as home and business users, to futuristic users such as automobiles, appliances, hand... has the ability to drop a reprogrammable quantity of wavelengths into the node. The second technological requirement is a protocol that automatically... goal of the R-OADM is to have the ability to drop a reprogrammable number of wavelengths. If it is determined that at peak usage the node must receive M

  11. Multiobjective optimisation design for enterprise system operation in the case of scheduling problem with deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Hongfeng; Fu, Yaping; Huang, Min; Wang, Junwei

    2016-03-01

    The operation process design is one of the key issues in the manufacturing and service sectors. As a typical operation process, scheduling with consideration of the deteriorating effect has been widely studied; however, the current literature has only considered a single function requirement and rarely the multiple function requirements that are critical for a real-world scheduling process. In this article, two function requirements are involved in the design of a scheduling process with consideration of the deteriorating effect and then formulated into two objectives of a mathematical programming model. A novel multiobjective evolutionary algorithm is proposed to solve this model with a combination of three strategies, i.e. a multiple population scheme, a rule-based local search method and an elitist preserve strategy. To validate the proposed model and algorithm, a series of randomly generated instances are tested and the experimental results indicate that the model is effective and that the proposed algorithm achieves satisfactory performance, outperforming other state-of-the-art multiobjective evolutionary algorithms, such as nondominated sorting genetic algorithm II and multiobjective evolutionary algorithm based on decomposition, on all the test instances.

  12. Estimating Statistical Power When Making Adjustments for Multiple Tests

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2016-01-01

    In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
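    The two standard adjustments in this setting are easy to state concretely. The sketch below implements Bonferroni and Holm's step-down procedure for m tests; both control the familywise error rate, and the power lost to the adjustment is exactly what a planned power analysis must account for:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject hypothesis i when p_i <= alpha/m: the simplest adjustment
    when testing m outcomes, subgroups, time points or treatment arms."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure: controls the familywise error rate
    like Bonferroni but rejects at least as many hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one ordered p-value fails, all larger ones fail
    return reject
```

    For p-values (0.001, 0.02, 0.04) at alpha = 0.05, Bonferroni compares each against 0.05/3 ≈ 0.0167 and rejects only the first, while Holm rejects all three.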

  13. "The Math You Need" When Faculty Need It: Enhancing Quantitative Skills at a Broad Spectrum of Higher Education Institutions

    NASA Astrophysics Data System (ADS)

    Baer, E. M.; Wenner, J. M.

    2014-12-01

    Implementation of "The Math You Need, When You Need It" (TMYN) modules at a wide variety of institutions suggests a broad need for faculty support in helping students develop quantitative skills necessary in introductory geoscience courses. Designed to support students in applying geoscience relevant quantitative skills, TMYN modules are web-based, self-paced and commonly assigned outside of class. They include topics such as calculating slope, rearranging equations, and unit conversions and provide several applications of the mathematical technique to geoscience problems. Each instructor chooses modules that are applicable to the content in his/her individual course and students typically work through the module immediately before the module topic is applied in lab or class. Instructors assigned TMYN modules in their courses at more than 40 diverse institutions, including four-year colleges and universities (4YCs) that vary from non-selective to highly selective and open-door two-year colleges (2YCs). Analysis of module topics assigned, frequency of module use, and institutional characteristics reveals similarities and differences among faculty perception of required quantitative skills and incoming student ability at variably selective institutions. Results indicate that institutional type and selectivity are not correlated with module topic; that is, faculty apply similar quantitative skills in all introductory geoscience courses. For example, nearly every instructor assigned the unit conversions module, whereas very few required the trigonometry module. However, differences in number of assigned modules and faculty expectations are observed between 2YCs and 4YCs (no matter the selectivity). Two-year college faculty typically assign a higher number of modules per course and faculty at 4YCs more often combine portions of multiple modules or cover multiple mathematical concepts in a single assignment. 
These observations suggest that quantitative skills required for introductory geoscience courses are similar among all higher-education institution types. However, faculty at 4YCs may expect students to acquire and apply multiple quantitative skills in the same class/lab, whereas 2YC faculty may structure assignments to introduce and apply only one quantitative technique at a time.

  14. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    PubMed

    Aßmann, C

    2016-06-01

    Besides large efforts regarding field work, provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to hinder identifiability of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public allowing for pretesting strategies for data analysis. The synthetic databases can be established using multiple imputation tools. Given the approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated to facilitate provision of synthetic databases as it allows for capturing a wide range of statistical interdependencies. Also missing values, typically occurring within longitudinal databases for reasons of item non-response, can be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase visibility of longitudinal databases and enhance the analytical potential.

  15. Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.

    PubMed

    Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald

    2017-12-14

    The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae differ significantly from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, the much smaller size of larvae relative to the container makes the detection of water impurities unavoidable, which further disrupts larva tracking or demands very strict video imaging conditions that are rarely realistic and typically yield unreliable tracking results. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larval zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves the tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate the evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is also publicly accessible.
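    The data-association step at the heart of any multiple-object tracker can be sketched with greedy nearest-neighbour matching. This is a minimal stand-in for a tracking stage, not the paper's algorithm; the distance threshold and coordinates are illustrative:

```python
def assign_detections(tracks, detections, max_dist=20.0):
    """Greedy nearest-neighbour data association for one video frame:
    link each existing larva track (last known x, y) to its closest new
    detection, never reusing a detection and ignoring candidate links
    farther than max_dist (e.g. a water impurity drifting past)."""
    candidates = []
    for ti, (tx, ty) in enumerate(tracks):
        for di, (dx, dy) in enumerate(detections):
            dist = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if dist <= max_dist:
                candidates.append((dist, ti, di))
    candidates.sort()  # closest pairs claim their match first
    used_tracks, used_dets, links = set(), set(), {}
    for dist, ti, di in candidates:
        if ti not in used_tracks and di not in used_dets:
            links[ti] = di
            used_tracks.add(ti)
            used_dets.add(di)
    return links  # maps track index -> detection index
```

    The distance gate is what lets distant impurity detections go unmatched instead of hijacking a larva's track.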

  16. Description of the AILS Alerting Algorithm

    NASA Technical Reports Server (NTRS)

    Samanant, Paul; Jackson, Mike

    2000-01-01

    This document provides a complete description of the Airborne Information for Lateral Spacing (AILS) alerting algorithms. The purpose of AILS is to provide separation assurance between aircraft during simultaneous approaches to closely spaced parallel runways. AILS will allow independent approaches to be flown in such situations where dependent approaches were previously required (typically under Instrument Meteorological Conditions (IMC)). This is achieved by providing multiple levels of alerting for pairs of aircraft that are in parallel approach situations. This document's scope is comprehensive and covers everything from general overviews, definitions, and concepts down to algorithmic elements and equations. The entire algorithm is presented in complete and detailed pseudo-code format. This can be used by software programmers to program AILS into a software language. Additional supporting information is provided in the form of coordinate frame definitions, data requirements, calling requirements as well as all necessary pre-processing and post-processing requirements. This is important and required information for the implementation of AILS into an analysis, a simulation, or a real-time system.

  17. Plasmonic field confinement for separate absorption-multiplication in InGaAs nanopillar avalanche photodiodes

    PubMed Central

    Farrell, Alan C.; Senanayake, Pradeep; Hung, Chung-Hong; El-Howayek, Georges; Rajagopal, Abhejit; Currie, Marc; Hayat, Majeed M.; Huffaker, Diana L.

    2015-01-01

    Avalanche photodiodes (APDs) are essential components in quantum key distribution systems and active imaging systems requiring both ultrafast response time to measure photon time of flight and high gain to detect low photon flux. The internal gain of an APD can improve system signal-to-noise ratio (SNR). Excess noise is typically kept low through the selection of material with intrinsically low excess noise, using separate-absorption-multiplication (SAM) heterostructures, or taking advantage of the dead-space effect using thin multiplication regions. In this work we demonstrate the first measurement of excess noise and gain-bandwidth product in III–V nanopillars exhibiting substantially lower excess noise factors compared to bulk and gain-bandwidth products greater than 200 GHz. The nanopillar optical antenna avalanche detector (NOAAD) architecture is utilized for spatially separating the absorption region from the avalanche region via the NOA resulting in single carrier injection without the use of a traditional SAM heterostructure. PMID:26627932

  18. Successful treatment of severe sinusoidal obstruction syndrome despite multiple organ failure with defibrotide after allogeneic stem cell transplantation: a case report.

    PubMed

    Behre, Gerhard; Theurich, Sebastian; Christopeit, Maximilian; Weber, Thomas

    2009-03-10

    We report a case of sinusoidal obstruction syndrome, a typical and life-threatening complication after allogeneic stem-cell transplantation, successfully treated with defibrotide despite massive multiple organ failure. A 64-year-old Caucasian woman underwent allogeneic peripheral blood stem-cell transplantation from her human leukocyte antigen-identical sister against aggressive lymphoplasmocytoid immunocytoma. Seven days later, the patient developed severe sinusoidal obstruction syndrome according to the modified Seattle criteria. We initiated treatment with defibrotide. Despite early treatment, multiple organ failure with kidney failure requiring dialysis and ventilator-dependent lung failure aggravated the clinical course. Furthermore, central nervous dysfunction occurred as well as transfusion refractory thrombocytopenia. As highlighted in our report, defibrotide is the most promising drug in the treatment of the formerly, almost lethal, severe sinusoidal obstruction syndrome to date. This is demonstrated very clearly in our patient. She improved completely, even after renal, cerebral and respiratory failure.

  19. Successful treatment of severe sinusoidal obstruction syndrome despite multiple organ failure with defibrotide after allogeneic stem cell transplantation: a case report

    PubMed Central

    2009-01-01

    Introduction We report a case of sinusoidal obstruction syndrome, a typical and life-threatening complication after allogeneic stem-cell transplantation, successfully treated with defibrotide despite massive multiple organ failure. Case presentation A 64-year-old Caucasian woman underwent allogeneic peripheral blood stem-cell transplantation from her human leukocyte antigen-identical sister against aggressive lymphoplasmocytoid immunocytoma. Seven days later, the patient developed severe sinusoidal obstruction syndrome according to the modified Seattle criteria. We initiated treatment with defibrotide. Despite early treatment, multiple organ failure with kidney failure requiring dialysis and ventilator-dependent lung failure aggravated the clinical course. Furthermore, central nervous dysfunction occurred as well as transfusion refractory thrombocytopenia. Conclusion As highlighted in our report, defibrotide is the most promising drug in the treatment of the formerly, almost lethal, severe sinusoidal obstruction syndrome to date. This is demonstrated very clearly in our patient. She improved completely, even after renal, cerebral and respiratory failure. PMID:19830097

  20. Compact atom interferometer using single laser

    NASA Astrophysics Data System (ADS)

    Chiow, Sheng-wey; Yu, Nan

    2018-06-01

    A typical atom interferometer requires vastly different laser frequencies at different stages of operation, e.g., near-resonant light for laser cooling and far-detuned light for atom optics, such that multiple lasers are typically employed. The number of laser units constrains the achievable minimum size and power in practical devices for resource-critical environments such as space. We demonstrate a compact atom interferometer accelerometer operated by a single diode laser. This is achieved by dynamically changing the laser output frequency in the GHz range while maintaining spectroscopic reference to an atomic transition via a sideband generated by phase modulation. At the same time, a beam path sharing configuration is also demonstrated for a compact sensor head design, in which atom interferometer beams share the same path as that of the cooling beam. This beam path sharing also significantly simplifies three-axis atomic accelerometry in microgravity using a single sensor head.

  1. Multiple scattering in the remote sensing of natural surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wen-Hao; Weeks, R.; Gillespie, A.R.

    1996-07-01

    Radiosity models predict the amount of light scattered many times (multiple scattering) among scene elements in addition to light interacting with a surface only once (direct reflectance). Such models are little used in remote sensing studies because they require accurate digital terrain models and, typically, large amounts of computer time. We have developed a practical radiosity model that runs relatively quickly within suitable accuracy limits, and have used it to explore problems caused by multiple scattering in image calibration, terrain correction, and surface roughness estimation for optical images. We applied the radiosity model to real topographic surfaces sampled at two very different spatial scales: 30 m (rugged mountains) and 1 cm (cobbles and gravel on an alluvial fan). The magnitude of the multiple-scattering (MS) effect varies with solar illumination geometry, surface reflectivity, sky illumination and surface roughness. At the coarse scale, for typical illumination geometries, as much as 20% of the image can be significantly affected (>5%) by MS, which can account for as much as ~10% of the radiance from sunlit slopes, and much more for shadowed slopes, otherwise illuminated only by skylight. At the fine scale, radiance from as much as 30-40% of the scene can have a significant MS component, and the MS contribution is locally as high as ~70%, although integrating to the meter scale reduces this limit to ~10%. Because the amount of MS increases with reflectivity as well as roughness, MS effects will distort the shape of reflectance spectra as well as changing their overall amplitude. The change is proportional to surface roughness. Our results have significant implications for determining reflectivity and surface roughness in remote sensing.
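    The radiosity equation underlying such models can be solved with simple fixed-point iteration. The sketch below is a toy two-facet solver, not the authors' terrain model; the emission, reflectance and form-factor values are illustrative:

```python
def radiosity(emission, reflectance, form_factors, n_iter=50):
    """Jacobi iteration of the radiosity equation

        B_i = E_i + rho_i * sum_j F_ij * B_j,

    where E is the directly illuminated ('single interaction') radiance,
    rho the facet reflectance, and F_ij the fraction of light leaving
    facet j that reaches facet i. B collects direct radiance plus light
    scattered among facets any number of times."""
    n = len(emission)
    B = list(emission)
    for _ in range(n_iter):
        B = [emission[i] + reflectance[i] * sum(form_factors[i][j] * B[j]
                                                for j in range(n))
             for i in range(n)]
    return B
```

    A shadowed facet (E = 0) still ends up with nonzero radiance purely through multiple scattering, which is exactly the contribution the abstract quantifies for shadowed slopes.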

  2. Folding Proteins at 500 ns/hour with Work Queue.

    PubMed

    Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A

    2012-10-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods has been proposed that requires only a large number of short calculations and minimal communication between compute nodes. We considered one of the more accurate variants, called Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
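The master-worker pattern described here, many short independent simulation tasks farmed out to a pool and gathered as they complete, can be sketched with the standard library. The real system uses the Work Queue framework across heterogeneous distributed resources; this stand-in uses only Python's `concurrent.futures`, and `short_md_segment` is a hypothetical placeholder for one short molecular-dynamics run.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random

def short_md_segment(walker_id, seed):
    """Stand-in for one short MD segment: returns (walker, final 'state')."""
    rng = random.Random(seed)
    return walker_id, rng.random()

def run_ensemble(n_walkers):
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(short_md_segment, w, 1000 + w)
                   for w in range(n_walkers)]
        for fut in as_completed(futures):   # workers finish in any order
            walker, state = fut.result()
            results[walker] = state         # master aggregates / reweights here
    return results

ensemble = run_ensemble(8)
print(sorted(ensemble))
```

Because each task is short and independent, workers can join or leave the pool at any time, which is what makes the approach tolerant of the dynamic environments the abstract mentions.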

  3. Folding Proteins at 500 ns/hour with Work Queue

    PubMed Central

    Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.

    2014-01-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods has been proposed that requires only a large number of short calculations and minimal communication between compute nodes. We considered one of the more accurate variants, called Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799

  4. Accelerated Genome Engineering through Multiplexing

    PubMed Central

    Zhao, Huimin

    2015-01-01

    Throughout the biological sciences, the past fifteen years have seen a push towards the analysis and engineering of biological systems at the organism level. Given the complexity of even the simplest organisms, though, eliciting a phenotype of interest often requires genotypic manipulation of several loci. By traditional means, sequential editing of genomic targets requires a significant investment of time and labor, as the desired editing event typically occurs at a very low frequency against an overwhelming unedited background. In recent years, the development of a suite of new techniques has greatly increased editing efficiency, opening up the possibility for multiple editing events to occur in parallel. Termed multiplexed genome engineering, this approach has greatly expanded the scope of possible genome manipulations in diverse hosts, ranging from bacteria to human cells. The enabling technologies for multiplexed genome engineering include oligonucleotide-based and nuclease-based methodologies, and their application has led to the great breadth of successful examples described in this review. While many technical challenges remain, this rapidly expanding field also offers a multiplicity of opportunities. PMID:26394307

  5. Fast mix table construction for material discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, S. R.

    2013-07-01

    An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels x log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation. (authors)

  6. Fast Mix Table Construction for Material Discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Seth R

    2013-01-01

    An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels x log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
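The mix-table idea above can be sketched by reducing each voxel's composition to a canonical key and deduplicating through a lookup table. This is a guess at the general technique, not the ADVANTG implementation: a hash map gives expected O(1) lookup per voxel (the paper's O(V log M) bound reflects an ordered structure), and the rounding tolerance and voxel data are invented.

```python
def build_mix_table(voxels, ndigits=6):
    """voxels: list of {material_id: volume_fraction}; returns (table, voxel_ids)."""
    table = []       # mix table: one entry per unique mixture
    index = {}       # canonical composition key -> row in table
    voxel_ids = []
    for comp in voxels:
        # Canonical key: sorted (material, rounded fraction) pairs, so that
        # voxels with the same composition in any order hash identically.
        key = tuple(sorted((m, round(f, ndigits)) for m, f in comp.items()))
        if key not in index:
            index[key] = len(table)
            table.append(dict(key))
        voxel_ids.append(index[key])
    return table, voxel_ids

voxels = [{1: 1.0}, {1: 0.25, 2: 0.75}, {2: 0.75, 1: 0.25}, {1: 1.0}]
table, ids = build_mix_table(voxels)
print(table, ids)   # voxels 1 and 2 share one mixture, as do voxels 0 and 3
```

The table stays sparse because only compositions that actually occur are stored; every voxel carries just an integer index into it.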

  7. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Several variants of this "stack ordering" approach are tested on a problem typical of water-supply well-field design. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
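The stack-ordering heuristic lends itself to a compact sketch: evaluate a candidate design realization by realization, abort as soon as the reliability target is out of reach, and move the realizations that failed (the critical ones) to the top of the stack for the next candidate. The feasibility test below is a made-up scalar stand-in for an expensive groundwater model run, so the numbers are purely illustrative.

```python
import random

def check_reliability(design, stack, max_failures, counter):
    """True if `design` fails in at most `max_failures` realizations.
    Reorders `stack` in place so failing realizations are checked first."""
    failed, passed = [], []
    n_seen = 0
    for real in stack:
        n_seen += 1
        counter[0] += 1                          # one (expensive) model run
        (failed if design < real else passed).append(real)
        if len(failed) > max_failures:           # early exit: target unreachable
            break
    stack[:] = failed + passed + stack[n_seen:]  # critical realizations on top
    return len(failed) <= max_failures

rng = random.Random(0)
stack = [rng.uniform(0, 1) for _ in range(500)]  # 500 equally probable realizations
counter = [0]
for design in [0.2, 0.4, 0.6, 0.8]:              # candidates from a search algorithm
    check_reliability(design, stack, max_failures=10, counter=counter)
print(counter[0], "model runs vs", 4 * 500, "without ordering and early exit")
```

Because each candidate's failures bubble to the top, later candidates that are also infeasible are rejected after only a handful of runs, which is the source of the savings the abstract reports.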

  8. Bright-White Beetle Scales Optimise Multiple Scattering of Light

    NASA Astrophysics Data System (ADS)

    Burresi, Matteo; Cortese, Lorenzo; Pattelli, Lorenzo; Kolle, Mathias; Vukusic, Peter; Wiersma, Diederik S.; Steiner, Ullrich; Vignolini, Silvia

    2014-08-01

    Whiteness arises from diffuse and broadband reflection of light, typically achieved through optical scattering in randomly structured media. In contrast to structural colour due to coherent scattering, white appearance generally requires a relatively thick system comprising randomly positioned high-refractive-index scattering centres. Here, we show that the exceptionally bright white appearance of Cyphochilus and Lepidiota stigma beetles arises from a remarkably optimised anisotropy of intra-scale chitin networks, which act as dense scattering media. Using time-resolved measurements, we show that light propagating in the scales of the beetles undergoes pronounced multiple scattering that is associated with the lowest transport mean free path reported to date for low-refractive-index systems. Our light-transport investigation unveils a high level of optimisation that achieves bright whiteness in a thin, low-mass-per-unit-area, anisotropic disordered nanostructure.

  9. Tunable filters for multispectral imaging of aeronomical features

    NASA Astrophysics Data System (ADS)

    Goenka, C.; Semeter, J. L.; Noto, J.; Dahlgren, H.; Marshall, R.; Baumgardner, J.; Riccobono, J.; Migliozzi, M.

    2013-10-01

    Multispectral imaging of optical emissions in the Earth's upper atmosphere unravels vital information about dynamic phenomena in the Earth-space environment. Wavelength tunable filters allow us to accomplish this without using filter wheels or multiple imaging setups, but with identifiable caveats and trade-offs. We evaluate one such filter, a liquid crystal Fabry-Perot etalon, as a potential candidate for the next generation of imagers for aeronomy. The tunability of such a filter can be exploited in imaging features such as the 6300-6364 Å oxygen emission doublet, or studying the rotational temperature of N2+ in the 4200-4300 Å range, observations which typically require multiple instruments. We further discuss the use of this filter in an optical instrument, called the Liquid Crystal Hyperspectral Imager (LiCHI), which will be developed to make simultaneous measurements in various wavelength ranges.

  10. De novo establishment of wild-type song culture in the zebra finch

    PubMed Central

    Feher, Olga; Wang, Haibin; Saar, Sigal; Mitra, Partha P.; Tchernichovski, Ofer

    2009-01-01

    What sort of culture would evolve in an island colony of naive founders? This question cannot be studied experimentally in humans. We performed the analogous experiment using socially learned birdsong. Culture is typically viewed as consisting of traits inherited epigenetically, via social learning. However, cultural diversity has species-typical constraints1, presumably of genetic origin. A celebrated, if contentious, example is whether a universal grammar constrains syntactic diversity in human languages2. Oscine songbirds exhibit song learning and provide biologically tractable models of culture: members of a species show individual variation in song3 and geographically separated groups have local song dialects4,5. Different species exhibit distinct song cultures6,7, suggestive of genetic constraints8,9. Absent such constraints, innovations and copying errors should cause unbounded variation over multiple generations or geographical distance, contrary to observations9. We asked if wild-type song culture might emerge over multiple generations in an isolated colony founded by isolates, and if so, how this might happen and what type of social environment is required10. Zebra finch isolates, unexposed to singing males during development, produce song with characteristics that differ from the wild-type song found in laboratory11 or natural colonies. In tutoring lineages starting from isolate founders, we quantified alterations in song across tutoring generations in two social environments: tutor-pupil pairs in sound-isolated chambers and an isolated semi-natural colony. In both settings, juveniles imitated the isolate tutors, but changed certain characteristics of the songs. These alterations accumulated over learning generations. Consequently, songs evolved toward the wild-type in 3-4 generations. Thus, species-typical song culture can appear de novo. Our study has parallels with language change and evolution12,13. In analogy to models in quantitative genetics14,15, we model song culture as a multi-generational phenotype, partly encoded genetically in an isolate founding population, influenced by environmental variables, and taking multiple generations to emerge. PMID:19412161

  11. An Adaptive Memory Interface Controller for Improving Bandwidth Utilization of Hybrid and Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio

    Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees, or unstructured grids, which generate unpredictable memory accesses. These data structures are usually large but difficult to partition. These applications are mostly memory-bandwidth bound and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each element they explore. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general-purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flows, thus also exploiting the coarser-grain task parallelism of irregular applications. Our approach supports multiple, multi-ported, and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently of their execution time, to maximize memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.

  12. Predicting social and communicative ability in school-age children with autism spectrum disorder: A pilot study of the Social Attribution Task, Multiple Choice.

    PubMed

    Burger-Caplan, Rebecca; Saulnier, Celine; Jones, Warren; Klin, Ami

    2016-11-01

    The Social Attribution Task, Multiple Choice is introduced as a measure of implicit social cognitive ability in children, addressing a key challenge in quantification of social cognitive function in autism spectrum disorder, whereby individuals can often be successful in explicit social scenarios, despite marked social adaptive deficits. The 19-question Social Attribution Task, Multiple Choice, which presents ambiguous stimuli meant to elicit social attribution, was administered to children with autism spectrum disorder (N = 23) and to age-matched and verbal IQ-matched typically developing children (N = 57). The Social Attribution Task, Multiple Choice performance differed between autism spectrum disorder and typically developing groups, with typically developing children performing significantly better than children with autism spectrum disorder. The Social Attribution Task, Multiple Choice scores were positively correlated with age (r = 0.474) while being independent from verbal IQ (r = 0.236). The Social Attribution Task, Multiple Choice was strongly correlated with Vineland Adaptive Behavior Scales Communication (r = 0.464) and Socialization (r = 0.482) scores, but not with Daily Living Skills scores (r = 0.116), suggesting that the implicit social cognitive ability underlying performance on the Social Attribution Task, Multiple Choice is associated with real-life social adaptive function. © The Author(s) 2016.

  13. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, whatever the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters, and thanks to the flexibility offered by such engines they typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. Thus the definition of first guesses is even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: the evolutionary search locates a large number of optima, and eventually the global one, while the deterministic step is a branching process that exhaustively partitions the domain to obtain an extensive characterization of this complex space of solutions. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of mixed-integer nonlinear programming problems (MINLP) typical of multiple-swingby trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the effective capability of the proposed tool.

  14. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

    PubMed

    Morse, Anthony F; Cangelosi, Angelo

    2017-02-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.

  15. Agile Task Tracking Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Roger T.; Crump, Thomas Vu

    The work was created to provide a tool for the purpose of improving the management of tasks associated with Agile projects. Agile projects are typically completed in an iterative manner with many short duration tasks being performed as part of iterations. These iterations are generally referred to as sprints. The objective of this work is to create a single tool that enables sprint teams to manage all of their tasks in multiple sprints and automatically produce all standard sprint performance charts with minimum effort. The format of the printed work is designed to mimic a standard Kanban board. The work is developed as a single Excel file with worksheets capable of managing up to five concurrent sprints and up to one hundred tasks. It also includes a summary worksheet providing performance information from all active sprints. There are many commercial project management systems typically designed with features desired by larger organizations with many resources managing multiple programs and projects. The audience for this work is the small organizations and Agile project teams desiring an inexpensive, simple, user-friendly, task management tool. This work uses standard readily available software, Excel, requiring minimum data entry and automatically creating summary charts and performance data. It is formatted to print out and resemble standard flip charts and provide the visuals associated with this type of work.

  16. Rapidly Progressive Dementia

    PubMed Central

    Geschwind, Michael D.

    2016-01-01

    Purpose of Review: This article presents a practical and informative approach to the evaluation of a patient with a rapidly progressive dementia (RPD). Recent Findings: Prion diseases are the prototypical causes of RPD, but reversible causes of RPD might mimic prion disease and should always be considered in a differential diagnosis. Aside from prion diseases, the most common causes of RPD are atypical presentations of other neurodegenerative disorders, curable disorders including autoimmune encephalopathies, as well as some infections and neoplasms. Numerous recent case reports suggest dural arterial venous fistulas sometimes cause RPDs. Summary: RPDs, in which patients typically develop dementia over weeks to months, require a different differential diagnosis than the slowly progressive dementias that occur over a few years. Because of their rapid decline, patients with RPDs necessitate urgent evaluation and often require an extensive workup, typically with multiple tests being sent or performed concurrently. Jakob-Creutzfeldt disease, perhaps the prototypical RPD, is often the first diagnosis many neurologists consider when treating a patient with rapid cognitive decline. Many conditions other than prion disease, however, including numerous reversible or curable conditions, can present as an RPD. This chapter discusses some of the major etiologies for RPDs and offers an algorithm for diagnosis. PMID:27042906

  17. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Technical Reports Server (NTRS)

    Kozimore, John; Habermann, Ted; Gordon, Sean; Powers, Lindsay

    2016-01-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways.

  18. Engine out of the Chassis: Cell-Free Protein Synthesis and its Uses

    PubMed Central

    Rosenblum, Gabriel; Cooperman, Barry S.

    2013-01-01

    The translation machinery is the engine of life. Extracting the cytoplasmic milieu from a cell affords a lysate capable of producing proteins in concentrations reaching tens of micromolar. Such lysates, derivable from a variety of cells, allow the facile addition and subtraction of components that are directly or indirectly related to the translation machinery and/or the over-expressed protein. The flexible nature of such cell-free expression systems, when coupled with high throughput monitoring, can be especially suitable for protein engineering studies, allowing one to bypass multiple steps typically required using conventional in vivo protein expression. PMID:24161673

  19. Determination of Patterson group symmetry from sparse multi-crystal data sets in the presence of an indexing ambiguity.

    PubMed

    Gildea, Richard J; Winter, Graeme

    2018-05-01

    Combining X-ray diffraction data from multiple samples requires determination of the symmetry and resolution of any indexing ambiguity. For the partial data sets typical of in situ room-temperature experiments, determination of the correct symmetry is often not straightforward. The potential for indexing ambiguity in polar space groups is also an issue, although methods to resolve this are available if the true symmetry is known. Here, a method is presented to simultaneously resolve the determination of the Patterson symmetry and the indexing ambiguity for partial data sets.

  20. MULTIGRAIN: a smoothed particle hydrodynamic algorithm for multiple small dust grains and gas

    NASA Astrophysics Data System (ADS)

    Hutchison, Mark; Price, Daniel J.; Laibe, Guillaume

    2018-05-01

    We present a new algorithm, MULTIGRAIN, for modelling the dynamics of an entire population of small dust grains immersed in gas, typical of conditions that are found in molecular clouds and protoplanetary discs. The MULTIGRAIN method is more accurate than single-phase simulations because the gas experiences a backreaction from each dust phase and communicates this change to the other phases, thereby indirectly coupling the dust phases together. The MULTIGRAIN method is fast, explicit and low storage, requiring only an array of dust fractions and their derivatives defined for each resolution element.

  1. A Prototype System for Retrieval of Gene Functional Information

    PubMed Central

    Folk, Lillian C.; Patrick, Timothy B.; Pattison, James S.; Wolfinger, Russell D.; Mitchell, Joyce A.

    2003-01-01

    Microarrays allow researchers to gather data about the expression patterns of thousands of genes simultaneously. Statistical analysis can reveal which genes show statistically significant results. Making biological sense of those results requires the retrieval of functional information about the genes thus identified, typically a manual gene-by-gene retrieval of information from various on-line databases. For experiments generating thousands of genes of interest, retrieval of functional information can become a significant bottleneck. To address this issue, we are currently developing a prototype system to automate the process of retrieval of functional information from multiple on-line sources. PMID:14728346

  2. Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.

    PubMed

    Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P

    2015-02-20

    Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
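The role of zero-padding in resolution scaling can be illustrated with a single-FFT Fresnel reconstruction, where the reconstruction pixel pitch is lam*z/(N*dx): padding the hologram to a larger N scales the numerical resolution, which is the resolution-matching trick described above. This is a generic textbook formulation, not the authors' code, and all parameter values are invented.

```python
import numpy as np

def fresnel_reconstruct(hologram, dx, lam, z, pad_to=None):
    """Single-FFT Fresnel reconstruction of a square hologram.
    Zero-pads to pad_to x pad_to samples; returns (field, pixel pitch)."""
    n0 = hologram.shape[0]
    n = pad_to or n0
    pad = (n - n0) // 2
    field = np.pad(hologram.astype(complex), pad)       # zero-pad symmetrically
    y, x = np.indices(field.shape) - n / 2
    chirp = np.exp(1j * np.pi * dx**2 * (x**2 + y**2) / (lam * z))
    recon = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * chirp)))
    pitch = lam * z / (n * dx)                          # reconstruction pixel pitch
    return recon, pitch

holo = np.random.default_rng(1).random((128, 128))
_, pitch_1 = fresnel_reconstruct(holo, dx=10e-6, lam=633e-9, z=0.1)
_, pitch_2 = fresnel_reconstruct(holo, dx=10e-6, lam=633e-9, z=0.1, pad_to=256)
print(pitch_1, pitch_2)   # padding by 2x halves the reconstruction pitch
```

Because the output pitch depends only on N (for fixed dx, lam, z), two holograms recorded with dissimilar parameters can be padded to different N so their reconstructions share a common pixel pitch.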

  3. One dose per day compared to multiple doses per day of gentamicin for treatment of suspected or proven sepsis in neonates.

    PubMed

    Rao, Shripada C; Srinivasjois, Ravisha; Moon, Kwi

    2016-12-06

    Animal studies and trials in older children and adults suggest that a 'one dose per day' regimen of gentamicin is superior to a 'multiple doses per day' regimen. To compare the efficacy and safety of one dose per day compared to multiple doses per day of gentamicin in suspected or proven sepsis in neonates. Eligible studies were identified by searching the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 3) in the Cochrane Library (searched 8 April 2016), MEDLINE (1966 to 8 April 2016), Embase (1980 to 8 April 2016), and CINAHL (December 1982 to 8 April 2016). All randomised or quasi-randomised controlled trials comparing one dose per day ('once a day') compared to multiple doses per day ('multiple doses a day') of gentamicin to newborn infants. Data collection and analysis was performed according to the standards of the Cochrane Neonatal Review Group. Eleven RCTs were included (N = 574) and 28 excluded. All except one study enrolled infants of more than 32 weeks' gestation. Limited information suggested that infants in both 'once a day' as well as 'multiple doses a day' regimens showed adequate clearance of sepsis (typical RR 1.00, 95% CI 0.84 to 1.19; typical RD 0.00, 95% CI -0.19 to 0.19; 3 trials; N = 37). 'Once a day' gentamicin regimen was associated with fewer failures to attain peak level of at least 5 µg/ml (typical RR 0.22, 95% CI 0.11 to 0.47; typical RD -0.13, 95% CI -0.19 to -0.08; number needed to treat for an additional beneficial outcome (NNTB) = 8; 9 trials; N = 422); and fewer failures to achieve trough levels of 2 µg/ml or less (typical RR 0.38, 95% CI 0.27 to 0.55; typical RD -0.22, 95% CI -0.29 to -0.15; NNTB = 4; 11 trials; N = 503). 'Once a day' gentamicin achieved higher peak levels (MD 2.58, 95% CI 2.26 to 2.89; 10 trials; N = 440) and lower trough levels (MD -0.57, 95% CI -0.69 to -0.44; 10 trials; N = 440) than 'multiple doses a day' regimen. 
There was no significant difference in ototoxicity between the two groups (typical RR 1.69, 95% CI 0.18 to 16.25; typical RD 0.01, 95% CI -0.04 to 0.05; 5 trials; N = 214). Nephrotoxicity was not noted with either of the treatment regimens. Overall, the quality of evidence was considered to be moderate on GRADE analysis, given the small sample size and unclear/high risk of bias in some of the domains in a few of the included studies. There is insufficient evidence from the currently available RCTs to conclude whether a 'once a day' or a 'multiple doses a day' regimen of gentamicin is superior in treating proven neonatal sepsis. However, the data suggest that the pharmacokinetic properties of a 'once a day' gentamicin regimen are superior to those of a 'multiple doses a day' regimen in that it achieves higher peak levels while avoiding toxic trough levels. There was no difference in nephrotoxicity or auditory toxicity. Based on the assessment of pharmacokinetics, a 'once a day' regimen may be superior in treating sepsis in neonates of more than 32 weeks' gestation.
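The pooled effect measures quoted in this record (risk ratio, risk difference, NNTB) follow from simple 2x2 arithmetic. The sketch below uses hypothetical counts for illustration, not data from the included trials; the NNTB rounding convention shown is the usual ceiling of 1/|RD|.

```python
import math

# Effect measures of the kind pooled in the review: risk ratio (RR),
# risk difference (RD) and number needed to treat for an additional
# beneficial outcome (NNTB). The 2x2 counts below are hypothetical
# illustrations, not data from the included trials.

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk of the event in group A relative to group B."""
    return (events_a / n_a) / (events_b / n_b)

def risk_difference(events_a, n_a, events_b, n_b):
    """Absolute difference in event risk (group A minus group B)."""
    return events_a / n_a - events_b / n_b

def nntb(rd):
    """NNTB, conventionally rounded up from 1/|RD|."""
    return math.ceil(1 / abs(rd))

# Hypothetical example: 6/100 failures to attain the peak level on
# 'once a day' versus 27/100 on 'multiple doses a day'.
rr = risk_ratio(6, 100, 27, 100)        # ≈ 0.22
rd = risk_difference(6, 100, 27, 100)   # ≈ -0.21
print(rr, rd, nntb(rd))
```

With the pooled RD of -0.13 reported above for peak-level failures, this convention (ceil(1/0.13)) reproduces the reported NNTB of 8.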

  4. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the ongoing shift from traditional medical treatments toward personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around time compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  5. Validating a biometric authentication system: sample size requirements.

    PubMed

    Dass, Sarat C; Zhu, Yongfang; Jain, Anil K

    2006-12-01

Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
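The cost of assuming independence across within-subject acquisitions can be illustrated with the standard design-effect formula for an exchangeable correlation structure. This is a generic statistical sketch with hypothetical numbers, not the copula model of the paper.

```python
# Illustration of why correlated within-subject acquisitions carry less
# information than independent samples: under a simple exchangeable-
# correlation model, n subjects with m acquisitions each behave like
# n*m / (1 + (m - 1)*rho) independent samples (the "design effect").
# Generic sketch with hypothetical numbers, not the paper's copula model.

def effective_sample_size(n_subjects, m_per_subject, rho):
    total = n_subjects * m_per_subject
    design_effect = 1 + (m_per_subject - 1) * rho
    return total / design_effect

# 50 subjects, 4 fingerprint impressions each:
print(effective_sample_size(50, 4, 0.0))  # 200.0: independence assumption
print(effective_sample_size(50, 4, 0.5))  # 80.0: moderate correlation
print(effective_sample_size(50, 4, 1.0))  # 50.0: each subject counts once
```

The gap between 200 and 80 effective samples is exactly why sample-size requirements computed under independence can be badly optimistic.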

  6. Optimising the application of multiple-capture traps for invasive species management using spatial simulation.

    PubMed

    Warburton, Bruce; Gormley, Andrew M

    2015-01-01

Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, are used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator-checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain if it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based spatially explicit modelling approach to determine the likely maximum animal-captures per trap, given stated pest densities and the defined periods that traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single-capacity trap set at each site. When possum density was increased to moderate levels of 3/ha, a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site resulted in only an 80% kill.
Depending on the target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture traps rather than investing in fewer, but more expensive, multiple-capture traps.
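A much-simplified, hypothetical version of such an individual-based comparison can be sketched as follows. The arena size, trap layout, densities and home-range radius are all invented for illustration and do not reproduce the paper's spatially explicit model.

```python
import random

# Toy individual-based sketch of single- vs multiple-capture trapping.
# Animals are placed uniformly in a square arena; each animal is caught by
# the nearest trap with spare capacity within its home-range radius. All
# parameters are hypothetical and the model is far simpler than the
# spatially explicit simulation in the paper.

def simulate(n_animals, trap_positions, capacity, radius, seed=0):
    rng = random.Random(seed)
    remaining = {pos: capacity for pos in trap_positions}
    caught = 0
    for _ in range(n_animals):
        x, y = rng.uniform(0, 100), rng.uniform(0, 100)
        # nearest trap with spare capacity inside the home range
        best = None
        best_d2 = radius * radius
        for (tx, ty), cap in remaining.items():
            d2 = (x - tx) ** 2 + (y - ty) ** 2
            if cap > 0 and d2 <= best_d2:
                best, best_d2 = (tx, ty), d2
        if best is not None:
            remaining[best] -= 1
            caught += 1
    return caught / n_animals

# Traps on a 4 x 4 grid; compare trap capacities at a moderate density.
traps = [(x, y) for x in (12, 37, 62, 87) for y in (12, 37, 62, 87)]
single = simulate(100, traps, capacity=1, radius=25, seed=1)
multi = simulate(100, traps, capacity=6, radius=25, seed=1)
print(single, multi)
```

In this toy model, with the same animal sequence, extra capacity can never lose a capture that a single-capture trap would have made, so the multi-capture kill fraction is always at least the single-capture one; the interesting question the paper addresses is whether that gain justifies the extra cost per trap.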

  7. Huygens-Fresnel Acoustic Interference and the Development of Robust Time-Averaged Patterns from Traveling Surface Acoustic Waves

    NASA Astrophysics Data System (ADS)

    Devendran, Citsabehsan; Collins, David J.; Ai, Ye; Neild, Adrian

    2017-04-01

    Periodic pattern generation using time-averaged acoustic forces conventionally requires the intersection of counterpropagating wave fields, where suspended micro-objects in a microfluidic system collect along force potential minimizing nodal or antinodal lines. Whereas this effect typically requires either multiple transducer elements or whole channel resonance, we report the generation of scalable periodic patterning positions without either of these conditions. A single propagating surface acoustic wave interacts with the proximal channel wall to produce a knife-edge effect according to the Huygens-Fresnel principle, where these cylindrically propagating waves interfere with classical wave fronts emanating from the substrate. We simulate these conditions and describe a model that accurately predicts the lateral spacing of these positions in a robust and novel approach to acoustic patterning.

  8. Optical Survey of the Tumble Rates of Retired GEO Satellites

    DTIC Science & Technology

    2014-09-01

objects while the sun-satellite-observer geometry was most favorable; typically over a one- to two-hour period, repeated multiple times over the course of weeks. ...modeling and simulation of the optical characteristics of the satellite can help to resolve ambiguities. This process was validated on spacecraft for...

  9. [Intracranial plasmocytomas: biology, diagnosis, and treatment].

    PubMed

    Belov, A I; Gol'bin, D A

    2006-01-01

Intracranial plasmocytomas are a rare abnormality in a neurosurgeon's practice. The plasmocytomas may originate from the skull bones or soft tissue intracranial structures; they may be solitary or occur as a manifestation of multiple myeloma, this type being typical of most intracranial plasmocytomas. Progression of solitary plasmocytoma to multiple myeloma is observed in a number of cases. Preoperative diagnosis involves computed tomography or magnetic resonance imaging; angiography is desirable. The final diagnosis of plasmocytoma is chiefly based on a morphological study. Special immunohistochemical studies yield very promising results; these are likely to be of high prognostic value. Intracranial plasmocytomas require a differential approach and a meticulous examination since the presence or absence of multiple myeloma radically affects prognosis. There are well-defined predictors; however, it is apparent that craniobasal plasmocytomas show a worse prognosis than plasmocytomas of the skull vault and more commonly progress to multiple myeloma. Plasmocytomas respond to radiotherapy very well. The gold standard of treatment for plasmocytoma is its total removal and adjuvant radiation therapy; however, there is evidence for good results when it is partially removed and undergoes radiotherapy or after radical surgery without subsequent radiation. The role of chemotherapy has yet to be defined.

  10. A Neutron Multiplicity Meter for Deep Underground Muon-Induced High Energy Neutron Measurements

    NASA Astrophysics Data System (ADS)

    Hennings-Yeomans, Raul; Akerib, Daniel

    2007-04-01

The nature of dark matter is one of the most important outstanding issues in particle physics, cosmology and astrophysics. A leading hypothesis is that Weakly Interacting Massive Particles, or WIMPs, were produced in the early universe and make up the dark matter. WIMP searches must be performed underground to shield from cosmic rays, which produce secondary particles that could fake a WIMP signal. Nuclear recoils from fast neutrons in underground laboratories are one of the most challenging backgrounds to WIMP detection. We present, for the first time, the design of an instrument capable of measuring the high energy (>60 MeV) muon-induced neutron flux deep underground. The instrument is based on applying the Gd-loaded liquid-scintillator technique to measure the rate of multiple low energy neutron events produced in a Pb target and from this measurement to infer the rate of high energy neutron events. This unique signature allows both for efficient tagging of neutron multiplicity events as well as rejection of random gamma backgrounds so effectively that typical low-background techniques are not required. We will also discuss the benefits of using a neutron multiplicity meter as a component of active shielding.

  11. Capacity for visual features in mental rotation

    PubMed Central

    Xu, Yangqing; Franconeri, Steven L.

    2015-01-01

Although mental rotation is a core component of scientific reasoning, we still know little about its underlying mechanism. For instance, how much visual information can we rotate at once? Participants rotated a simple multi-part shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: only one feature could remain attached to one part. Behavioral and eyetracking data showed that this single feature remained ‘glued’ via a singular focus of attention, typically on the object’s top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of the capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science education contexts. PMID:26174781

  12. Capacity for Visual Features in Mental Rotation.

    PubMed

    Xu, Yangqing; Franconeri, Steven L

    2015-08-01

    Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts. © The Author(s) 2015.

  13. Inflatable actuators: an attempt for a common approach based on Treloar’s rubber elasticity theory

    NASA Astrophysics Data System (ADS)

    Tondu, Bertrand

    2018-01-01

Inflatable actuators are defined as hyperelastic pressure vessels whose expansion is constrained so as to generate either movements in extension or the typical contractile movements of artificial muscles. By using Treloar’s theory of rubber elasticity, applied to thin-walled pressure vessels, we propose to determine under which conditions they can be considered stable open-loop positioning actuators. Antagonism can be viewed as an extension of this open-loop stability principle, applicable to artificial muscles as well as to extensible actuators. We especially show its relevance for multiple-chamber pressurized cylinders and how Treloar’s theory can help to model their bending in a readable and relevant formal way. We also try to justify why we think that antagonism applied to extensible actuators can actually be the best way to design miniaturized multiple-degree-of-freedom rubber microactuators if, however, only limited power is required.

  14. Statistical approaches to assessing single and multiple outcome measures in dry eye therapy and diagnosis.

    PubMed

    Tomlinson, Alan; Hair, Mario; McFadyen, Angus

    2013-10-01

    Dry eye is a multifactorial disease which would require a broad spectrum of test measures in the monitoring of its treatment and diagnosis. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.
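One simple way to form a combined outcome of the kind compared in such analyses is to standardize each measure to z-scores and average them per subject. The sketch below is a generic illustration with invented values; it is not the actual combination rule assessed in the reviewed studies, and the sign convention for tear turnover is an assumption made so that higher always means worse.

```python
import statistics

# Generic sketch of a combined dry-eye outcome: standardize each measure
# to z-scores across subjects, then average the z-scores per subject.
# Variable names and values are hypothetical illustrations.

def z_scores(values):
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite(measure_columns):
    """Average the per-subject z-scores across all measures."""
    standardized = [z_scores(col) for col in measure_columns]
    n = len(measure_columns[0])
    return [sum(col[i] for col in standardized) / len(standardized)
            for i in range(n)]

# Hypothetical pre-treatment values for 4 subjects:
osmolarity = [316, 308, 322, 301]   # mOsm/L
evaporation = [42, 25, 55, 18]      # g/m2/h
tear_turnover = [-12, -9, -16, -7]  # sign-flipped so higher = worse
scores = composite([osmolarity, evaporation, tear_turnover])
print(scores)  # one combined severity score per subject
```

Because each column of z-scores is centered, the composite scores average to zero across subjects, which makes pre- versus post-intervention shifts directly comparable across measures.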

  15. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamless offloading of compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  16. The Overgrid Interface for Computational Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific to the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.

  17. Delayed matching to two-picture samples by individuals with and without disabilities: an analysis of the role of naming.

    PubMed

    Gutowski, Stanley J; Stromer, Robert

    2003-01-01

    Delayed matching to complex, two-picture samples (e.g., cat-dog) may be improved when the samples occasion differential verbal behavior. In Experiment 1, individuals with mental retardation matched picture comparisons to identical single-picture samples or to two-picture samples, one of which was identical to a comparison. Accuracy scores were typically high on single-picture trials under both simultaneous and delayed matching conditions. Scores on two-picture trials were also high during the simultaneous condition but were lower during the delay condition. However, scores improved on delayed two-picture trials when each of the sample pictures was named aloud before comparison responding. Experiment 2 replicated these results with preschoolers with typical development and a youth with mental retardation. Sample naming also improved the preschoolers' matching when the samples were pairs of spoken names and the correct comparison picture matched one of the names. Collectively, the participants could produce the verbal behavior that might have improved performance, but typically did not do so unless the procedure required it. The success of the naming intervention recommends it for improving the observing and remembering of multiple elements of complex instructional stimuli.

  18. Visual search in divided areas: dividers initially interfere with and later facilitate visual search.

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2013-02-01

    A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.

  19. Aerosol analysis with the Coastal Zone Color Scanner: a simple method for including multiple scattering effects.

    PubMed

    Gordon, H R; Castaño, D J

    1989-04-01

For the measurement of aerosols over the ocean, the total radiance L(t) backscattered from the top of a stratified atmosphere which contains both stratospheric and tropospheric aerosols of various types has been computed. A similar computation is carried out for an aerosol-free atmosphere, yielding the Rayleigh-scattered radiance L(r). The difference L(t) - L(r) is shown to be linearly related to the radiance L(as) which the aerosol would produce in the single scattering approximation. This greatly simplifies the application of aerosol models to aerosol analysis by satellite, since adding to, or in some way changing, the aerosol model requires no additional multiple scattering computations. In fact, the only multiple scattering computations required for aerosol analysis are those for determining L(r), which can be performed once and for all. The computations are explicitly applied to Band 4 of the CZCS, which, because of its high radiometric sensitivity and excellent calibration, is ideal for studying aerosols over the ocean. Specifically, the constant A in the relationship L(as) = A^(-1)(L(t) - L(r)) is given as a function of position along the scan for four typical orbital-solar position scenarios. The computations show that L(as) can be retrieved from L(t) - L(r) with an average error of no more than 5-7% except at the very edges of the scan.
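Operationally, the linear relationship means the aerosol retrieval reduces to one subtraction and one division per pixel once A and L(r) are tabulated. A minimal sketch, with hypothetical radiance values and a hypothetical value of A (not CZCS Band 4 numbers):

```python
# The linear relation L(as) = (L(t) - L(r)) / A lets the single-scattering
# aerosol radiance be retrieved from the measured total radiance and a
# precomputed Rayleigh-only radiance. The radiance values and the value
# of A below are hypothetical illustrations, not CZCS Band 4 numbers.

def aerosol_radiance(l_total, l_rayleigh, a):
    """Single-scattering aerosol radiance from total and Rayleigh radiance."""
    return (l_total - l_rayleigh) / a

l_as = aerosol_radiance(l_total=2.30, l_rayleigh=1.85, a=1.12)
print(round(l_as, 4))
```

The point of the factorization is that changing the aerosol model changes neither A nor L(r), so no new multiple-scattering runs are needed.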

  20. Prediction of lethal/effective concentration/dose in the presence of multiple auxiliary covariates and components of variance

    USGS Publications Warehouse

    Gutreuter, S.; Boogaard, M.A.

    2007-01-01

Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods for such cases lag behind. A conventional two-stage approach consists of multiple bivariate predictions of, say, the median lethal concentration, followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2′,5′-dichloro-4′-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions, and the root-mean-squared errors (RMSEs) of prediction for the exposure required to kill 50 and 99.9% of some population were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.
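The quantal-response predictors discussed here can be made concrete with a basic two-parameter logistic dose-response model. This is a sketch only: the coefficients are hypothetical, and the model omits the auxiliary covariates and variance components that motivate the paper.

```python
import math

# Sketch of how a percentile lethal concentration falls out of a quantal-
# response model. With a logistic dose-response on log10 concentration,
#   p(c) = 1 / (1 + exp(-(a + b * log10(c)))),
# the concentration killing a fraction p is obtained by inverting the
# logit. Coefficients a and b are hypothetical illustrations.

def mortality(a, b, conc):
    """Modeled mortality fraction at concentration conc."""
    return 1 / (1 + math.exp(-(a + b * math.log10(conc))))

def lethal_concentration(a, b, p):
    """Concentration at which the modeled mortality equals p."""
    logit_p = math.log(p / (1 - p))
    return 10 ** ((logit_p - a) / b)

a, b = -6.0, 4.0                         # hypothetical intercept and slope
lc50 = lethal_concentration(a, b, 0.5)   # logit(0.5) = 0, so 10**(6/4)
print(lc50, mortality(a, b, lc50))       # mortality at LC50 is 0.5
```

The two-stage procedure criticized above would fit such a curve separately for each covariate combination and then regress the resulting LC50 estimates on the covariates; the mixed-effects alternative fits everything in one model.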

  1. The effects of anterior arcuate and dorsomedial frontal cortex lesions on visually guided eye movements: 2. Paired and multiple targets.

    PubMed

    Schiller, P H; Chou, I

    2000-01-01

This study examined the effects of anterior arcuate and dorsomedial frontal cortex lesions on the execution of saccadic eye movements made to paired and multiple targets in rhesus monkeys. Identical paired targets were presented with various temporal asynchronies to determine the temporal offset required to yield equal probability choices to either target. In the intact animal, equal probability choices were typically obtained when the targets appeared simultaneously. After unilateral anterior arcuate lesions a major shift arose in the temporal offset required to obtain equal probability choices for paired targets that necessitated presenting the target in the hemifield contralateral to the lesion more than 100 ms prior to the target in the ipsilateral hemifield. This deficit was still pronounced 1 year after the lesion. Dorsomedial frontal cortex lesions produced much smaller but significant shifts in target selection that recovered more rapidly. Paired lesions produced deficits similar to those observed with anterior arcuate lesions alone. Major deficits were also observed on a multiple target temporal discrimination task after anterior arcuate but not after dorsomedial frontal cortex lesions. These results suggest that the frontal eye fields, which reside in the anterior bank of the arcuate sulcus, play an important role in temporal processing and in target selection. The dorsomedial frontal cortex, which contains the medial eye fields, plays a much less important role in the execution of these tasks.

  2. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  3. STATISTICAL METHODOLOGY FOR THE SIMULTANEOUS ANALYSIS OF MULTIPLE TYPES OF OUTCOMES IN NONLINEAR THRESHOLD MODELS.

    EPA Science Inventory

    Multiple outcomes are often measured on each experimental unit in toxicology experiments. These multiple observations typically imply the existence of correlation between endpoints, and a statistical analysis that incorporates it may result in improved inference. When both disc...

  4. Best Practice for the Administration of Daratumumab in Multiple Myeloma: Australian Myeloma Nurse Expert Opinion

    PubMed Central

    King, Tracy; Jagger, Jacqueline; Wood, Jodie; Woodrow, Carmel; Snowden, Alicia; Haines, Sally; Crosbie, Christina; Houdyk, Kristen

    2018-01-01

    Patients with multiple myeloma (MM) are typically of an advanced age and may have significant co-existing medical conditions. They have often had multiple lines of therapy and as such experience disease-related effects alongside associated treatment toxicities. Daratumumab is a monoclonal antibody approved for the treatment of MM in the relapsed/refractory setting. Clinical studies found that daratumumab showed good tolerability as a monotherapy and in combination with current standard therapies. However, the administration of daratumumab does require specific management considerations. It is administered as an intravenous infusion and infusion-related reactions (IRRs) may occur. Daratumumab also interferes with routine blood transfusion tests, giving false positives for the indirect antiglobulin test. This article highlights key nursing care considerations and practical management aspects to improve the treatment experience of patients receiving daratumumab infusions. Pretreatment aspects, patient education, pre- and post-medication, daratumumab administration, and the management of IRRs are discussed. An IRR management sheet that could be used by nurses and a patient information sheet are located at the end of this article.

  5. Statistically Comparing the Performance of Multiple Automated Raters across Multiple Items

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Boyer, Michelle

    2017-01-01

    Automated scoring systems are typically evaluated by comparing the performance of a single automated rater item-by-item to human raters. This presents a challenge when the performance of multiple raters needs to be compared across multiple items. Rankings could depend on specifics of the ranking procedure; observed differences could be due to…

  6. Galactic optical cloaking of visible baryonic matter

    NASA Astrophysics Data System (ADS)

    Smolyaninov, Igor I.

    2018-05-01

Three-dimensional gravitational cloaking is known to require exotic matter and energy sources, which makes it arguably physically unrealizable. On the other hand, typical astronomical observations are performed using one-dimensional paraxial line of sight geometries. We demonstrate that unidirectional line of sight gravitational cloaking does not require exotic matter, and it may occur in multiple natural astronomical scenarios that involve gravitational lensing. In particular, the recently discovered double gravitational lens SDSSJ0946+1006 together with the Milky Way appears to form a natural paraxial cloak. A natural question to ask, then, is how much matter in the Universe may be hidden from view by such natural gravitational cloaks? It is estimated that the total volume hidden from an observer by gravitational cloaking may reach about 1% of the total volume of the visible Universe.

  7. Optimal approaches for balancing invasive species eradication and endangered species management.

    PubMed

    Lampert, Adam; Hastings, Alan; Grosholz, Edwin D; Jardine, Sunny L; Sanchirico, James N

    2014-05-30

    Resolving conflicting ecosystem management goals-such as maintaining fisheries while conserving marine species or harvesting timber while preserving habitat-is a widely recognized challenge. Even more challenging may be conflicts between two conservation goals that are typically considered complementary. Here, we model a case where eradication of an invasive plant, hybrid Spartina, threatens the recovery of an endangered bird that uses Spartina for nesting. Achieving both goals requires restoration of native Spartina. We show that the optimal management entails less intensive treatment over longer time scales to fit with the time scale of natural processes. In contrast, both eradication and restoration, when considered separately, would optimally proceed as fast as possible. Thus, managers should simultaneously consider multiple, potentially conflicting goals, which may require flexibility in the timing of expenditures. Copyright © 2014, American Association for the Advancement of Science.

  8. Iron Containing Metal-Organic Frameworks: Structure, Synthesis, and Applications in Environmental Remediation.

    PubMed

    Liu, Xiaocheng; Zhou, Yaoyu; Zhang, Jiachao; Tang, Lin; Luo, Lin; Zeng, Guangming

    2017-06-21

    Metal-organic frameworks (MOFs) containing iron are gradually developing into an independent branch of environmental remediation, one that requires economical, effective, and low-toxicity strategies across the complete treatment procedure. In this review, recent advancements in structure, synthesis, and environmental applications are presented, with a focus on mechanisms. Novel structural designs give different iron-containing MOFs specific characteristics with potential for innovation. The synthesis of typical MILs, NH2-MILs, and MIL-based materials reveals the strengths and defects of current methods, indicating the optimal means for actual requirements. The adsorption of various contaminants through multiple interactions, as well as catalytic degradation via radicals or electron-hole pairs, is reviewed. This review suggests considerable prospects for iron-containing MOFs in the environmental field and offers a more comprehensive view of the challenges and potential improvements.

  9. Gas mixture studies for streamer operated Resistive Plate Chambers

    NASA Astrophysics Data System (ADS)

    Paoloni, A.; Longhin, A.; Mengucci, A.; Pupilli, F.; Ventura, M.

    2016-06-01

    Resistive Plate Chambers operated in streamer mode are attractive detectors for neutrino and astroparticle physics applications (such as the OPERA and ARGO experiments). These experiments are typically characterized by large-area apparatuses with no stringent requirements on detector aging or rate capability. In this paper, results of cosmic-ray tests performed on an RPC prototype using different gas mixtures are presented, the principal aim being the optimization of the TetraFluoroPropene concentration in Argon-based mixtures. The introduction of TetraFluoroPropene, besides its low Global Warming Potential, is helpful because it simplifies safety requirements by also allowing the removal of isobutane from the mixture. Results obtained with mixtures containing SF6, CF4, CO2, N2 and He are also shown, presented both in terms of detector properties (efficiency, multiple-streamer probability and time resolution) and in terms of streamer characteristics.

  10. Multi-species biofilms defined from drinking water microorganisms provide increased protection against chlorine disinfection.

    PubMed

    Schwering, Monika; Song, Joanna; Louie, Marie; Turner, Raymond J; Ceri, Howard

    2013-09-01

    A model biofilm, formed of multiple species from environmental drinking water, including opportunistic pathogens, was created to explore the tolerance of multi-species biofilms to chlorine levels typical of water-distribution systems. All species, when grown planktonically, were killed by concentrations of chlorine within the World Health Organization guidelines (0.2-5.0 mg l⁻¹). Concentrations of chlorine 1.6- to 40-fold higher were required to eradicate biofilm populations of these strains; ~70% of biofilms tested were not eradicated by 5.0 mg l⁻¹ chlorine. Pathogenic bacteria within the model multi-species biofilms showed an even more substantial increase in chlorine tolerance; on average, ~700-1100 mg l⁻¹ chlorine was required to eliminate pathogens from the biofilm, 50- to 300-fold higher than for biofilms comprising single species. Confocal laser scanning microscopy of biofilms showed distinct 3D structures and multiple cell morphologies and arrangements. Overall, this study showed a substantial increase in the chlorine tolerance of individual species under co-colonization in a multi-species biofilm, far beyond that expected from biofilm growth on its own.

  11. Joint Sparse Recovery With Semisupervised MUSIC

    NASA Astrophysics Data System (ADS)

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-01

    Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirements, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in the rank-defective problem caused by coherent or insufficiently many multiple measurement vectors (MMVs). In this letter, we provide a novel perspective on this problem by interpreting JSR as a binary classification problem with respect to atoms. MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so its performance depends heavily on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which holds that the insufficient supervised information in the training samples can be compensated from the unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC achieves much better recovery performance than other MUSIC-extended algorithms, as well as some typical greedy algorithms for JSR, in terms of iterations and recovery probability.
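
    The classical discrete MUSIC baseline that SS-MUSIC builds on can be sketched compactly: take the signal subspace of the MMVs and score each atom by how much of its energy lies in that subspace. The snippet below is an illustrative NumPy rendering of that baseline under a noiseless, full-rank assumption, not the SS-MUSIC algorithm itself; all dimensions, names, and values are made up.

```python
# Minimal MUSIC-style support recovery for the MMV model Y = A X,
# where X is row-sparse with support of size k (noiseless sketch).
import numpy as np

def music_support(A, Y, k):
    """Return indices of the k atoms best aligned with the signal subspace of Y."""
    # Signal subspace: first k left singular vectors of the measurements.
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]
    # Score each atom by the fraction of its energy inside the signal subspace.
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    scores = np.linalg.norm(Us.T @ An, axis=0)
    return set(np.argsort(scores)[-k:])

rng = np.random.default_rng(0)
m, n, k, L = 32, 128, 4, 8          # measurements, atoms, sparsity, number of MMVs
A = rng.standard_normal((m, n))
support = {3, 40, 77, 100}
X = np.zeros((n, L))
X[list(support)] = rng.standard_normal((k, L))
found = music_support(A, A @ X, k)
print(found == support)             # True in this noiseless, rank-k (non-defective) case
```

    When rank(Y) < k (coherent or too few MMVs), this baseline breaks down, which is precisely the rank-defective regime the letter targets.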

  12. Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Schmidgall, Emma R.; Chakravarthi, Srivatsa; Gould, Michael; Christen, Ian R.; Hestroffer, Karine; Hatami, Fariba; Fu, Kai-Mei C.

    2018-02-01

    Generating entangled graph states of qubits requires high entanglement rates, with efficient detection of multiple indistinguishable photons from separate qubits. Integrating defect-based qubits into photonic devices results in an enhanced photon collection efficiency, but typically at the cost of a reduced defect emission energy homogeneity. Here, we demonstrate that the reduction in defect homogeneity in an integrated device can be partially offset by electric field tuning. Using photonic device-coupled implanted nitrogen vacancy (NV) centers in a GaP-on-diamond platform, we demonstrate large field-dependent tuning ranges and partial stabilization of defect emission energies. These results address some of the challenges of chip-scale entanglement generation.

  13. Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits.

    PubMed

    Schmidgall, Emma R; Chakravarthi, Srivatsa; Gould, Michael; Christen, Ian R; Hestroffer, Karine; Hatami, Fariba; Fu, Kai-Mei C

    2018-02-14

    Generating entangled graph states of qubits requires high entanglement rates with efficient detection of multiple indistinguishable photons from separate qubits. Integrating defect-based qubits into photonic devices results in an enhanced photon collection efficiency, but typically at the cost of a reduced defect emission energy homogeneity. Here, we demonstrate that the reduction in defect homogeneity in an integrated device can be partially offset by electric field tuning. Using photonic device-coupled implanted nitrogen vacancy (NV) centers in a GaP-on-diamond platform, we demonstrate large field-dependent tuning ranges and partial stabilization of defect emission energies. These results address some of the challenges of chip-scale entanglement generation.

  14. Multifunctional integrated sensors (MFISES).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homeijer, Brian D.; Roozeboom, Clifton

    2015-10-01

    Many emerging IoT applications require sensing of multiple physical and environmental parameters for completeness of information, measurement validation, unexpected demands, and improved performance. For example, a typical outdoor weather station measures temperature, humidity, barometric pressure, light intensity, rainfall, and wind speed and direction. Existing sensor technologies do not directly address the demand for cost, size, and power reduction in multi-parameter sensing applications. Industry sensor manufacturers have developed integrated sensor systems for inertial measurement that combine accelerometers, gyroscopes, and magnetometers, but these do not address environmental sensing functionality. In the existing research literature, a technology gap exists between the functionality of MEMS sensors and the real-world applications of sensor systems.

  15. Who's your daddy? Using RADseq to explore survival and paternity in the clownfish, Amphiprion clarkii.

    NASA Astrophysics Data System (ADS)

    Stuart, M. R.; Pinsky, M. L.

    2016-02-01

    The ability to use DNA to identify individuals and their offspring has begun to revolutionize marine ecology. However, genetic mark-recapture and parentage studies typically require large numbers of individuals and associated high genotyping costs. Here, we describe a rapid and relatively low-cost protocol for genotyping non-model organisms at thousands of Single Nucleotide Polymorphisms (SNPs) using massively parallel sequencing. We apply the approach to a population of yellowtail clownfish, Amphiprion clarkii, to detect genetic mark-recaptures and parent-offspring relationships. We test multiple bioinformatic approaches and describe how this method could be applied to a wide variety of marine organisms.

  16. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  17. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features

    PubMed Central

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A.

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. PMID:26274332

  18. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features.

    PubMed

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.
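
    The hypothesized rule-learning procedure, an ordered evaluation of dimensions in which earlier dimensions receive larger decision weights, can be rendered as a toy decision rule. Everything below (dimension names, the halving weight decay, the feature coding, the threshold) is an illustrative assumption, not the study's fitted model.

```python
# Toy rendering of ordered-dimension categorization: dimensions are checked in
# a fixed order (unique-and-typical first) and earlier dimensions get larger
# decision weights.

# Ordered dimensions with a weight that decays with evaluation order.
DIMENSION_ORDER = ["unique_typical", "typical_only", "unique_only", "neither"]
WEIGHTS = {dim: 2.0 ** -i for i, dim in enumerate(DIMENSION_ORDER)}

# For each dimension, the feature value treated as diagnostic of category "A".
CATEGORY_A_FEATURES = {"unique_typical": 1, "typical_only": 1,
                       "unique_only": 1, "neither": 1}

def categorize(stimulus):
    """Sum weighted feature matches against category A; threshold at half the total weight."""
    evidence = sum(WEIGHTS[d] for d in DIMENSION_ORDER
                   if stimulus[d] == CATEGORY_A_FEATURES[d])
    return "A" if evidence >= sum(WEIGHTS.values()) / 2 else "B"

# A transfer stimulus matching only the early, heavily weighted dimension is
# still classified "A", mirroring the reported attentional priority.
probe = {"unique_typical": 1, "typical_only": 0, "unique_only": 0, "neither": 0}
print(categorize(probe))  # prints "A" (weight 1.0 >= half of total 1.875)
```

    Under this weighting, a stimulus matching every dimension except the first falls below threshold (0.875 < 0.9375), which is what distinguishes the ordered-weighted rule from an equal-weighting exemplar or prototype account.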

  19. 14 CFR Appendix D to Part 151 - Appendix D to Part 151

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: Typical Eligible Items 1. Basic types of pavement listed as eligible under § 151.77. 2. Taxiway providing... storage hangars and/or multiple-unit tee hangars. Typical Ineligible Items 1. Basic types of pavement...

  20. A View on Future Building System Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael

    This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).
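
    The coupling of continuous dynamics, discrete equations and events that the chapter highlights can be illustrated with a deliberately tiny hybrid model: a room whose temperature follows a differential equation while a hysteresis thermostat switches a heater on and off. All parameter values below are invented for illustration and are not from the chapter.

```python
# Minimal hybrid-system sketch: continuous thermal dynamics (forward Euler)
# coupled to discrete on/off thermostat events.

def simulate(t_end=3600.0, dt=1.0):
    T, heater_on = 18.0, False          # indoor temperature (deg C), heater state
    T_out, tau, gain = 5.0, 1800.0, 0.01  # outdoor temp, time constant (s), heater rate (K/s)
    history = []
    for _ in range(int(t_end / dt)):
        # Discrete event logic: hysteresis thermostat around a 20 deg C setpoint.
        if T < 19.0:
            heater_on = True
        elif T > 21.0:
            heater_on = False
        # Continuous dynamics: one Euler step of dT/dt.
        dT = (T_out - T) / tau + (gain if heater_on else 0.0)
        T += dT * dt
        history.append(T)
    return history

temps = simulate()
print(min(temps[1200:]), max(temps))    # settles into the 19-21 deg C hysteresis band
```

    Even this toy shows why heterogeneous building models need both solvers and event handling: the trajectory is smooth between switch events but non-smooth at them, so a pure ODE integrator or a pure discrete simulator alone cannot describe it.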

  1. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2015-01-01

    The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments, methods for collecting such data exist, yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multiple-camera filmography. However, due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. Several technologies, though, have not yet been applied to space operations for these explicit purposes. Two of these are 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  2. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    PubMed

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade, substantial development of computer-aided tracking technology has occurred. We therefore aimed to provide calibration equations to allow the interchangeability of the different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, <30 m², to full pitch, match). Variables provided by the systems were compared, and calibration equations (linear regression models) between each pair of systems were calculated for each field dimension. Most metrics differed between the four systems, with the magnitude of the differences dependent on both pitch size and the variable of interest. Trivial-to-small between-system differences in total distance were noted. However, high-intensity running distance (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
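
    The calibration equations reported are per-metric linear regressions mapping one system's values onto another's. The snippet below is a hypothetical sketch of that procedure; the metric, the numbers, and the fitted coefficients are synthetic, not values from the study.

```python
# Fit a per-metric calibration line y ~ intercept + slope * x between two
# tracking systems, then use it to express one system's readings in the
# other's "units".
import numpy as np

def calibrate(x, y):
    """Least-squares fit y = intercept + slope * x; returns (intercept, slope)."""
    slope, intercept = np.polyfit(x, y, 1)
    return intercept, slope

# e.g. high-intensity running distance (m) per session: GPS vs. camera system
gps    = np.array([412.0, 388.0, 520.0, 301.0, 455.0])
camera = np.array([455.0, 430.0, 575.0, 333.0, 505.0])
b0, b1 = calibrate(gps, camera)
predicted = b0 + b1 * gps           # GPS values converted into "camera units"
```

    In practice one such equation would be fitted per variable, per system pair, and per pitch size, and the typical error of the estimate reported alongside the coefficients, as the paper recommends.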

  3. A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as…

  4. The Effects of Multiple Exemplar Instruction on the Relation between Listener and Intraverbal Categorization Repertoires

    ERIC Educational Resources Information Center

    Lechago, Sarah A.; Carr, James E.; Kisamore, April N.; Grow, Laura L.

    2015-01-01

    We evaluated the effects of multiple exemplar instruction (MEI) on the relation between listener and intraverbal categorization repertoires of six typically developing preschool-age children using a nonconcurrent multiple-probe design across participants. After failing to emit intraverbal categorization responses following listener categorization…

  5. A texture-based framework for improving CFD data visualization in a virtual environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bivins, Gerrick O'Ron

    2005-01-01

    In the field of computational fluid dynamics (CFD), accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine, with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.

  6. A texture-based framework for improving CFD data visualization in a virtual environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bivins, Gerrick O'Ron

    2005-01-01

    In the field of computational fluid dynamics (CFD), accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine, with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.

  7. The use of a personal digital assistant for wireless entry of data into a database via the Internet.

    PubMed

    Fowler, D L; Hogle, N J; Martini, F; Roh, M S

    2002-01-01

    Researchers typically record data on a worksheet and at some later time enter it into the database. Wireless data entry and retrieval using a personal digital assistant (PDA) at the site of patient contact can simplify this process and improve efficiency. A surgeon and a nurse coordinator provided the content for the database. The computer programmer created the database, placed the pages of the database on the PDA screen, and researched and installed security measures. Designing the database took 6 months. Meeting Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements for patient confidentiality, satisfying institutional Information Services requirements, and ensuring connectivity required an additional 8 months before the functional system was complete. It is now possible to achieve wireless entry and retrieval of data using a PDA. Potential advantages include collection and entry of data at the same time, easy entry of data from multiple sites, and retrieval of data at the patient's bedside.

  8. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our telemedicine/PACS archive system is based on a three-tier distributed hierarchical architecture comprising magnetic disk farms, an optical jukebox, and tape jukebox subsystems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PCs) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as telemedicine, teleradiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near-online performance, as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of a small clinic (multi-gigabyte) or those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage the migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible, based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loads and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drives' high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
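
    The clustering rule described for the migration manager, writing all of a patient's studies contiguously so that retrieval needs a single tape load, can be sketched as follows. The record fields and function name are hypothetical illustrations, not identifiers from the system.

```python
# Group a migration queue by patient ID so each patient's studies are written
# contiguously to tape (one tape load per patient on retrieval).
from itertools import groupby
from operator import itemgetter

def cluster_for_migration(queue):
    """Return {patient_id: [studies]} with each patient's studies kept together."""
    # groupby requires its input sorted by the grouping key.
    ordered = sorted(queue, key=itemgetter("patient_id"))
    return {pid: [rec["study"] for rec in group]
            for pid, group in groupby(ordered, key=itemgetter("patient_id"))}

queue = [
    {"patient_id": "P07", "study": "CT-2019"},
    {"patient_id": "P03", "study": "MR-2018"},
    {"patient_id": "P07", "study": "CR-2021"},
]
print(cluster_for_migration(queue))
# prints {'P03': ['MR-2018'], 'P07': ['CT-2019', 'CR-2021']}
```

    Because the sort is stable, each patient's studies also stay in their original queue order, which keeps sequential reads on the streaming tape device in chronological order.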

  9. A Novel Electrocardiogram Segmentation Algorithm Using a Multiple Model Adaptive Estimator

    DTIC Science & Technology

    2002-03-01

    Figure 2-2: Absorption as Light Passes Through the Body [24]. Figure 2-3: Typical Pulse Oximeter Placement [19]. Blood absorption increases when the heart contracts and decreases when the heart relaxes; the pulse oximeter is typically placed on a toe, finger, or earlobe, as shown in Figure 2-3.

  10. Interface cloning and sharing: Interaction designs for conserving labor and maintaining state across 24X7 sensor operations teams

    NASA Astrophysics Data System (ADS)

    Ganter, John H.; Reeves, Paul C.

    2017-05-01

    Processing remote sensing data is the epitome of computation, yet real-time collection systems remain human-labor intensive. Operator labor is consumed by both overhead tasks (cost) and value-added production (benefit). In effect, labor is taxed and then lost. When an operator comes on-shift, they typically duplicate setup work that their teammates have already performed many times. "Pass down" of state information can be difficult if security restrictions require total logouts and blank screens: hours or even days of valuable history and context are lost. As work proceeds, duplicative effort is common because it is typically easier for operators to "do it over" rather than share what others have already done. As we begin a major new system version, we are refactoring the user interface to reduce time and motion losses. Working with users, we are developing "click budgets" to streamline interface use. One basic function is shared clipboards to reduce the use of sticky notes and verbal communication of data strings. We illustrate two additional designs to share work: window copying and window sharing. Copying (technically, shallow or deep object cloning) allows any system user to duplicate a window and its configuration for themselves or another to use. Sharing allows a window to have multiple users: shareholders with read-write functionality and viewers with read-only access. These solutions would allow windows to persist across multiple shifts, with a rotating cast of shareholders and viewers. Windows thus become durable objects of shared effort and persistent state. While these are low-tech functions, the cumulative labor savings in a 24X7 crew position (525,000 minutes/year spread over multiple individuals) would be significant. New design and implementation are never free, and these investments typically do not appeal to government acquisition officers with short-term acquisition-cost concerns rather than a long-term O and M (operations and maintenance) perspective. We share some successes in educating some officers, in collaboration with system users, about the human capital involved in operating the systems they are acquiring.
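
    The distinction the window-copying design draws between shallow and deep object cloning matters in practice: a shallow clone still shares mutable configuration with the original window, while a deep clone is fully independent. A minimal sketch, with hypothetical window fields:

```python
# Shallow vs. deep cloning of a window's configuration object.
import copy

window = {"title": "Sensor Feed 3", "filters": ["band-7", "denoise"]}

shallow = copy.copy(window)       # new dict, but "filters" list is shared
deep = copy.deepcopy(window)      # fully independent copy

window["filters"].append("georegister")
print(shallow["filters"])   # prints ['band-7', 'denoise', 'georegister'] (shared state)
print(deep["filters"])      # prints ['band-7', 'denoise'] (independent copy)
```

    A shallow clone behaves like a second handle on the same working window, which suits shareholder-style collaboration; a deep clone suits an operator who wants a private starting point that later edits will not disturb.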

  11. OPTIMIZATION OF MODERN DISPERSIVE RAMAN SPECTROMETERS FOR MOLECULAR SPECIATION OF ORGANICS IN WATER

    EPA Science Inventory

    Pesticides and industrial chemicals are typically complex organic molecules with multiple heteroatoms that can ionize in water. However, models for understanding the behavior of these chemicals in the environment typically assume that they exist exclusively as neutral species --...

  12. Lessons Learned In Developing Multiple Distributed Planning Systems for the International Space Station

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The "pieces" are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems will also be addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.

  14. Spatially extended hybrid methods: a review

    PubMed Central

    2018-01-01

    Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, to simulate at such an individual level throughout a domain and in regions where concentrations are high can be computationally expensive. Spatially coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past 20 years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone who requires a multiscale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field. PMID:29491179

  15. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
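    The weighted combination underlying the "exact" calculation can be sketched as follows. This is a hypothetical illustration, not code from the paper: the conditional mean at each period is combined across causal scenarios, weighted by their deaggregation contributions (all numbers are invented).

    ```python
    # Hypothetical sketch: an "exact" conditional mean spectrum as the
    # deaggregation-weighted combination of per-scenario conditional means,
    # instead of a single causal earthquake and GMPM.
    def conditional_mean_spectrum(scenarios):
        """scenarios: list of (weight, {period: conditional mean lnSa})."""
        periods = scenarios[0][1].keys()
        total = sum(w for w, _ in scenarios)
        return {T: sum(w * mu[T] for w, mu in scenarios) / total for T in periods}

    # Two illustrative causal scenarios; weights are their (made-up)
    # deaggregation contributions at the conditioning period.
    scenarios = [(0.7, {0.5: -1.2, 1.0: -1.8}),
                 (0.3, {0.5: -0.9, 1.0: -1.5})]
    cs_mean = conditional_mean_spectrum(scenarios)  # {0.5: ≈-1.11, 1.0: ≈-1.71}
    ```

    The exact calculation in the paper additionally combines conditional standard deviations and multiple GMPMs; the same weighted-sum structure applies.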

  16. Human Behavior, Learning, and the Developing Brain: Typical Development

    ERIC Educational Resources Information Center

    Coch, Donna, Ed.; Fischer, Kurt W., Ed.; Dawson, Geraldine, Ed.

    2010-01-01

    This volume brings together leading authorities from multiple disciplines to examine the relationship between brain development and behavior in typically developing children. Presented are innovative cross-sectional and longitudinal studies that shed light on brain-behavior connections in infancy and toddlerhood through adolescence. Chapters…

  17. Confessions of a robot lobotomist

    NASA Technical Reports Server (NTRS)

    Gottshall, R. Marc

    1994-01-01

    Since their inception, numerically controlled (NC) machining methods have been used throughout the aerospace industry to mill, drill, and turn complex shapes by sequentially stepping through motion programs. However, the recent demand for more precision, faster feeds, exotic sensors, and branching execution has existing computer numerical control (CNC) and distributed numerical control (DNC) systems running at maximum controller capacity. Typical disadvantages of current CNCs include fixed memory capacities, limited communication ports, and the use of multiple control languages. The need to tailor CNCs to meet specific applications, whether it be expanded memory, additional communications, or integrated vision, often requires replacing the original controller supplied with the commercial machine tool with a more powerful and capable system. This paper briefly describes the process and equipment requirements for new controllers and their evolutionary implementation in an aerospace environment. The process of controller retrofit with currently available machines is examined, along with several case studies and their computational and architectural implications.

  18. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  19. Arduino: a low-cost multipurpose lab equipment.

    PubMed

    D'Ausilio, Alessandro

    2012-06-01

    Typical experiments in psychological and neurophysiological settings often require the accurate control of multiple input and output signals. These signals are often generated or recorded via computer software and/or external dedicated hardware. Dedicated hardware is usually very expensive and requires additional software to control its behavior. In the present article, I present some accuracy tests on a low-cost and open-source I/O board (Arduino family) that may be useful in many lab environments. One of the strengths of Arduinos is the possibility they afford to load the experimental script on the board's memory and let it run without interfacing with computers or external software, thus granting complete independence, portability, and accuracy. Furthermore, a large community has arisen around the Arduino idea and offers many hardware add-ons and hundreds of free scripts for different projects. Accuracy tests show that Arduino boards may be an inexpensive tool for many psychological and neurophysiological labs.
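    The kind of accuracy test described, quantifying how precisely events are timed against a nominal interval, can be illustrated with a hypothetical host-side sketch (the function and numbers are mine, not from the article; an actual Arduino test would timestamp pulses on the board itself).

    ```python
    # Hypothetical sketch: summarize timing jitter from recorded event
    # timestamps versus a nominal inter-event interval, the kind of
    # statistic an I/O-board accuracy test reports.
    def jitter_stats(timestamps, nominal):
        """Return (mean abs error, max abs error) of consecutive intervals."""
        intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
        errors = [abs(i - nominal) for i in intervals]
        return sum(errors) / len(errors), max(errors)

    # Synthetic timestamps (seconds) for a 10 ms nominal period with jitter
    ts = [0.000, 0.0101, 0.0199, 0.0300, 0.0402]
    mean_err, max_err = jitter_stats(ts, 0.010)  # ≈0.15 ms mean, ≈0.2 ms max
    ```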

  20. A tool to estimate bar patterns and flow conditions in estuaries when limited data is available

    NASA Astrophysics Data System (ADS)

    Leuven, J.; Verhoeve, S.; Bruijns, A. J.; Selakovic, S.; van Dijk, W. M.; Kleinhans, M. G.

    2017-12-01

    The effects of human interventions, natural evolution of estuaries and rising sea-level on food security and flood safety are largely unknown. In addition, ecologists require quantified habitat area to study future evolution of estuaries, but they lack predictive capability of bathymetry and hydrodynamics. For example, crucial inputs required for ecological models are values of intertidal area, inundation time, peak flow velocities and salinity. While numerical models can reproduce these spatial patterns, their computational times are long and for each case a new model must be developed. Therefore, we developed a comprehensive set of relations that accurately predict the hydrodynamics and the patterns of channels and bars, using a combination of empirical relations derived from approximately 50 estuaries and theory for bars and estuaries. The first step is to predict local tidal prisms, which is the tidal prism that flows through a given cross-section. Second, the channel geometry is predicted from tidal prism and hydraulic geometry relations. Subsequently, typical flow velocities can be estimated from the channel geometry and tidal prism. Then, an ideal estuary shape is fitted to the measured planform: the deviation from the ideal shape, which is defined as the excess width, gives a measure of the locations where tidal bars form and their summed width (Leuven et al., 2017). From excess width, typical hypsometries can be predicted per cross-section. In the last step, flow velocities are calculated for the full range of occurring depths and salinity is calculated based on the estuary shape. Here, we will present a prototype tool that predicts equilibrium bar patterns and typical flow conditions. The tool is easy to use because the only input required is the estuary outline and tidal amplitude. Therefore, it can be used by policy makers and researchers from multiple disciplines, such as ecologists, geologists and hydrologists, for example for paleogeographic reconstructions.

  1. Energetic electron injections and dipolarization events in Mercury's magnetotail: Substorm dynamics

    NASA Astrophysics Data System (ADS)

    Dewey, R. M.; Slavin, J. A.; Raines, J. M.; Imber, S.; Baker, D. N.; Lawrence, D. J.

    2017-12-01

    Despite its small size, Mercury's terrestrial-like magnetosphere experiences brief, yet intense, substorm intervals characterized by features similar to those at Earth: loading/unloading of the tail lobes with open magnetic flux, dipolarization of the magnetic field at the inner edge of the plasma sheet, and, the focus of this presentation, energetic electron injection. We use the Gamma-Ray Spectrometer's high-time resolution (10 ms) energetic electron measurements to determine the relationship between substorm activity and energetic electron injections coincident with dipolarization fronts in the magnetotail. These dipolarizations were detected on the basis of their rapid (~2 s) increase in the northward component of the tail magnetic field (ΔBz ~30 nT), which typically persists for ~10 s. We estimate the typical flow channel to be ~0.15 RM, planetary convection speed of ~750 km/s, cross-tail potential drop of ~7 kV, and flux transport of ~0.08 MWb for each dipolarization event, suggesting multiple simultaneous and sequential dipolarizations are required to unload the >1 MWb of magnetic flux typically returned to the dayside magnetosphere during a substorm interval. Indeed, while we observe most dipolarization-injections to be isolated or in small chains of events (i.e., 1-3 events), intervals of sawtooth-like injections with >20 sequential events are also present. The typical separation between dipolarization-injection events is ~10 s. Magnetotail dipolarization, in addition to being a powerful source of electron acceleration, also plays a significant role in the substorm process at Mercury.
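    The flux budget quoted in the abstract fixes the minimum number of dipolarizations per substorm; as a back-of-envelope check (using only the figures quoted above):

    ```python
    import math

    # Each dipolarization transports ~0.08 MWb of magnetic flux, while a
    # substorm returns >1 MWb to the dayside magnetosphere, so a single
    # event cannot account for the unloading.
    flux_per_event_mwb = 0.08
    substorm_flux_mwb = 1.0
    events_needed = math.ceil(substorm_flux_mwb / flux_per_event_mwb)
    # at least ~13 events per substorm, consistent with the observed chains
    # and sawtooth-like sequences of >20 injections
    ```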

  2. Multi-criteria clinical decision support: A primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare.

    PubMed

    Dolan, James G

    2010-01-01

    Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine "hard data" with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP).
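    The simplest of the methods listed, a direct-weighting additive value model, can be sketched as follows. This is a hypothetical illustration; the criteria, weights, and scores are invented, not drawn from the paper.

    ```python
    # Hypothetical sketch of a direct-weighting additive value model:
    # each option's overall value is the weighted sum of its scores on
    # 0-1 normalized criteria.
    def additive_value(weights, scores):
        return sum(weights[c] * scores[c] for c in weights)

    weights  = {"efficacy": 0.5, "side_effects": 0.3, "cost": 0.2}
    option_a = {"efficacy": 0.9, "side_effects": 0.4, "cost": 0.6}  # treatment A
    option_b = {"efficacy": 0.6, "side_effects": 0.9, "cost": 0.8}  # treatment B
    best = max([("A", additive_value(weights, option_a)),
                ("B", additive_value(weights, option_b))], key=lambda t: t[1])
    ```

    With these weights B narrowly wins (0.73 vs 0.69); a patient who weights efficacy more heavily would flip the ranking, which is exactly the patient-centered trade-off such methods make explicit.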

  4. Operational Issues: What Science in Available?

    NASA Technical Reports Server (NTRS)

    Rosekind, Mark R.; Neri, David F.

    1997-01-01

    Flight/duty/rest considerations involve two highly complex factors: the diverse demands of aviation operations and human physiology (especially sleep and circadian rhythms). Several core operational issues related to fatigue have been identified, such as minimum rest requirements, duty length, flight time considerations, crossing multiple time zones, and night flying. Operations also can involve on-call reserve status and callout, delays due to unforeseen circumstances (e.g., weather, mechanical), and on-demand flights. Over 40 years of scientific research is now available to apply to these complex issues of flight/duty/rest requirements. This research involves controlled laboratory studies, simulations, and data collected during regular flight operations. When flight/duty/rest requirements are determined, they are typically based on a variety of considerations, such as operational demands, safety, and economics. Rarely has the available, state-of-the-art science been a consideration along with these other factors when determining flight/duty/rest requirements. While the complexity of the operational demand and human physiology precludes an absolute solution, there is an opportunity to take full advantage of the current scientific data. Incorporating these data in a rational operational manner into flight/duty/rest requirements can improve flight crew performance, alertness, and ultimately, aviation safety.

  5. Control law synthesis and optimization software for large order aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas

    1989-01-01

    A flexible aircraft or space structure with active control is typically modeled by a large-order state space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflection and rate limitations, as well as maintain certain stability margins, yet should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing an analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques is described.

  6. Analysis of the Quality of Parabolic Flight

    NASA Technical Reports Server (NTRS)

    Lambot, Thomas; Ord, Stephan F.

    2016-01-01

    Parabolic flight allows researchers to conduct several micro-gravity experiments, each with up to 20 seconds of micro-gravity, in the course of a single day. However, the quality of the flight environment can vary greatly over the course of a single parabola, thus affecting the experimental results. Researchers therefore require knowledge of the actual flight environment as a function of time. The NASA Flight Opportunities program (FO) has reviewed the acceleration data for over 400 parabolas and investigated the level of micro-gravity quality. It was discovered that a typical parabola can be segmented into multiple phases with different qualities and durations. The knowledge of the microgravity characteristics within the parabola will prove useful when planning an experiment.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.J.; Warner, J.A.; LeBarron, N.

    Processes that use energetic ions for large substrates require that the time-averaged erosion effects from the ion flux be uniform across the surface. A numerical model has been developed to determine this flux and its effects on surface etching of a silica/photoresist combination. The geometry of the source and substrate is very similar to a typical deposition geometry with single or planetary substrate rotation. The model was used to tune an inert ion-etching process that used single or multiple Kaufman sources to less than 3% uniformity over a 30-cm aperture after etching 8 µm of material. The same model can be used to predict uniformity for ion-assisted deposition (IAD).

  8. Proteomic analysis of formalin-fixed paraffin embedded tissue by MALDI imaging mass spectrometry

    PubMed Central

    Casadonte, Rita; Caprioli, Richard M

    2012-01-01

    Archived formalin-fixed paraffin-embedded (FFPE) tissue collections represent a valuable informational resource for proteomic studies. Multiple FFPE core biopsies can be assembled in a single block to form tissue microarrays (TMAs). We describe a protocol for analyzing protein in FFPE-TMAs using matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS). The workflow incorporates an antigen retrieval step following deparaffinization, in situ trypsin digestion, matrix application and then mass spectrometry signal acquisition. The direct analysis of FFPE-TMA tissue using IMS allows analysis of multiple tissue samples in a single experiment without extraction and purification of proteins. The advantages of high speed and throughput, easy sample handling and excellent reproducibility make this technology a favorable approach for the proteomic analysis of clinical research cohorts with large sample numbers. For example, TMA analysis of 300 FFPE cores would typically require 6 h of total time through data acquisition, not including data analysis. PMID:22011652

  9. Computational Precision of Mental Inference as Critical Source of Human Choice Suboptimality.

    PubMed

    Drugowitsch, Jan; Wyart, Valentin; Devauchelle, Anne-Dominique; Koechlin, Etienne

    2016-12-21

    Making decisions in uncertain environments often requires combining multiple pieces of ambiguous information from external cues. In such conditions, human choices resemble optimal Bayesian inference, but typically show a large suboptimal variability whose origin remains poorly understood. In particular, this choice suboptimality might arise from imperfections in mental inference rather than in peripheral stages, such as sensory processing and response selection. Here, we dissociate these three sources of suboptimality in human choices based on combining multiple ambiguous cues. Using a novel quantitative approach for identifying the origin and structure of choice variability, we show that imperfections in inference alone cause a dominant fraction of suboptimal choices. Furthermore, two-thirds of this suboptimality appear to derive from the limited precision of neural computations implementing inference rather than from systematic deviations from Bayes-optimal inference. These findings set an upper bound on the accuracy and ultimate predictability of human choices in uncertain environments. Copyright © 2016 Elsevier Inc. All rights reserved.
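    The abstract's framing can be sketched with a toy model (hypothetical, not the paper's code): optimal combination of ambiguous Gaussian cues is a precision-weighted average, and limited computational precision can be modeled as noise injected into that inference step.

    ```python
    import random

    # Hypothetical sketch: Bayes-optimal combination of Gaussian cues,
    # plus a variant corrupted by noise in the inference computation itself.
    def combine_cues(cues):
        """cues: list of (mean, sigma). Returns the precision-weighted mean."""
        weights = [1.0 / s**2 for _, s in cues]
        return sum(w * m for w, (m, _) in zip(weights, cues)) / sum(weights)

    def noisy_combine(cues, inference_sd, rng):
        """Same computation with Gaussian inference noise of std inference_sd."""
        return combine_cues(cues) + rng.gauss(0.0, inference_sd)

    cues = [(1.0, 0.5), (2.0, 1.0)]
    optimal = combine_cues(cues)  # (4*1 + 1*2) / 5 = 1.2
    rng = random.Random(0)
    choices = [noisy_combine(cues, 0.3, rng) for _ in range(1000)]
    # noisy "choices" scatter around the optimal estimate, producing
    # suboptimal variability without any systematic bias
    ```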

  10. Embedding Human Expert Cognition Into Autonomous UAS Trajectory Planning.

    PubMed

    Narayan, Pritesh; Meyer, Patrick; Campbell, Duncan

    2013-04-01

    This paper presents a new approach for the inclusion of human expert cognition into autonomous trajectory planning for unmanned aerial systems (UASs) operating in low-altitude environments. During typical UAS operations, multiple objectives may exist; therefore, the use of multicriteria decision aid techniques can potentially allow for convergence to trajectory solutions which better reflect overall mission requirements. In that context, additive multiattribute value theory has been applied to optimize trajectories with respect to multiple objectives. A graphical user interface was developed to allow for knowledge capture from a human decision maker (HDM) through simulated decision scenarios. The expert decision data gathered are converted into value functions and corresponding criteria weightings using utility additive theory. The inclusion of preferences elicited from HDM data within an automated decision system allows for the generation of trajectories which more closely represent the candidate HDM decision preferences. This approach has been demonstrated in this paper through simulation using a fixed-wing UAS operating in low-altitude environments.

  11. Nuptial feeding of spermless spermatophores in the Hawaiian swordtail cricket, Laupala pacifica (Gryllidae: Trigonidiinae)

    NASA Astrophysics Data System (ADS)

    Decarvalho, Tagide N.; Shaw, Kerry L.

    2005-10-01

    Crickets in the genus Laupala (subfamily Trigonidiinae) have an elaborate courtship system, defined by a highly ritualized serial transfer of multiple spermatophores. Males produce multiple “micro” spermatophores followed by a final “macro” spermatophore during a single mating bout. Remarkably, the microspermatophores of L. cerasina, the first species whose mating system was studied in detail, were discovered to be spermless. However, in a study of another species, L. pacifica, sperm transfer was reported after every copulation suggesting that L. pacifica microspermatophores contain sperm. The presence or absence of sperm in the microspermatophore has important implications for the evolution of this exaggerated courtship system and the origin of nuptial gifts. In this study, we systematically examined L. pacifica spermatophore contents for sperm using a fluorescent nuclear stain. We detected sperm only in macrospermatophores. This finding suggests that spermless microspermatophores are typical for Laupala; thus, to determine the origin of this highly modified phenotype will require comparative analyses with closely related outgroups that exhibit less exaggerated courtship systems.

  12. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    NASA Astrophysics Data System (ADS)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
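    The core registration idea can be sketched with a hypothetical example (function and coordinates invented): if the same beads are matched across two overlapping tiles, the tile-to-tile shift is the difference of their centroids.

    ```python
    # Hypothetical sketch of fiducial-based registration: estimate the
    # translation between two tiles from matched fluorescent-bead positions.
    def estimate_shift(beads_ref, beads_mov):
        """Matched (x, y) bead positions in two tiles -> (dx, dy) shift."""
        n = len(beads_ref)
        cx_r = sum(x for x, _ in beads_ref) / n
        cy_r = sum(y for _, y in beads_ref) / n
        cx_m = sum(x for x, _ in beads_mov) / n
        cy_m = sum(y for _, y in beads_mov) / n
        return cx_m - cx_r, cy_m - cy_r

    ref = [(10.0, 12.0), (40.0, 18.0), (25.0, 44.0)]
    mov = [(x + 5.0, y - 2.5) for x, y in ref]   # same beads, shifted tile
    dx, dy = estimate_shift(ref, mov)            # ≈ (5.0, -2.5)
    ```

    Comparing inter-bead distances across tiles would additionally reveal rotation or scale errors, which centroid differencing alone cannot detect.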

  13. Ambulatory rehabilitation in multiple sclerosis.

    PubMed

    Kelleher, Kevin John; Spence, William; Solomonidis, Stephan; Apatsidis, Dimitrios

    2009-01-01

    Multiple sclerosis (MS) is an autoimmunogenic disease involving demyelination within the central nervous system. Many of the typical impairments associated with MS can affect gait patterns. With walking ability being one of the most decisive factors when assessing quality of life and independent living, this review focuses on matters considered significant for maintaining and supporting ambulation. This article attempts to describe current research and the interventions available to the caring healthcare professional, and to review present trends in research to further these available options. Evidence-based rehabilitation techniques are of interest in the care of patients with MS, given the various existing modalities of treatment. In this review, we summarise the primary factors affecting ambulation and highlight available treatment methods. We review studies that have attempted to characterise gait deficits within this patient population. Finally, as ambulatory rehabilitation requires multidisciplinary interventions, we examine approaches which may serve to support and maintain ambulation within this patient group for as long as possible.

  14. Interprofessional education about patient decision support in specialty care.

    PubMed

    Politi, Mary C; Pieterse, Arwen H; Truant, Tracy; Borkhoff, Cornelia; Jha, Vikram; Kuhl, Laura; Nicolai, Jennifer; Goss, Claudia

    2011-11-01

    Specialty care involves services provided by health professionals who focus on treating diseases affecting one body system. In contrast to primary care - aimed at providing continuous, comprehensive care - specialty care often involves intermittent episodes of care focused around specific medical conditions. In addition, it typically includes multiple providers who have unique areas of expertise that are important in supporting patients' care. Interprofessional care involves multiple professionals from different disciplines collaborating to provide an integrated approach to patient care. For patients to experience continuity of care across interprofessional providers, providers need to communicate and maintain a shared sense of responsibility to their patients. In this article, we describe challenges inherent in providing interprofessional patient decision support in specialty care. We propose ways for providers to engage in interprofessional decision support and discuss promising approaches to teaching interprofessional decision support to specialty care providers. Additional evaluation and empirical research are required before further recommendations can be made about education for interprofessional decision support in specialty care.

  15. A coupling of homology modeling with multiple molecular dynamics simulation for identifying representative conformation of GPCR structures: a case study on human bombesin receptor subtype-3.

    PubMed

    Nowroozi, Amin; Shahlaei, Mohsen

    2017-02-01

    In this study, a computational pipeline was devised to overcome homology modeling (HM) bottlenecks. The coupling of HM with molecular dynamics (MD) simulation is useful in that it tackles the sampling deficiency of dynamics simulations by providing good-quality initial guesses for the native structure. HM also relaxes the severe requirement on force fields to explore the huge conformational space of protein structures. In this study, the interaction between the human bombesin receptor subtype-3 and MK-5046 was investigated by integrating HM, molecular docking, and MD simulations. To improve conformational sampling in typical MD simulations of GPCRs, as in other biomolecules, multiple trajectories with different initial conditions can be employed rather than a single long trajectory. Multiple MD simulations of human bombesin receptor subtype-3 with different initial atomic velocities were applied to sample conformations in the vicinity of the structure generated by HM. The conformational space distribution of the backbone atoms across replicates was analyzed using principal components analysis. As a result, the averages of structural and dynamic properties over the twenty-one trajectories differ significantly from those obtained from individual trajectories.
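Principal components analysis of replicate-trajectory conformations, as described above, can be sketched as follows; the array sizes and the synthetic "conformations" are placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for backbone coordinates: 21 replicate trajectories x 50 frames,
# each frame flattened to 3N coordinates (here N = 10 atoms -> 30 values)
frames = rng.normal(size=(21 * 50, 30))
frames[:, 0] += np.linspace(0, 5, frames.shape[0])   # one dominant slow motion

# PCA: diagonalize the covariance of the mean-centered conformations
centered = frames - frames.mean(axis=0)
cov = np.cov(centered, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]             # sort by decreasing variance
evals, evecs = evals[order], evecs[:, order]
projection = centered @ evecs[:, :2]        # project onto the first two PCs
explained = evals[:2].sum() / evals.sum()   # variance fraction they capture
```

Plotting `projection` per replicate shows how thoroughly the ensemble of short runs covers the conformational space sampled near the homology model.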

  16. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    PubMed

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can computationally decode signals from different microfluidic channels with >90% accuracy even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to creating integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
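The CDMA-style decoding idea — a distinct orthogonal code per channel, recovered by correlation even when signals overlap on the single output — can be illustrated with a toy example. The 4-channel Hadamard codes below are illustrative; the device's actual code set is not reproduced here:

```python
import numpy as np

# 4x4 Hadamard matrix: rows are mutually orthogonal spreading codes,
# analogous to the distinct electrode pattern assigned to each channel
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

# particles in channels 0 and 2 pass their sensors simultaneously:
# the single electrical output is the sum of their code waveforms
combined = H[0] + H[2]

# decode by correlating the combined signal with every channel's code;
# orthogonality makes each correlation zero unless that channel fired
scores = H @ combined / H.shape[1]
detected = [ch for ch, s in enumerate(scores) if s > 0.5]
```

Real signals are additionally scaled and time-shifted, so the actual decoder must search over amplitudes and delays, but the orthogonality principle is the same.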

  17. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    NASA Astrophysics Data System (ADS)

    Eric, L.; Vrugt, J. A.

    2010-12-01

    Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for two synthetic mathematical case studies and two real-world examples involving 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
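The multiple-try Metropolis idea cited above (Liu et al., 2000) — proposing several candidates per step and selecting among them by weight — can be sketched for a one-dimensional target. This is a generic illustration with a symmetric Gaussian proposal, not the MT-DREAM implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * x**2           # standard normal, up to a constant

def mt_metropolis_step(x, k=5, scale=1.0):
    """One multiple-try Metropolis step: draw k candidates around x,
    pick one proportional to the target, accept via a generalized ratio."""
    ys = x + scale * rng.normal(size=k)       # k trial proposals
    wy = np.exp(log_target(ys))
    y = rng.choice(ys, p=wy / wy.sum())       # select proportional to target
    xs = y + scale * rng.normal(size=k - 1)   # reference set drawn around y
    wx = np.exp(log_target(np.append(xs, x)))
    accept = min(1.0, wy.sum() / wx.sum())    # generalized MH acceptance
    return y if rng.random() < accept else x

chain = [0.0]
for _ in range(5000):
    chain.append(mt_metropolis_step(chain[-1]))
chain = np.array(chain)
```

The k candidate evaluations per step are independent, which is what makes the scheme attractive for parallel machines.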

  18. 12. TYPICAL CONCRETE-LINED CANAL/FLUME TRANSITION (LOCATED JUST WEST OF HIGHWAY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. TYPICAL CONCRETE-LINED CANAL/FLUME TRANSITION (LOCATED JUST WEST OF HIGHWAY 190 CROSSOVER). WATER CONVEYANCE SYSTEM IS COMPRISED OF MULTIPLE INTERSET CONCRETE-LINED CANAL AND FLUME SECTIONS. VIEW TO WEST. - Tule River Hydroelectric Project, Water Conveyance System, Middle Fork Tule River, Springville, Tulare County, CA

  19. Conjoin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjaardema, Gregory

    2010-08-06

    Conjoin is a code for joining multiple exodusII database files sequentially in time. It is used to create a single results or restart file from multiple results or restart files, which typically arise from multiple restarted analyses. The resulting output file is the union of the input files, with a status variable indicating the status of each element at the various time planes. Typical uses include combining multiple exodusII files arising from a restarted analysis, or from a finite element analysis with dynamic topology changes.

  20. High-precision measurement of variations in calcium isotope ratios in urine by multiple collector inductively coupled plasma mass spectrometry

    USGS Publications Warehouse

    Morgan, J.L.L.; Gordon, G.W.; Arrua, R.C.; Skulan, J.L.; Anbar, A.D.; Bullen, T.D.

    2011-01-01

    We describe a new chemical separation method to isolate Ca from other matrix elements in biological samples, developed with the long-term goal of making high-precision measurement of natural stable Ca isotope variations a clinically applicable tool to assess bone mineral balance. A new two-column procedure utilizing HBr achieves the purity required to accurately and precisely measure two Ca isotope ratios (44Ca/42Ca and 44Ca/43Ca) on a Neptune multiple collector inductively coupled plasma mass spectrometer (MC-ICPMS) in urine. Purification requirements for Sr, Ti, and K (Ca/Sr > 10000; Ca/Ti > 10000000; and Ca/K > 10) were determined by addition of these elements to Ca standards of known isotopic composition. Accuracy was determined by (1) comparing Ca isotope results for samples and standards to published data obtained using thermal ionization mass spectrometry (TIMS), (2) adding a Ca standard of known isotopic composition to a urine sample purified of Ca, and (3) analyzing mixtures of urine samples and standards in varying proportions. The accuracy and precision of δ44/42Ca measurements of purified samples containing 25 μg of Ca can be determined with typical errors less than ±0.2‰ (2σ).

  1. Qualis-SIS: automated standard curve generation and quality assessment for multiplexed targeted quantitative proteomic experiments with labeled standards.

    PubMed

    Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H

    2015-02-06

    Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determines the recorded responses of the heavy and endogenous peptides and performs the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining, in an iterative process, the concentration levels that meet the required accuracy and precision criteria. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
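The iterative level-removal loop described — fit the curve, back-calculate each calibration level, drop levels failing the accuracy criterion, refit — can be sketched as follows. The concentrations, response ratios, and the 20% bias threshold are hypothetical, not Qualis-SIS defaults:

```python
import numpy as np

# hypothetical calibration data: spiked concentration vs measured
# light/heavy peak-area ratio, with a contaminated lowest level
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
ratio = np.array([0.9, 0.051, 0.098, 0.51, 0.99, 5.05])   # level 1 is an outlier

def fit_curve(conc, ratio, max_bias=0.20):
    """Iteratively drop calibration levels whose back-calculated
    concentration deviates more than max_bias from nominal, then refit."""
    keep = np.ones(len(conc), bool)
    while keep.sum() > 2:
        slope, intercept = np.polyfit(conc[keep], ratio[keep], 1)
        back = (ratio - intercept) / slope          # back-calculated conc
        bias = np.abs(back - conc) / conc           # relative accuracy error
        worst = np.argmax(np.where(keep, bias, -1)) # worst *retained* level
        if bias[worst] <= max_bias:
            break
        keep[worst] = False
    return slope, intercept, keep

slope, intercept, keep = fit_curve(conc, ratio)
unknown = (2.0 - intercept) / slope    # concentration of a sample with ratio 2.0
```

A real implementation would also enforce precision criteria across replicate injections and a minimum number of retained levels.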

  2. Efficient purification of ethene by an ethane-trapping metal-organic framework

    PubMed Central

    Liao, Pei-Qin; Zhang, Wei-Xiong; Zhang, Jie-Peng; Chen, Xiao-Ming

    2015-01-01

    Separating ethene (C2H4) from ethane (C2H6) is of paramount importance and difficulty. Here we show that C2H4 can be efficiently purified by trapping the inert C2H6 in a judiciously designed metal-organic framework. Under ambient conditions, passing a typical cracked gas mixture (15:1 C2H4/C2H6) through 1 litre of this C2H6-selective adsorbent directly produces 56 litres of C2H4 with 99.95%+ purity (the purity required by the C2H4 polymerization reactor) at the outlet in a single breakthrough operation, while other C2H6-selective materials can only produce ⩽1 litre, and conventional C2H4-selective adsorbents require at least four adsorption–desorption cycles to achieve the same C2H4 purity. Single-crystal X-ray diffraction and computational simulation studies showed that the exceptional C2H6 selectivity arises from the proper positioning of multiple electronegative and electropositive functional groups on the ultramicroporous pore surface, which form multiple C–H···N hydrogen bonds with C2H6 instead of the more polar competitor C2H4. PMID:26510376

  3. A distributed database view of network tracking systems

    NASA Astrophysics Data System (ADS)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out-of-order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
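The two-phase commit pattern proposed for track-initiation transactions can be sketched in a few lines; the `Tracker` class and its voting rule are illustrative stand-ins, not the paper's system:

```python
# Minimal two-phase commit sketch: a coordinator asks every peer tracker
# to vote on a proposed new track, and commits only if all vote yes.

class Tracker:
    def __init__(self, name):
        self.name = name
        self.tracks = set()
        self.staged = None

    def prepare(self, track_id):           # phase 1: vote on the transaction
        if track_id in self.tracks:
            return False                   # conflict: track already initiated
        self.staged = track_id
        return True

    def commit(self):                      # phase 2a: make the track durable
        self.tracks.add(self.staged)
        self.staged = None

    def abort(self):                       # phase 2b: roll back the staged track
        self.staged = None

def two_phase_commit(trackers, track_id):
    votes = [t.prepare(track_id) for t in trackers]
    if all(votes):
        for t in trackers:
            t.commit()
        return True
    for t in trackers:
        t.abort()
    return False

peers = [Tracker("A"), Tracker("B"), Tracker("C")]
ok = two_phase_commit(peers, "track-42")    # all vote yes -> committed everywhere
dup = two_phase_commit(peers, "track-42")   # duplicate initiation -> aborted
```

A production version must also handle lost and delayed messages with timeouts and retries, which is exactly the network robustness the paper emphasizes.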

  4. Avian movements and wetland connectivity in landscape conservation

    USGS Publications Warehouse

    Haig, Susan M.; Mehlman, D.W.; Oring, L.W.

    1998-01-01

    The current conservation crisis calls for research and management to be carried out on a long-term, multi-species basis at large spatial scales. Unfortunately, scientists, managers, and agencies often are stymied in their effort to conduct these large-scale studies because of a lack of appropriate technology, methodology, and funding. This issue is of particular concern in wetland conservation, for which the standard landscape approach may include consideration of a large tract of land but fail to incorporate the suite of wetland sites frequently used by highly mobile organisms such as waterbirds (e.g., shorebirds, wading birds, waterfowl). Typically, these species have population dynamics that require use of multiple wetlands, but this aspect of their life history has often been ignored in planning for their conservation. We outline theoretical, empirical, modeling, and planning problems associated with this issue and suggest solutions to some current obstacles. These solutions represent a tradeoff between typical in-depth single-species studies and more generic multi-species studies. They include studying within- and among-season movements of waterbirds on a spatial scale appropriate to both widely dispersing and more stationary species; multi-species censuses at multiple sites; further development and use of technology such as satellite transmitters and population-specific molecular markers; development of spatially explicit population models that consider within-season movements of waterbirds; and recognition from funding agencies that landscape-level issues cannot adequately be addressed without support for these types of studies.

  5. Automatic Mesh Generation of Hybrid Mesh on Valves in Multiple Positions in Feedline Systems

    NASA Technical Reports Server (NTRS)

    Ross, Douglass H.; Ito, Yasushi; Dorothy, Fredric W.; Shih, Alan M.; Peugeot, John

    2010-01-01

    Fluid flow simulations through a valve often require evaluation of the valve in multiple opening positions. A mesh has to be generated for the valve in each position, and compounding the problem is the fact that the valve is typically part of a larger feedline system. In this paper, we propose to develop a system to create meshes for feedline systems with parametrically controlled valve openings. Herein we outline two approaches to generate the meshes for a valve in a feedline system at multiple positions. There are two issues that must be addressed. The first is the creation of the mesh on the valve for multiple positions. The second is the generation of the mesh for the total feedline system including the valve. For generation of the mesh on the valve, we describe the use of topology matching and mesh generation parameter transfer. For generation of the total feedline system, we describe two solutions that we have implemented. In both cases the valve is treated as a component in the feedline system. In the first method, the geometry of the valve in the feedline system is replaced with a valve at a different opening position. Geometry is created to connect the valve to the feedline system. Topology for the valve is then created, and the portion of the topology for the valve is matched to the standard valve in a different position. The mesh generation parameters are transferred and the volume mesh for the whole feedline system is generated. The second method enables the user to generate the volume mesh on the valve in multiple open positions external to the feedline system and to insert it into the volume mesh of the feedline system; this reduces the amount of computer time required for mesh generation because only two small volume meshes connecting the valve to the feedline mesh need to be updated.

  6. Effect of Multiple Testing Adjustment in Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    Kim, Jihye; Oshima, T. C.

    2013-01-01

    In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
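The familywise-error problem described above — per-item significance tests inflating the chance of at least one Type I error — is typically handled by p-value adjustment. A minimal sketch of two standard adjustments; the item p-values are made up for illustration:

```python
def bonferroni(pvals):
    """Bonferroni: multiply each p-value by the number of tests m."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down: uniformly more powerful than Bonferroni while
    still controlling the familywise error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    adjusted = [0.0] * m
    running_max = 0.0                                  # enforce monotonicity
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

item_pvals = [0.001, 0.013, 0.04, 0.2]   # one DIF test p-value per item
bonf = bonferroni(item_pvals)            # [0.004, 0.052, 0.16, 0.8]
holm_adj = holm(item_pvals)              # [0.004, 0.039, 0.08, 0.2]
```

At a 0.05 familywise level, Bonferroni flags only the first item while Holm flags the first two, illustrating the power difference the study investigates.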

  7. SU-E-T-504: Intensity-Modulated Radiosurgery Treatments Derived by Optimizing Delivery of Sphere Packing Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermansen, M; Bova, F; John, T St.

    2015-06-15

    Purpose: To minimize the number of monitor units (MUs) required to deliver a sphere packing stereotactic radiosurgery (SRS) plan by eliminating overlaps of individual beam projections. Methods: An algorithm was written in C++ to calculate SRS treatment doses using sphere packing. Three fixed beams were used to approximate each arc in a typical SRS treatment plan. For cases involving multiple isocenters, at each gantry and table angle position, beams directed to individual spheres overlap to produce regions of high dose, resulting in intensity modulated beams. These high dose regions were dampened by post-processing of the combined beam profile. The post-process dampening involves removing the excess overlapping fluence from all but the highest contributing beam. The dampened beam profiles at each table and gantry angle position were then summed to produce the new total dose distribution. Results: Delivery times for even the most complex multiple sphere plans can be reduced to consistent times of about 20 to 30 minutes. The total MUs required to deliver the plan can also be reduced by as much as 85% relative to the original plan. Conclusion: Regions of high dose are removed. Dampening overlapping radiation fluence can produce new beam profiles that have more uniform dose distributions using fewer MUs. This results in a treatment that requires significantly fewer intensity values than traditional IMRT or VMAT planning.
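The dampening step — removing excess overlapping fluence from all but the highest contributing beam — amounts, per fluence element, to replacing the sum of the beam contributions with their maximum. A toy sketch with made-up profiles:

```python
import numpy as np

# fluence profiles at one gantry/table position, one row per sphere's beam
# (values are illustrative, not from the study)
profiles = np.array([[0.0, 1.0, 1.0, 0.2],
                     [0.0, 0.8, 1.0, 1.0]])

summed = profiles.sum(axis=0)      # naive sum: hot spots where beams overlap
dampened = profiles.max(axis=0)    # keep only the highest contributor per element
excess = summed - dampened         # overlapping fluence (and hence MUs) removed
```

Summing the dampened profiles over all table and gantry positions then yields the flatter total dose distribution the abstract reports.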

  8. Target-directed catalytic metallodrugs

    PubMed Central

    Joyner, J.C.; Cowan, J.A.

    2013-01-01

    Most drugs function by binding reversibly to specific biological targets, and therapeutic effects generally require saturation of these targets. One means of decreasing required drug concentrations is incorporation of reactive metal centers that elicit irreversible modification of targets. A common approach has been the design of artificial proteases/nucleases containing metal centers capable of hydrolyzing targeted proteins or nucleic acids. However, these hydrolytic catalysts typically provide relatively low rate constants for target inactivation. Recently, various catalysts were synthesized that use oxidative mechanisms to selectively cleave/inactivate therapeutic targets, including HIV RRE RNA or angiotensin converting enzyme (ACE). These oxidative mechanisms, which typically involve reactive oxygen species (ROS), provide access to comparatively high rate constants for target inactivation. Target-binding affinity, co-reactant selectivity, reduction potential, coordination unsaturation, ROS products (metal-associated vs metal-dissociated; hydroxyl vs superoxide), and multiple-turnover redox chemistry were studied for each catalyst, and these parameters were related to the efficiency, selectivity, and mechanism(s) of inactivation/cleavage of the corresponding target for each catalyst. Important factors for future oxidative catalyst development are 1) positioning of catalyst reduction potential and redox reactivity to match the physiological environment of use, 2) maintenance of catalyst stability by use of chelates with either high denticity or other means of stabilization, such as the square planar geometric stabilization of Ni- and Cu-ATCUN complexes, 3) optimal rate of inactivation of targets relative to the rate of generation of diffusible ROS, 4) targeting and linker domains that afford better control of catalyst orientation, and 5) general bio-availability and drug delivery requirements. PMID:23828584

  9. How much medicine do spine surgeons need to know to better select and care for patients?

    PubMed Central

    Epstein, Nancy E.

    2012-01-01

    Background: Although we routinely utilize medical consultants for preoperative clearance and postoperative patient follow-up, we as spine surgeons need to know more medicine to better select and care for our patients. Methods: This study provides additional medical knowledge to facilitate surgeons’ “cross-talk” with medical colleagues who are concerned about how multiple comorbid risk factors affect their preoperative clearance, and impact patients’ postoperative outcomes. Results: Within 6 months of an acute myocardial infarction (MI), patients undergoing urological surgery encountered a 40% mortality rate; similar rates likely apply to patients undergoing spinal surgery. Within 6 weeks to 2 months of placing uncoated cardiac, carotid, or other stents, endothelialization is typically complete; as anti-platelet therapy may often be discontinued, spinal surgery can then be more safely performed. Coated stents, however, usually require 6 months to 1 year for endothelialization to occur; thus spinal surgery is often delayed as anti-platelet therapy must typically be continued to avoid thrombotic complications (e.g., stroke/MI). Diabetes and morbid obesity both increase the risk of postoperative infection, and poor wound healing, while the latter increases the risk of phlebitis/pulmonary embolism. Both hypercoagulation and hypocoagulation syndromes may require special preoperative testing/medications and/or transfusions of specific hematological factors. Pulmonary disease, neurological disorders, and major psychiatric pathology may also require further evaluations/therapy, and may even preclude successful surgical intervention. Conclusions: Although we as spinal surgeons utilize medical consultants for preoperative clearance and postoperative care, we need to know more medicine to better select and care for our patients. PMID:23248752

  10. MOSAIC--A Modular Approach to Data Management in Epidemiological Studies.

    PubMed

    Bialke, M; Bahls, T; Havemann, C; Piegsa, J; Weitmann, K; Wegner, T; Hoffmann, W

    2015-01-01

    In the context of an increasing number of multi-centric studies providing data from different sites and sources, the necessity for central data management (CDM) becomes undeniable. This is exacerbated by a multiplicity of data types, formats and interfaces. In relation to methodological medical research, the definition of central data management needs to be broadened beyond the simple storage and archiving of research data. This paper highlights typical requirements of CDM for cohort studies and registries and illustrates how orientation for CDM can be provided by addressing selected data management challenges. In the first part of this paper, a short review summarises technical, organisational and legal challenges for CDM in cohort studies and registries, followed by a deduced set of typical requirements of CDM in epidemiological research. In the second part, the MOSAIC project (a modular systematic approach to implementing CDM) is introduced. The modular nature of MOSAIC helps manage both technical and organisational challenges efficiently by providing practical tools. A first set of tools, addressing selected CDM requirements in cohort studies and registries, comprises a template for comprehensive documentation of data protection measures and an interactive reference portal for gaining insights and sharing experiences, supplemented by modular software tools for the generation and management of generic pseudonyms, for participant management, and for sophisticated consent management. Altogether, work within MOSAIC addresses existing challenges in epidemiological research in the context of CDM and facilitates the standardized collection of data with pre-programmed modules and provided document templates. The necessary effort for in-house programming is reduced, which accelerates the start of data collection.

  11. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.

  12. Ocelots on Barro Colorado Island are infected with feline immunodeficiency virus but not other common feline and canine viruses.

    PubMed

    Franklin, Samuel P; Kays, Roland W; Moreno, Ricardo; TerWee, Julie A; Troyer, Jennifer L; VandeWoude, Sue

    2008-07-01

    Transmission of pathogens from domestic animals to wildlife populations (spill-over) has precipitated local wildlife extinctions in multiple geographic locations. Identifying such events before they cause population declines requires differentiating spillover from endemic disease, a challenge complicated by a lack of baseline data from wildlife populations that are isolated from domestic animals. We tested sera collected from 12 ocelots (Leopardus pardalis) native to Barro Colorado Island, Panama, which is free of domestic animals, for antibodies to feline herpes virus, feline calicivirus, feline corona virus, feline panleukopenia virus, canine distemper virus, and feline immunodeficiency virus (FIV), typically a species-specific infection. Samples also were tested for feline leukemia virus antigens. Positive test results were observed only for FIV; 50% of the ocelots were positive. We hypothesize that isolation of this population has prevented introduction of pathogens typically attributed to contact with domestic animals. The high density of ocelots on Barro Colorado Island may contribute to a high prevalence of FIV infection, as would be expected with increased contact rates among conspecifics in a geographically restricted population.

  13. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.

  14. SAR (Synthetic Aperture Radar). Earth observing system. Volume 2F: Instrument panel report

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The scientific and engineering requirements for the Earth Observing System (EOS) imaging radar are provided. The radar is based on Shuttle Imaging Radar-C (SIR-C), and would include three frequencies: 1.25 GHz, 5.3 GHz, and 9.6 GHz; selectable polarizations for both transmit and receive channels; and selectable incidence angles from 15 to 55 deg. There would be three main viewing modes: a local high-resolution mode with typically 25 m resolution and 50 km swath width; a regional mapping mode with 100 m resolution and up to 200 km swath width; and a global mapping mode with typically 500 m resolution and up to 700 km swath width. The last mode allows global coverage in three days. The EOS SAR will be the first orbital imaging radar to provide multifrequency, multipolarization, multiple incidence angle observations of the entire Earth. Combined with Canadian and Japanese satellites, continuous radar observation capability will be possible. Major applications in the areas of glaciology, hydrology, vegetation science, oceanography, geology, and data and information systems are described.

  15. Late onset of atypical paroxysmal non-kinesigenic dyskinesia with remote history of Graves' disease.

    PubMed

    Rana, Abdul Qayyum; Nadeem, Ambreen; Yousuf, Muhammad Saad; Kachhvi, Zakerabibi M

    2013-10-01

Paroxysmal non-kinesigenic dyskinesia (PNKD) is a rare hyperkinetic movement disorder and falls under the category of paroxysmal movement disorders. In this condition, episodes are spontaneous, involuntary, and involve dystonic posturing with choreic and ballistic movements. Attacks last for minutes to hours and rarely occur more than once per day. Attacks are not typically triggered by sudden movement, but may be brought on by alcohol, caffeine, stress, fatigue, or chocolate. We report a patient with multiple atypical features of PNKD. She had a 7-year history of this condition with onset at the age of 59, and a remote history of Graves' disease requiring total thyroidectomy. The frequency of attacks in our case ranged from five to six times a day to a minimum of twice per week, and the duration of each episode was short, lasting not more than 2 min. Typically, PNKDs occur at a much younger age and have longer attack durations with low frequency. Administering clonazepam reduced her symptoms, although the majority of previous research suggests that pharmacological interventions have poor outcomes.

  16. Ocelots on Barro Colorado Island Are Infected with Feline Immunodeficiency Virus but Not Other Common Feline and Canine Viruses

    PubMed Central

    Franklin, Samuel P.; Kays, Roland W.; Moreno, Ricardo; TerWee, Julie A.; Troyer, Jennifer L.; VandeWoude, Sue

    2011-01-01

Transmission of pathogens from domestic animals to wildlife populations (spillover) has precipitated local wildlife extinctions in multiple geographic locations. Identifying such events before they cause population declines requires differentiating spillover from endemic disease, a challenge complicated by a lack of baseline data from wildlife populations that are isolated from domestic animals. We tested sera collected from 12 ocelots (Leopardus pardalis) native to Barro Colorado Island, Panama, which is free of domestic animals, for antibodies to feline herpes virus, feline calicivirus, feline corona virus, feline panleukopenia virus, canine distemper virus, and feline immunodeficiency virus (FIV), typically a species-specific infection. Samples also were tested for feline leukemia virus antigens. Positive test results were observed only for FIV; 50% of the ocelots were positive. We hypothesize that isolation of this population has prevented introduction of pathogens typically attributed to contact with domestic animals. The high density of ocelots on Barro Colorado Island may contribute to a high prevalence of FIV infection, as would be expected with increased contact rates among conspecifics in a geographically restricted population. PMID:18689668

  17. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of the ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements, taking into account current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  18. Screening and Spectral Summing of LANL Empty Waste Drums - 13226

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruetzmacher, Kathleen M.; Bustos, Roland M.; Ferran, Scott G.

    2013-07-01

Empty 55-gallon drums that formerly held transuranic (TRU) waste (often over-packed in 85-gallon drums) are generated at LANL and require radiological characterization for disposition. These drums are typically measured and analyzed individually using high purity germanium (HPGe) gamma detectors. This approach can be resource and time intensive. For a project requiring several hundred drums to be characterized in a short time frame, an alternative approach was developed. The approach utilizes a combination of field screening and spectral summing that was required to be technically defensible and meet the Nevada Nuclear Security Site (NNSS) Waste Acceptance Criteria (WAC). In the screening phase of the operation, the drums were counted for 300 seconds (compared to 600 seconds for the typical approach) and checked against Low Level (LL)/TRU thresholds established for each drum configuration and detector. Multiple TRU nuclides and multiple gamma rays for each nuclide were evaluated using an automated spreadsheet utility that can process data from up to 42 drums at a time. Screening results were reviewed by an expert analyst to confirm the field LL/TRU determination. The spectral summing analysis technique combines spectral data (channel-by-channel) associated with a group of individual waste containers, producing a composite spectrum. The grouped drums must meet specific similarity criteria. Another automated spreadsheet utility was used to spectral sum data from an unlimited number of similar drums grouped together. The composite spectrum represents a virtual combined drum for the group of drums and was analyzed using the SNAP™/Radioassay Data Sheet (RDS)/Batch Data Report (BDR) method. The activity results for a composite virtual drum were divided equally amongst the individual drums to generate characterization results for each individual drum in the group.
An initial batch of approximately 500 drums was measured and analyzed in less than 2 months in 2011. A second batch of approximately 500 more drums was measured and analyzed during the following 2 1/2 months. Four different HPGe detectors were employed for the operation. The screening and spectral summing approach can reduce the overall measurement and analysis time required. However, developing the technical details and automation spreadsheets requires a significant amount of expert time prior to beginning field operations and must be considered in the overall project schedule. This approach has continued to be used for characterizing several hundred more empty drums in 2012 and is planned to continue in 2013. (authors)
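The spectral-summing step described above reduces to two simple operations: combine the spectra of a group of similar drums channel-by-channel into one composite spectrum, then split the activity found for the composite "virtual" drum equally back over the individual drums. A minimal sketch, with made-up channel counts and an illustrative activity value rather than actual LANL data:

```python
# Hedged sketch of channel-by-channel spectral summing and equal
# apportionment of the composite drum's activity.

def spectral_sum(spectra):
    """Channel-by-channel sum of equally sized gamma spectra."""
    n_channels = len(spectra[0])
    assert all(len(s) == n_channels for s in spectra), "spectra must match"
    return [sum(s[ch] for s in spectra) for ch in range(n_channels)]

def apportion(composite_activity_bq, n_drums):
    """Divide the composite virtual drum's activity equally among the group."""
    return [composite_activity_bq / n_drums] * n_drums

# three similar drums with toy 5-channel spectra (counts per channel)
drums = [
    [10, 52, 7, 3, 1],
    [12, 48, 9, 2, 0],
    [11, 50, 8, 4, 2],
]
composite = spectral_sum(drums)         # one composite spectrum for the group
per_drum = apportion(90.0, len(drums))  # equal activity assigned to each drum
```

Summing counts this way improves counting statistics for the group relative to each short individual screening count, which is what makes the shorter 300-second field counts workable.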

  19. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

    ERIC Educational Resources Information Center

    Morse, Anthony F.; Cangelosi, Angelo

    2017-01-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between…

  20. Effects of Peer Mediation on Preschoolers' Compliance and Compliance Precursors

    ERIC Educational Resources Information Center

    Beaulieu, Lauren; Hanley, Gregory P.; Roberson, Aleasha A.

    2013-01-01

    We used a multiple baseline design across participants to evaluate the effects of teaching 4 typically developing preschoolers to attend to their names and to a group call (referred to as "precursors") on their compliance with typical classroom instructions. We then measured the extent to which the effects on both precursors and…

  1. The Value of Using Multiple Metrics to Evaluate PCB Exposure.

    PubMed

    Archer, Megan C; Harwood, Amanda D; Nutile, Samuel A; Hartz, Kara E Huff; Mills, Marc A; Garvey, Jim E; Lydy, Michael J

    2018-04-01

Current methods for evaluating exposure in ecosystems contaminated with hydrophobic organic contaminants typically focus on sediment exposure. However, a comprehensive environmental assessment requires a more holistic approach that not only estimates sediment concentrations, but also accounts for exposure by quantifying other pathways, such as bioavailability, bioaccumulation, trophic transfer potential, and transport of hydrophobic organic contaminants within and outside of the aquatic system. The current study evaluated the ability of multiple metrics to estimate exposure in an aquatic ecosystem. This study utilized a small lake contaminated with polychlorinated biphenyls (PCBs) to evaluate exposure to multiple trophic levels as well as the transport of these contaminants within and outside of the lake. The PCBs were localized to sediments in one area of the lake, yet this area served as the source of PCBs to aquatic invertebrates, emerging insects, and fish and terrestrial spiders in the riparian ecosystem. The Tenax extractable and biota PCB concentrations indicated that tissue concentrations were localized to benthic invertebrates and riparian spiders in a specific cove. Fish data, however, demonstrated that fish throughout the lake carried PCBs in their tissues, indicating a wider exposure risk. The inclusion of PCB exposure measures at several trophic levels provided multiple lines of evidence on the scope of exposure through the aquatic and riparian food web, which aids in assessing risk and developing potential future remediation strategies.

  2. Multiple myeloma: diagnosis and treatment.

    PubMed

    Nau, Konrad C; Lewis, William D

    2008-10-01

    Multiple myeloma, the most common bone malignancy, is occurring with increasing frequency in older persons. Typical symptoms are bone pain, malaise, anemia, renal insufficiency, and hypercalcemia. Incidental discovery on comprehensive laboratory panels is common. The disease is diagnosed with serum or urine protein electrophoresis or immunofixation and bone marrow aspirate analysis. Skeletal radiographs are important in staging multiple myeloma and revealing lytic lesions, vertebral compression fractures, and osteoporosis. Magnetic resonance imaging and positron emission tomography or computed tomography are emerging as useful tools in the evaluation of patients with myeloma; magnetic resonance imaging is preferred for evaluating acute spinal compression. Nuclear bone scans and dual energy x-ray absorptiometry have no role in the diagnosis and staging of myeloma. The differential diagnosis of monoclonal gammopathies includes monoclonal gammopathy of uncertain significance, smoldering (asymptomatic) and symptomatic multiple myeloma, amyloidosis, B-cell non-Hodgkin lymphoma, Waldenström macroglobulinemia, and rare plasma cell leukemia and heavy chain diseases. Patients with monoclonal gammopathy of uncertain significance or smoldering multiple myeloma should be followed closely, but not treated. Symptomatic multiple myeloma is treated with chemotherapy followed by autologous stem cell transplantation, if possible. Melphalan, prednisolone, dexamethasone, vincristine, doxorubicin, bortezomib, and thalidomide and its analogue lenalidomide have been used successfully. It is important that family physicians recognize and appropriately treat multiple myeloma complications. Bone pain is treated with opiates, bisphosphonates, radiotherapy, vertebroplasty, or kyphoplasty; nephrotoxic nonsteroidal anti-inflammatory drugs should be avoided. Hypercalcemia is treated with isotonic saline infusions, steroids, furosemide, or bisphosphonates. 
Because of susceptibility to infections, patients require broad-spectrum antibiotics for febrile illness and immunization against influenza, pneumococcus, and Haemophilus influenzae B. Five-year survival rates approach 33 percent, and the median survival is 33 months.

  3. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
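The quantity at the heart of the analysis above is the irreducible error, the mean squared deviation of the target about the optimal estimator E[q | parameters]. A minimal sketch of the histogram technique for a single input parameter (where the spurious contribution discussed in the abstract does not yet arise); the data are synthetic, and the binning choices are illustrative:

```python
# Hedged sketch of a histogram-based optimal-estimator analysis:
# bin the parameter, take the conditional mean of the target per bin,
# and measure the residual variance about those conditional means.
import random

random.seed(0)

def irreducible_error(pi, q, n_bins=20):
    """Histogram estimate of E[(q - E[q|pi])^2] for a 1-D parameter."""
    lo, hi = min(pi), max(pi)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for p, val in zip(pi, q):
        idx = min(int((p - lo) / width), n_bins - 1)
        bins[idx].append(val)
    # conditional mean per bin, then mean squared deviation about it
    sq_err, n = 0.0, 0
    for vals in bins:
        if not vals:
            continue
        mean = sum(vals) / len(vals)
        sq_err += sum((v - mean) ** 2 for v in vals)
        n += len(vals)
    return sq_err / n

# target q = pi^2 plus noise that the parameter cannot explain,
# so the true irreducible error is the noise variance, 0.01
pi = [random.uniform(0.0, 1.0) for _ in range(20000)]
q = [p * p + random.gauss(0.0, 0.1) for p in pi]
err = irreducible_error(pi, q)
```

With several input parameters the number of bins grows exponentially and each bin holds few samples, which is exactly the spurious contribution the paper identifies; regression-based estimators of the conditional mean (neural networks, adaptive splines) avoid that sparsity.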

  4. Hospitals' Internal Accountability

    PubMed Central

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K.; Deber, Raisa B.

    2014-01-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  5. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
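The factorization idea above can be illustrated in miniature: group each species' Slater determinants into blocks by an additive quantum number (here the total angular-momentum projection M), and assemble the many-body space from matching block pairs instead of one flat list. The single-particle m-values below are invented for illustration and this is not the BIGSTICK data structure:

```python
# Toy sketch of basis factorization by an additive quantum number.
from itertools import combinations

def sector_blocks(m_values, n_particles):
    """Group one species' Slater determinants by total M projection."""
    blocks = {}
    for occ in combinations(range(len(m_values)), n_particles):
        M = sum(m_values[i] for i in occ)
        blocks.setdefault(M, []).append(occ)
    return blocks

# 2 protons and 2 neutrons in four single-particle states each;
# m-values are doubled so they stay integers
m_sp = [-3, -1, 1, 3]
protons = sector_blocks(m_sp, 2)
neutrons = sector_blocks(m_sp, 2)

# the M = 0 many-body space is assembled from matching block pairs,
# so only the small per-sector blocks ever need to be stored
target_M = 0
dim = sum(len(p_block) * len(neutrons.get(target_M - Mp, []))
          for Mp, p_block in protons.items())
print(dim)   # dimension of the M = 0 many-body basis
```

Because the full basis is reconstructed on the fly from the two small sector lists, storage scales with the sector sizes rather than with their product, which is the source of the memory savings the abstract reports.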

  6. Public Reception of Climate Science: Coherence, Reliability, and Independence.

    PubMed

    Hahn, Ulrike; Harris, Adam J L; Corner, Adam

    2016-01-01

    Possible measures to mitigate climate change require global collective actions whose impacts will be felt by many, if not all. Implementing such actions requires successful communication of the reasons for them, and hence the underlying climate science, to a degree that far exceeds typical scientific issues which do not require large-scale societal response. Empirical studies have identified factors, such as the perceived level of consensus in scientific opinion and the perceived reliability of scientists, that can limit people's trust in science communicators and their subsequent acceptance of climate change claims. Little consideration has been given, however, to recent formal results within philosophy concerning the relationship between truth, the reliability of evidence sources, the coherence of multiple pieces of evidence/testimonies, and the impact of (non-)independence between sources of evidence. This study draws on these results to evaluate exactly what has (and, more important, has not yet) been established in the empirical literature about the factors that bias the public's reception of scientific communications about climate change. Copyright © 2015 Cognitive Science Society, Inc.

  7. Quantifying the energy required for groundwater pumping across a regional aquifer system

    NASA Astrophysics Data System (ADS)

    Ronayne, M. J.; Shugert, D. T.

    2017-12-01

Groundwater pumping can be a substantial source of energy expenditure, particularly in semiarid regions with large depths to water. In this study we assessed the energy required for groundwater pumping in the Denver Basin aquifer system, a group of sedimentary rock aquifers used for municipal water supply in Colorado. In recent decades, declining water levels in the Denver Basin aquifers have resulted in increased pumping lifts and higher energy use rates. We quantified the spatially variable energy intensity for groundwater pumping by analyzing spatial variations in the lift requirement. The median energy intensities for two major aquifers were 1.2 and 1.8 kWh m⁻³. Considering typical municipal well production rates and household water use in the study area, these results indicate that the energy cost associated with groundwater pumping can be a significant fraction (>20%) of the total electricity consumption for all household end uses. Pumping at this scale (hundreds of municipal wells producing from deep aquifers) also generates substantial greenhouse gas emissions. Analytical wellfield modeling conducted as part of this study clearly demonstrates how multiple components of the lift impact the energy requirement. Results provide guidance for water management strategies that reduce energy expenditure.
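A back-of-envelope check of the energy-intensity figures quoted above: the minimum energy to raise one cubic metre of water is rho·g·h, divided by the wire-to-water pump efficiency. The 60% efficiency and the lift used here are illustrative assumptions, not values reported by the study:

```python
# Hedged sketch: energy intensity of groundwater pumping from the lift.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2
J_PER_KWH = 3.6e6

def energy_intensity_kwh_per_m3(lift_m, pump_efficiency=0.6):
    """Electrical energy to pump one cubic metre against the given lift."""
    return RHO * G * lift_m / pump_efficiency / J_PER_KWH

# a lift of roughly 265 m at 60% efficiency reproduces the
# 1.2 kWh/m^3 median quoted for one of the aquifers
print(round(energy_intensity_kwh_per_m3(265.0), 2))
```

The same relation makes clear why declining water levels directly raise energy use: the intensity is linear in the lift, so every additional metre of drawdown costs proportionally more electricity per cubic metre pumped.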

  8. Managing quality and compliance.

    PubMed

    McNeil, Alice; Koppel, Carl

    2015-01-01

    Critical care nurses assume vital roles in maintaining patient care quality. There are distinct facets to the process including standard setting, regulatory compliance, and completion of reports associated with these endeavors. Typically, multiple niche software applications are required and user interfaces are varied and complex. Although there are distinct quality indicators that must be tracked as well as a list of serious or sentinel events that must be documented and reported, nurses may not know the precise steps to ensure that information is properly documented and actually reaches the proper authorities for further investigation and follow-up actions. Technology advances have permitted the evolution of a singular software platform, capable of monitoring quality indicators and managing all facets of reporting associated with regulatory compliance.

  9. Flaw Stability Considering Residual Stress for Aging Management of Spent Nuclear Fuel Multiple-Purpose Canisters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Poh-Sang; Sindelar, Robert L.

A typical multipurpose canister (MPC) is made of austenitic stainless steel and is loaded with spent nuclear fuel assemblies. Because heat treatment for stress relief is not required for the construction of the MPC, the canister is susceptible to stress corrosion cracking in the weld or heat affected zone regions under long-term storage conditions. Logic for flaw acceptance is developed should crack-like flaws be detected by Inservice Inspection. The procedure recommended by API 579-1/ASME FFS-1, Fitness-for-Service, is used to calculate the instability crack length or depth by failure assessment diagram. It is demonstrated that the welding residual stress has a strong influence on the results.

  10. Cerebral Fat Embolism: Recognition, Complications, and Prognosis.

    PubMed

    Godoy, Daniel Agustín; Di Napoli, Mario; Rabinstein, Alejandro A

    2017-09-20

    Fat embolism syndrome (FES) is a rare syndrome caused by embolization of fat particles into multiple organs including the brain. It typically manifests with petechial rash, deteriorating mental status, and progressive respiratory insufficiency, usually occurring within 24-48 h of trauma with long-bone fractures or an orthopedic surgery. The diagnosis of FES is based on clinical and imaging findings, but requires exclusion of alternative diagnoses. Although there is no specific treatment for FES, prompt recognition is important because it can avoid unnecessary interventions and clarify prognosis. Patients with severe FES can become critically ill, but even comatose patients with respiratory failure may recover favorably. Prophylactic measures, such as early stabilization of fractures and certain intraoperative techniques, may help decrease the incidence and severity of FES.

  11. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.

  12. Collaborative voxel-based surgical virtual environments.

    PubMed

    Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan

    2008-01-01

    Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.

  13. Flaw Stability Considering Residual Stress for Aging Management of Spent Nuclear Fuel Multiple-Purpose Canisters

    DOE PAGES

    Lam, Poh-Sang; Sindelar, Robert L.

    2016-04-28

A typical multipurpose canister (MPC) is made of austenitic stainless steel and is loaded with spent nuclear fuel assemblies. Because heat treatment for stress relief is not required for the construction of the MPC, the canister is susceptible to stress corrosion cracking in the weld or heat affected zone regions under long-term storage conditions. Logic for flaw acceptance is developed should crack-like flaws be detected by Inservice Inspection. The procedure recommended by API 579-1/ASME FFS-1, Fitness-for-Service, is used to calculate the instability crack length or depth by failure assessment diagram. It is demonstrated that the welding residual stress has a strong influence on the results.

  14. Multiparametric imaging of brain hemodynamics and function using gas-inhalation MRI.

    PubMed

    Liu, Peiying; Welch, Babu G; Li, Yang; Gu, Hong; King, Darlene; Yang, Yihong; Pinho, Marco; Lu, Hanzhang

    2017-02-01

Diagnosis and treatment monitoring of cerebrovascular diseases routinely require hemodynamic imaging of the brain. Current methods either only provide part of the desired information or require the injection of multiple exogenous agents. In this study, we developed a multiparametric imaging scheme for the imaging of brain hemodynamics and function using gas-inhalation MRI. The proposed technique uses a single MRI scan to provide simultaneous measurements of baseline venous cerebral blood volume (vCBV), cerebrovascular reactivity (CVR), bolus arrival time (BAT), and resting-state functional connectivity (fcMRI). This was achieved with a novel, concomitant O2 and CO2 gas inhalation paradigm, rapid MRI image acquisition with a 9.3 min BOLD sequence, and an advanced algorithm to extract multiple hemodynamic information from the same dataset. In healthy subjects, CVR and vCBV values were 0.23±0.03%/mmHg and 0.0056±0.0006%/mmHg, respectively, with a strong correlation (r=0.96 for CVR and r=0.91 for vCBV) with more conventional, separate acquisitions that take twice the scan time. In patients with Moyamoya syndrome, CVR in the stenosis-affected flow territories (typically anterior-cerebral-artery, ACA, and middle-cerebral-artery, MCA, territories) was significantly lower than that in posterior-cerebral-artery (PCA), which typically has minimal stenosis, flow territories (0.12±0.06%/mmHg vs. 0.21±0.05%/mmHg, p<0.001). BAT of the gas bolus was significantly longer (p=0.008) in ACA/MCA territories, compared to PCA, and the maps were consistent with the conventional contrast-enhanced CT perfusion method. FcMRI networks were robustly identified from the gas-inhalation MRI data after factoring out the influence of CO2 and O2 on the signal time course. 
The spatial correspondence between the gas-data-derived fcMRI maps and those using a separate, conventional fcMRI scan was excellent, showing a spatial correlation of 0.58±0.17 and 0.64±0.20 for default mode network and primary visual network, respectively. These findings suggest that advanced gas-inhalation MRI provides reliable measurements of multiple hemodynamic parameters within a clinically acceptable imaging time and is suitable for patient examinations. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Multiparametric imaging of brain hemodynamics and function using gas-inhalation MRI

    PubMed Central

    Liu, Peiying; Welch, Babu G.; Li, Yang; Gu, Hong; King, Darlene; Yang, Yihong; Pinho, Marco; Lu, Hanzhang

    2016-01-01

    Diagnosis and treatment monitoring of cerebrovascular diseases routinely require hemodynamic imaging of the brain. Current methods either only provide part of the desired information or require the injection of multiple exogenous agents. In this study, we developed a multiparametric imaging scheme for the imaging of brain hemodynamics and function using gas-inhalation MRI. The proposed technique uses a single MRI scan to provide simultaneous measurements of baseline venous cerebral blood volume (vCBV), cerebrovascular reactivity (CVR), bolus arrival time (BAT), and resting-state functional connectivity (fcMRI). This was achieved with a novel, concomitant O2 and CO2 gas inhalation paradigm, rapid MRI image acquisition with a 9.3 min BOLD sequence, and an advanced algorithm to extract multiple hemodynamic information from the same dataset. In healthy subjects, CVR and vCBV values were 0.23±0.03 %/mmHg and 0.0056±0.0006 %/mmHg, respectively, with a strong correlation (r=0.96 for CVR and r=0.91 for vCBV) with more conventional, separate acquisitions that take twice the scan time. In patients with Moyamoya syndrome, CVR in the stenosis-affected flow territories (typically anterior-cerebral-artery, ACA, and middle-cerebral-artery, MCA, territories) was significantly lower than that in posterior-cerebral-artery (PCA), which typically has minimal stenosis, flow territories (0.12±0.06 %/mmHg vs. 0.21±0.05 %/mmHg, p<0.001). BAT of the gas bolus was significantly longer (p=0.008) in ACA/MCA territories, compared to PCA, and the maps were consistent with the conventional contrast-enhanced CT perfusion method. FcMRI networks were robustly identified from the gas-inhalation MRI data after factoring out the influence of CO2 and O2 on the signal time course. 
The spatial correspondence between the gas-data-derived fcMRI maps and those using a separate, conventional fcMRI scan was excellent, showing a spatial correlation of 0.58±0.17 and 0.64±0.20 for default mode network and primary visual network, respectively. These findings suggest that advanced gas-inhalation MRI provides reliable measurements of multiple hemodynamic parameters within a clinically acceptable imaging time and is suitable for patient examinations. PMID:27693197

  16. Rights of Conscience Protections for Armed Forces Service Members and Their Chaplains

    DTIC Science & Technology

    2015-07-22

    established five categories of religious accommodation requests: dietary, grooming, medical, uniform, and worship practices.2 • Dietary: typically, these... • Medical: typically, these are requests for a waiver of mandatory immunizations. • Uniform: typically, these are requests to wear religious jewelry or... service members in their units. Requirements A chaplain applicant is required to meet DoD medical and physical standards for commissioning as an

  17. Cognitive Diagnostic Models for Tests with Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Chen, Chun-Hua; Yang, Chih-Wei; Mok, Magdalena Mo Ching

    2016-01-01

    Traditionally, teachers evaluate students' abilities via their total test scores. Recently, cognitive diagnostic models (CDMs) have begun to provide information about the presence or absence of students' skills or misconceptions. Nevertheless, CDMs are typically applied to tests with multiple-choice (MC) items, which provide less diagnostic…

  18. Self-Rated Estimates of Multiple Intelligences Based on Approaches to Learning

    ERIC Educational Resources Information Center

    Bowles, Terry

    2008-01-01

    To date questionnaires that measure Multiple Intelligences (MIs) have typically not been systematically developed, have poor psychometric properties, and relatively low reliability. The aim of this research was to define the factor structure, and reliability of nine talents which are the behavioural outcomes of MIs, using items representing…

  19. Hidden Item Variance in Multiple Mini-Interview Scores

    ERIC Educational Resources Information Center

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  20. A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons

    ERIC Educational Resources Information Center

    Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.

    2016-01-01

    Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…

  1. The Impact of Noninvariant Intercepts in Latent Means Models

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.

    2013-01-01

    Latent means methods such as multiple-indicator multiple-cause (MIMIC) and structured means modeling (SMM) allow researchers to determine whether or not a significant difference exists between groups' factor means. Strong invariance is typically recommended when interpreting latent mean differences. The extent of the impact of noninvariant…

  2. Groundwater monitoring of hydraulic fracturing in California: Recommendations for permit-required monitoring

    NASA Astrophysics Data System (ADS)

    Esser, B. K.; Beller, H. R.; Carroll, S.; Cherry, J. A.; Jackson, R. B.; Jordan, P. D.; Madrid, V.; Morris, J.; Parker, B. L.; Stringfellow, W. T.; Varadharajan, C.; Vengosh, A.

    2015-12-01

    California recently passed legislation mandating dedicated groundwater quality monitoring for new well stimulation operations. The authors provided the State with expert advice on the design of such monitoring networks. Factors that must be considered in designing a new and unique groundwater monitoring program include: Program design: The design of a monitoring program is contingent on its purpose, which can range from detection of individual well leakage to demonstration of regional impact. The regulatory goals for permit-required monitoring conducted by operators on a well-by-well basis will differ from the scientific goals of a regional monitoring program conducted by the State. Vulnerability assessment: Identifying factors that increase the probability of transport of fluids from the hydrocarbon target zone to a protected groundwater zone enables the intensity of permit-required monitoring to be tiered by risk and also enables prioritization of regional monitoring of groundwater basins based on vulnerability. Risk factors include well integrity; proximity to existing wellbores and geologic features; wastewater disposal; vertical separation between the hydrocarbon and groundwater zones; and site-specific hydrogeology. Analyte choice: The choice of chemical analytes in a regulatory monitoring program is guided by the goals of detecting impact, assuring public safety, preventing resource degradation, and minimizing cost. Balancing these goals may be best served by a tiered approach in which targeted analysis of specific chemical additives is triggered by significant changes in relevant but more easily analyzed constituents. Such an approach requires characterization of baseline conditions, especially in areas with long histories of oil and gas development. Monitoring technology: Monitoring a deep subsurface process or a long wellbore is more challenging than monitoring a surface industrial source.
The requirement for monitoring multiple groundwater aquifers across a range of depths and of monitoring at deeper depths than is typical for regulatory monitoring programs requires consideration of monitoring technology, which can range from clusters of wells to multiple wells in a single wellbore to multi-level systems in a single cased wellbore.

  3. Speckle Interferometry at the Blanco and SOAR Telescopes in 2008 and 2009

    NASA Technical Reports Server (NTRS)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-01-01

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and Southern Astrophysical Research (SOAR) 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 un-resolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates; typical companion detection capability is Δm ≈ 4.2 at 0.15 arcsec separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  4. SPECKLE INTERFEROMETRY AT THE BLANCO AND SOAR TELESCOPES IN 2008 AND 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-02-15

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and SOAR 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 un-resolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates, typical companion detection capability is Δm ≈ 4.2 at 0.15 arcsec separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  5. Reliable inference of light curve parameters in the presence of systematics

    NASA Astrophysics Data System (ADS)

    Gibson, Neale P.

    2016-10-01

    Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.

  6. Methods to Measure Lipophagy in Yeast.

    PubMed

    Cristobal-Sarramian, A; Radulovic, M; Kohlwein, S D

    2017-01-01

    Maintenance of cellular and organismal lipid homeostasis is critical for life, and any deviation from a balanced equilibrium between fat uptake and degradation may have deleterious consequences, resulting in severe lipid-associated disorders. Excess fat is typically stored in cytoplasmic organelles termed "lipid droplets" (LDs); to adjust for a constantly fluctuating supply of and demand for cellular fat, these organelles are metabolically highly dynamic and subject to multiple levels of regulation. In addition to a well-described cytosolic lipid degradation pathway, recent evidence underscores the importance of "lipophagy" in cellular lipid homeostasis, i.e., the degradation of LD by autophagy in the lysosome/vacuole. Pioneering work in yeast mutant models has unveiled the requirement of key components of the autophagy machinery, providing evidence for a highly conserved process of lipophagy from yeast to man. However, further work is required to unveil the intricate metabolic interaction between LD metabolism and autophagy to sustain membrane homeostasis and cellular survival. © 2017 Elsevier Inc. All rights reserved.

  7. Low cost Ku-band earth terminals for voice/data/facsimile

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.

    1977-01-01

    A Ku-band satellite earth terminal capable of providing two way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal from the antenna to microphones, speakers and facsimile equipment is considered. Component cost versus performance has been projected as a function of size of the procurement and predicted hardware innovations and production techniques through 1985. The lowest cost combinations of components has been determined in a computer optimization algorithm. The system requirements including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.

  8. Feature-based pairwise retinal image registration by radial distortion correction

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

    2007-03-01

    Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since the overlapping region is typically small in a retinal image pair, only a few correspondences are available, limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated during an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction could register retinal image pairs to within 1.88±0.35 pixels accuracy (mean ± standard deviation) assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can achieve good registration accuracy even when the overlap between retinal image pairs is small.
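The combined model can be pictured as a point-mapping pipeline: correct first-order radial distortion about the image center, then apply an affine transform. The sketch below is illustrative, not the authors' code; the parameter names (k1, A, t) and the single-coefficient distortion model are assumptions:

```python
# Illustrative sketch of registering a point from one retinal image to
# another: first-order radial distortion correction about the image center,
# followed by an affine transform. Parameter names are hypothetical.

def undistort(pt, center, k1):
    """First-order radial model: p' = c + (p - c) * (1 + k1 * r^2)."""
    dx, dy = pt[0] - center[0], pt[1] - center[1]
    r2 = dx * dx + dy * dy
    s = 1.0 + k1 * r2
    return (center[0] + dx * s, center[1] + dy * s)

def affine(pt, A, t):
    """Apply a 2x2 linear map A and a translation t."""
    x, y = pt
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

def register(pt, center, k1, A, t):
    """Full mapping: undistort, then affine transform."""
    return affine(undistort(pt, center, k1), A, t)

# Sanity check: identity affine and zero distortion leave a point unchanged.
p = register((120.0, 80.0), center=(128.0, 128.0), k1=0.0,
             A=[[1.0, 0.0], [0.0, 1.0]], t=(0.0, 0.0))
```

In the paper's setting, k1, A, and t would be found by minimizing the vessel-line distance with Powell's method rather than set by hand.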

  9. A method for achieving an order-of-magnitude increase in the temporal resolution of a standard CRT computer monitor.

    PubMed

    Fiesta, Matthew P; Eagleman, David M

    2008-09-15

    As the frequency of a flickering light is increased, the perception of flicker is replaced by the perception of steady light at what is known as the critical flicker fusion threshold (CFFT). This threshold provides a useful measure of the brain's information processing speed, and has been used in medicine for over a century both for diagnostic and drug efficacy studies. However, the hardware for presenting the stimulus has not advanced to take advantage of computers, largely because the refresh rates of typical monitors are too slow to provide fine-grained changes in the alternation rate of a visual stimulus. For example, a cathode ray tube (CRT) computer monitor running at 100Hz will render a new frame every 10 ms, thus restricting the period of a flickering stimulus to multiples of 20 ms. These multiples provide a temporal resolution far too low to make precise threshold measurements, since typical CFFT values are in the neighborhood of 35 ms. We describe here a simple and novel technique to enable alternating images at several closely-spaced periods on a standard monitor. The key to our technique is to programmatically control the video card to dynamically reset the refresh rate of the monitor. Different refresh rates allow slightly different frame durations; this can be leveraged to vastly increase the resolution of stimulus presentation times. This simple technique opens new inroads for experiments on computers that require more finely-spaced temporal resolution than a monitor at a single, fixed refresh rate can allow.
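The quantization argument above is easy to make concrete: at a fixed refresh rate, a flickering stimulus must alternate in whole frames, so its period is restricted to even multiples of the frame duration, while sweeping the refresh rate fills in intermediate periods. The refresh rates below are illustrative choices, not the ones used in the study:

```python
# Sketch of the idea: at a fixed refresh rate, flicker periods are limited
# to multiples of two frame durations; cycling through several refresh
# rates yields much finer coverage near the ~35 ms fusion threshold.

def flicker_periods_ms(refresh_hz, max_period_ms=60.0):
    """Achievable flicker periods (k frames on, k frames off) up to a cap."""
    frame_ms = 1000.0 / refresh_hz
    periods = []
    k = 1
    while 2 * k * frame_ms <= max_period_ms:
        periods.append(2 * k * frame_ms)
        k += 1
    return periods

single_rate = set(flicker_periods_ms(100))     # only 20, 40, 60 ms
multi_rate = set()
for hz in (60, 75, 85, 100, 120):              # dynamically reset rates
    multi_rate.update(flicker_periods_ms(hz))

# Coverage near threshold (30-40 ms) is denser with multiple rates.
near_single = sorted(p for p in single_rate if 30 <= p <= 40)
near_multi = sorted(p for p in multi_rate if 30 <= p <= 40)
```

A 100 Hz monitor alone offers a single period in the 30-40 ms window, whereas even this small set of refresh rates adds intermediate periods there.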

  10. Spacelab mission dependent training parametric resource requirements study

    NASA Technical Reports Server (NTRS)

    Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.

    1976-01-01

    Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.

  11. Case-control analysis in highway safety: Accounting for sites with multiple crashes.

    PubMed

    Gross, Frank

    2013-12-01

    There is an increased interest in the use of epidemiological methods in highway safety analysis. The case-control and cohort methods are commonly used in the epidemiological field to identify risk factors and quantify the risk or odds of disease given certain characteristics and factors related to an individual. This same concept can be applied to highway safety where the entity of interest is a roadway segment or intersection (rather than a person) and the risk factors of interest are the operational and geometric characteristics of a given roadway. One criticism of the use of these methods in highway safety is that they have not accounted for the difference between sites with single and multiple crashes. In the medical field, a disease either occurs or it does not; multiple occurrences are generally not an issue. In the highway safety field, it is necessary to evaluate the safety of a given site while accounting for multiple crashes. Otherwise, the analysis may underestimate the safety effects of a given factor. This paper explores the use of the case-control method in highway safety and two variations to account for sites with multiple crashes. Specifically, the paper presents two alternative methods for defining cases in a case-control study and compares the results in a case study. The first alternative defines a separate case for each crash in a given study period, thereby increasing the weight of the associated roadway characteristics in the analysis. The second alternative defines entire crash categories as cases (sites with one crash, sites with two crashes, etc.) and analyzes each group separately in comparison to sites with no crashes. The results are also compared to a "typical" case-control application, where the cases are simply defined as any entity that experiences at least one crash and controls are those entities without a crash in a given period. 
In a "typical" case-control design, the attributes associated with single-crash segments are weighted the same as the attributes of segments with multiple crashes. The results support the hypothesis that the "typical" case-control design may underestimate the safety effects of a given factor compared to methods that account for sites with multiple crashes. Compared to the first alternative case definition (where multiple crash segments represent multiple cases) the results from the "typical" case-control design are less pronounced (i.e., closer to unity). The second alternative (where case definitions are constructed for various crash categories and analyzed separately) provides further evidence that sites with single and multiple crashes should not be grouped together in a case-control analysis. This paper indicates a clear need to differentiate sites with single and multiple crashes in a case-control analysis. While the results suggest that sites with multiple crashes can be accounted for using a case-control design, further research is needed to determine the optimal method for addressing this issue. This paper provides a starting point for that research. Copyright © 2012 Elsevier Ltd. All rights reserved.
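The effect of the first alternative case definition can be sketched with a toy dataset (all counts below are invented for illustration): counting each crash as a separate case pulls the odds ratio further from unity than the "typical" definition does.

```python
# Toy comparison of two case definitions in a highway case-control study.
# Segments are (exposed_to_risk_factor, crash_count); counts are invented.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a=cases exposed, b=cases unexposed,
    c=controls exposed, d=controls unexposed."""
    return (a * d) / (b * c)

segments = ([(True, 2)] * 10 + [(True, 1)] * 10 + [(True, 0)] * 80
            + [(False, 2)] * 5 + [(False, 1)] * 15 + [(False, 0)] * 180)

controls_exp = sum(1 for e, n in segments if e and n == 0)
controls_unexp = sum(1 for e, n in segments if not e and n == 0)

# "Typical" definition: a case is any segment with at least one crash.
cases_exp = sum(1 for e, n in segments if e and n >= 1)
cases_unexp = sum(1 for e, n in segments if not e and n >= 1)
or_typical = odds_ratio(cases_exp, cases_unexp, controls_exp, controls_unexp)

# Alternative 1: every crash contributes a separate case, so multi-crash
# segments carry proportionally more weight.
crash_cases_exp = sum(n for e, n in segments if e)
crash_cases_unexp = sum(n for e, n in segments if not e)
or_weighted = odds_ratio(crash_cases_exp, crash_cases_unexp,
                         controls_exp, controls_unexp)
```

With these invented counts the crash-weighted odds ratio exceeds the "typical" one, mirroring the paper's finding that the typical design yields estimates closer to unity.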

  12. Insight into multiple-triggering effect in DTSCRs for ESD protection

    NASA Astrophysics Data System (ADS)

    Zhang, Lizhong; Wang, Yuan; Wang, Yize; He, Yandong

    2017-07-01

    The diode-triggered silicon-controlled rectifier (DTSCR) is widely used for electrostatic discharge (ESD) protection in advanced CMOS processes owing to its advantages, such as design simplification, adjustable trigger/holding voltage, and low parasitic capacitance. However, the multiple-triggering effect in the typical DTSCR device may cause an undesirably large overall trigger voltage, which results in a reduced ESD safe margin. In previous research, the major cause is attributed to the higher current level required in the intrinsic SCR. The related discussions indicate that it seems to result from the current division rule between the intrinsic and parasitic SCR formed in the triggering process. In this letter, inserting a large space into the trigger diodes is proposed to get a deeper insight into this issue. The triggering current is observed to be regularly reduced along with the increased space, which confirms that the current division is determined by the parasitic resistance distributed between the intrinsic and parasitic SCR paths. The theoretical analysis is well confirmed by device simulation and transmission line pulse (TLP) test results. The reduced overall trigger voltage is achieved in the modified DTSCR structures due to the comprehensive result of the parasitic resistance versus the triggering current, which indicates a minimized multiple-triggering effect. Project supported by the Beijing Natural Science Foundation, China (No. 4162030).

  13. Is psychology suffering from a replication crisis? What does "failure to replicate" really mean?

    PubMed

    Maxwell, Scott E; Lau, Michael Y; Howard, George S

    2015-09-01

    Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication does not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology's alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single study replications. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
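The sample-size point can be made concrete with the standard normal-approximation formula for a two-sample comparison at two-sided α = 0.05 and 80% power; the effect size d = 0.3 is an illustrative choice, not a value from the article:

```python
# Normal-approximation sample size per group for a two-sample comparison:
# n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, with standard normal
# quantiles z_{0.975} ~ 1.959964 and z_{0.80} ~ 0.841621.
import math

def n_per_group(d, z_alpha=1.959964, z_power=0.841621):
    """Per-group n for two-sided alpha=0.05, 80% power, effect size d."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

# A small-to-medium standardized effect already demands large groups,
# illustrating why single small replication studies are underpowered.
n = n_per_group(0.3)
```

For d = 0.3 this gives roughly 175 subjects per group, a requirement that typically exceeds any single lab's resources and motivates the pooled, multi-investigator replications the authors advocate.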

  14. Multifocal laser surgery: cutting enhancement by hydrodynamic interactions between cavitation bubbles.

    PubMed

    Toytman, I; Silbergleit, A; Simanovski, D; Palanker, D

    2010-10-01

    Transparent biological tissues can be precisely dissected with ultrafast lasers using optical breakdown in the tight focal zone. Typically, tissues are cut by sequential application of pulses, each of which produces a single cavitation bubble. We investigate the hydrodynamic interactions between simultaneous cavitation bubbles originating from multiple laser foci. Simultaneous expansion and collapse of cavitation bubbles can enhance the cutting efficiency by increasing the resulting deformations in tissue and the associated rupture zone. An analytical model of the flow induced by the bubbles is presented and experimentally verified. The threshold strain of the material rupture is measured in a model tissue. Using the computational model and the experimental value of the threshold strain, one can compute the shape of the rupture zone in tissue resulting from application of multiple bubbles. With a threshold strain of 0.7, two simultaneous bubbles produce a continuous cut when applied at a distance 1.35 times greater than that required in the sequential approach. Simultaneous focusing of the laser in multiple spots along the line of intended cut can extend this ratio to 1.7. Counterpropagating jets forming during the collapse of two bubbles in materials with low viscosity can further extend the cutting zone, up to approximately a factor of 1.5.

  15. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
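The abstract does not give the exact boosting curve, but the general idea of sigmoidal boosting followed by weighted fusion can be sketched as follows; the midpoint, gain, and blending weight are all assumptions made for illustration:

```python
# Illustrative sketch only (not the paper's exact algorithm): a sigmoid
# lifts the dark tones of a normalized short-exposure pixel value before
# blending it with the corresponding long-exposure value.
import math

def sigmoid_boost(v, midpoint=0.25, gain=10.0):
    """Map a normalized pixel value v in [0, 1] through a sigmoid curve."""
    return 1.0 / (1.0 + math.exp(-gain * (v - midpoint)))

def fuse(short_v, long_v, w_short=0.5):
    """Blend the boosted short exposure with the long exposure."""
    return w_short * sigmoid_boost(short_v) + (1.0 - w_short) * long_v

boosted = sigmoid_boost(0.25)   # the midpoint maps to 0.5
fused = fuse(0.25, 0.8)
```

In the actual method this operates on JPEG DCT blocks rather than individual spatial-domain pixels, which is what keeps the memory footprint to a single macroblock.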

  16. Understanding nonlinear vibration behaviours in high-power ultrasonic surgical devices

    PubMed Central

    Mathieson, Andrew; Cardoni, Andrea; Cerisola, Niccolò; Lucas, Margaret

    2015-01-01

    Ultrasonic surgical devices are increasingly used in oral, craniofacial and maxillofacial surgery to cut mineralized tissue, offering the surgeon high accuracy with minimal risk to nerve and vessel tissue. Power ultrasonic devices operate in resonance, requiring their length to be a half-wavelength or multiple-half-wavelength. For bone surgery, devices based on a half-wavelength have seen considerable success, but longer multiple-half-wavelength endoscopic devices have recently been proposed to widen the range of surgeries. To provide context for these developments, some examples of surgical procedures and the associated designs of ultrasonic cutting tips are presented. However, multiple-half-wavelength components, typical of endoscopic devices, have greater potential to exhibit nonlinear dynamic behaviours that have a highly detrimental effect on device performance. Through experimental characterization of the dynamic behaviour of endoscopic devices, it is demonstrated how geometrical features influence nonlinear dynamic responses. Period doubling, a known route to chaotic behaviour, is shown to be significantly influenced by the cutting tip shape, whereas the cutting tip has only a limited effect on Duffing-like responses, particularly the shape of the hysteresis curve, which is important for device stability. These findings underpin design, aiming to pave the way for a new generation of ultrasonic endoscopic surgical devices. PMID:27547081

  17. Against the odds? De novo structure determination of a pilin with two cysteine residues by sulfur SAD.

    PubMed

    Gorgel, Manuela; Bøggild, Andreas; Ulstrup, Jakob Jensen; Weiss, Manfred S; Müller, Uwe; Nissen, Poul; Boesen, Thomas

    2015-05-01

    Exploiting the anomalous signal of the intrinsic S atoms to phase a protein structure is advantageous, as ideally only a single well diffracting native crystal is required. However, sulfur is a weak anomalous scatterer at the typical wavelengths used for X-ray diffraction experiments, and therefore sulfur SAD data sets need to be recorded with a high multiplicity. In this study, the structure of a small pilin protein was determined by sulfur SAD despite several obstacles such as a low anomalous signal (a theoretical Bijvoet ratio of 0.9% at a wavelength of 1.8 Å), radiation damage-induced reduction of the cysteines and a multiplicity of only 5.5. The anomalous signal was improved by merging three data sets from different volumes of a single crystal, yielding a multiplicity of 17.5, and a sodium ion was added to the substructure of anomalous scatterers. In general, all data sets were balanced around the threshold values for a successful phasing strategy. In addition, a collection of statistics on structures from the PDB that were solved by sulfur SAD is presented and compared with our data. Looking at the quality indicator R(anom)/R(p.i.m.), an inconsistency in the documentation of the anomalous R factor is noted and reported.

  18. Multiple pure tone noise prediction

    NASA Astrophysics Data System (ADS)

    Han, Fei; Sharma, Anupam; Paliath, Umesh; Shieh, Chingwei

    2014-12-01

    This paper presents a fully numerical method for predicting multiple pure tones, also known as “Buzzsaw” noise. It consists of three steps that account for noise source generation, nonlinear acoustic propagation with hard as well as lined walls inside the nacelle, and linear acoustic propagation outside the engine. Noise generation is modeled by steady, part-annulus computational fluid dynamics (CFD) simulations. A linear superposition algorithm is used to construct full-annulus shock/pressure pattern just upstream of the fan from part-annulus CFD results. Nonlinear wave propagation is carried out inside the duct using a pseudo-two-dimensional solution of Burgers' equation. Scattering from nacelle lip as well as radiation to farfield is performed using the commercial solver ACTRAN/TM. The proposed prediction process is verified by comparing against full-annulus CFD simulations as well as against static engine test data for a typical high bypass ratio aircraft engine with hardwall as well as lined inlets. Comparisons are drawn against nacelle unsteady pressure transducer measurements at two axial locations as well as against near- and far-field microphone array measurements outside the duct. This is the first fully numerical approach (no experimental or empirical input is required) to predict multiple pure tone noise generation, in-duct propagation and far-field radiation. It uses measured blade coordinates to calculate MPT noise.
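The in-duct nonlinear propagation step is built on Burgers' equation; its simplest inviscid one-dimensional form is shown below (the paper's pseudo-two-dimensional formulation adds duct-geometry and lining terms not reproduced here):

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0
```

The nonlinear term $u\,\partial u/\partial x$ is what steepens and coalesces the rotor-locked shock pattern as it travels up the inlet, redistributing energy into multiples of the shaft rotation frequency and producing the characteristic multiple-pure-tone ("buzzsaw") spectrum.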

  19. Arrhythmogenic Right Ventricular Cardiomyopathy with Multiple Thrombi and Ventricular Tachycardia of Atypical Left Bundle Branch Block Morphology.

    PubMed

    Gong, Shenzhen; Wei, Xin; Liu, Guyue; Wu, Feng; Chen, Xiaoping

    2018-04-06

    A 61-year-old male patient was admitted to our hospital with recurrent palpitations and syncope. Electrocardiography, echocardiography, and contrast-enhanced computed tomography were performed. The patient was diagnosed with arrhythmogenic right ventricular cardiomyopathy (ARVC) complicated by multiple thrombi, and ventricular tachycardia (VT) without typical left bundle branch block (LBBB) morphology. This case suggests that VT is not always the sole contributor to syncope and death in patients with ARVC, and pulmonary embolism should be considered. Furthermore, VT with typical LBBB morphology is not an absolute necessity as a major criterion for the diagnosis of ARVC when the right heart is extremely enlarged.

  20. Longitudinal gradient coils with enhanced radial uniformity in restricted diameter: Single-current and multiple-current approaches.

    PubMed

    Romero, Javier A; Domínguez, Gabriela A; Anoardo, Esteban

    2017-03-01

    An important requirement for a gradient coil is that the uniformity of the generated magnetic field gradient be maximal within the active volume of the coil. For a cylindrical geometry, the radial uniformity of the gradient becomes critical, particularly in cases where the gradient unit has to be designed to fit into the inner bore of a compact magnet of reduced dimensions, like those typically used in fast-field-cycling NMR. In this paper we present two practical solutions aimed at fulfilling this requirement. We propose a matrix-inversion optimization algorithm based on the Biot-Savart law that, using a proper cost function, allows maximizing the uniformity of the gradient and the power efficiency. The methodology and the simulation code were validated in a single-current design by comparing the computer-simulated field map with experimental data measured in a real prototype. After comparing the obtained results with the target-field approach, a multiple-element coil driven by independent current sources is discussed and a real prototype evaluated. Opposed equispaced independent windings are connected in pairs, forming an arrangement of independent anti-Helmholtz units. This last coil achieves a gradient uniformity better than 5% over 80% of its radial dimension. The design also provides an adaptable region of uniformity along with adjustable coil efficiency. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Memory-assisted quantum key distribution resilient against multiple-excitation effects

    NASA Astrophysics Data System (ADS)

    Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen

    2018-01-01

    Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.

  2. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithms consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
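
    The power argument can be made concrete with a toy model of the round-robin duty cycle. The scheduler and wattage numbers below are hypothetical illustrations of the cycling idea, not the CT2WS implementation:

```python
from itertools import cycle

def duty_cycle_plan(sensor_ids, bursts):
    """Round-robin plan: which sensor is powered for each burst of frames."""
    rr = cycle(sensor_ids)
    return [next(rr) for _ in range(bursts)]

def average_sensing_power(n_sensors, p_active, p_sleep=0.0):
    """With one sensor active at a time, average power is p_active plus the
    sleep draw of the other N-1 sensors, instead of N * p_active always-on."""
    return p_active + (n_sensors - 1) * p_sleep

plan = duty_cycle_plan([0, 1, 2, 3], bursts=8)
# 4 hypothetical cameras at 2.5 W each; powered-down sensors draw ~0 W.
saving = (4 * 2.5) / average_sensing_power(4, p_active=2.5)
print(plan, saving)   # approaches the N-fold reduction as sleep power -> 0
```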

  3. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of a component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer, wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and maximization of the trailing-edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
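
    The basic differential evolution loop referenced above can be sketched on a toy single-objective problem. This is a generic DE/rand/1/bin illustration with assumed control parameters, not the multiple-objective method of the companion lecture notes:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Three distinct population members, none equal to i.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            mask = rng.random(len(lo)) < CR                # binomial crossover
            mask[rng.integers(len(lo))] = True             # at least one gene
            trial = np.where(mask, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                               # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# Toy objective: the sphere function; DE should converge near the origin.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5), (-5, 5)])
print(best_x, best_f)
```

    A multiple-objective variant would replace the scalar greedy selection with a Pareto-dominance criterion.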

  4. The Question-Driven Laboratory Exercise: A New Pedagogy Applied to a Green Modification of Grignard Reagent Formation and Reaction

    ERIC Educational Resources Information Center

    Teixeira, Jennifer M.; Byers, Jessie Nedrow; Perez, Marilu G.; Holman, R. W.

    2010-01-01

    Experimental exercises within second-year-level organic laboratory manuals typically involve a statement of a principle that is then validated by student generation of data in a single experiment. These experiments are structured in the exact opposite order of the scientific method, in which data interpretation, typically from multiple related…

  5. Development of Product Relatedness and Distance Effects in Typical Achievers and in Children with Mathematics Learning Disabilities

    ERIC Educational Resources Information Center

    Rotem, Avital; Henik, Avishai

    2015-01-01

    The current study examined the development of two effects that have been found in single-digit multiplication errors: relatedness and distance. Typically achieving (TA) second, fourth, and sixth graders and adults, and sixth and eighth graders with a mathematics learning disability (MLD) performed a verification task. Relatedness was defined by a…

  6. AlInAsSb separate absorption, charge, and multiplication avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Ren, Min; Maddox, Scott J.; Woodson, Madison E.; Chen, Yaojia; Bank, Seth R.; Campbell, Joe C.

    2016-05-01

    We report AlxIn1-xAsySb1-y separate absorption, charge, and multiplication avalanche photodiodes (APDs) that operate in the short-wavelength infrared spectrum. They exhibit an excess noise factor less than or equal to that of Si and the low dark currents typical of III-V compound APDs.

  7. Aggregating Polytomous DIF Results over Multiple Test Administrations

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Ye, Lei; Isham, Steven

    2018-01-01

    In typical differential item functioning (DIF) assessments, an item's DIF status is not influenced by its status in previous test administrations. An item that has shown DIF at multiple administrations may be treated the same way as an item that has shown DIF in only the most recent administration. Therefore, much useful information about the…

  8. Auditory Training with Multiple Talkers and Passage-Based Semantic Cohesion

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Barney, Erin C.

    2017-01-01

    Purpose: Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level…

  9. Patterns of disturbance at multiple scales in real and simulated landscapes

    Treesearch

    Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosoillo

    2007-01-01

    We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...

  10. Multiple Imputation of Multilevel Missing Data-Rigor versus Simplicity

    ERIC Educational Resources Information Center

    Drechsler, Jörg

    2015-01-01

    Multiple imputation is widely accepted as the method of choice to address item-nonresponse in surveys. However, research on imputation strategies for the hierarchical structures that are typically found in the data in educational contexts is still limited. While a multilevel imputation model should be preferred from a theoretical point of view if…

  11. Emotional and Behavioural Problems in Children with Visual Impairment, Intellectual and Multiple Disabilities

    ERIC Educational Resources Information Center

    Alimovic, S.

    2013-01-01

    Background: Children with multiple impairments have more complex developmental problems than children with a single impairment. Method: We compared children, aged 4 to 11 years, with intellectual disability (ID) and visual impairment to children with single ID, single visual impairment and typical development on "Child Behavior Check…

  12. Probing for the Multiplicative Term in Modern Expectancy-Value Theory: A Latent Interaction Modeling Study

    ERIC Educational Resources Information Center

    Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin

    2012-01-01

    In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…

  13. Smashing the Stovepipe: Leveraging the GMSEC Open Architecture and Advanced IT Automation to Rapidly Prototype, Develop and Deploy Next-Generation Multi-Mission Ground Systems

    NASA Technical Reports Server (NTRS)

    Swenson, Paul

    2017-01-01

    Satellite/payload ground systems are typically highly customized to a specific mission's use cases and utilize hundreds (or thousands!) of specialized point-to-point interfaces for data flows and file transfers. Documenting and tracking these complex interfaces requires extensive development time and extremely high staffing costs; implementing and testing them is even more cost-prohibitive, and documentation often lags behind implementation, resulting in inconsistencies down the road. With expanding threat vectors, IT security, information assurance, and operational security have become key ground system architecture drivers. New federal security-related directives are generated on a daily basis, imposing new requirements on current and existing ground systems, and these mandated activities and data calls typically carry little or no additional funding for implementation. As a result, ground system sustaining engineering groups and information technology staff continually struggle to keep up with the rolling tide of security. Advancing security concerns and shrinking budgets are pushing these large stove-piped ground systems to begin sharing resources, i.e., operational/sysadmin staff, IT security baselines, architecture decisions, or even networks and hosting infrastructure. Refactoring these existing ground systems into multi-mission assets proves extremely challenging due to what is typically very tight coupling between legacy components; as a result, many "multi-mission" operations environments end up simply sharing compute resources and networks due to the difficulty of refactoring into true multi-mission systems. Utilizing continuous integration and rapid system deployment technologies in conjunction with an open-architecture messaging approach allows system engineers and architects to worry less about the low-level details of interfaces between components and configuration of systems. GMSEC messaging is inherently designed to support multi-mission requirements and allows components to aggregate data across multiple homogeneous or heterogeneous satellites or payloads; the highly successful Goddard Science and Planetary Operations Control Center (SPOCC) utilizes GMSEC as the hub for its automation and situational awareness capability. This shifts focus toward getting the ground system to a final configuration-managed baseline, as well as toward multi-mission, big-picture capabilities that help increase situational awareness, promote cross-mission sharing, and establish enhanced fleet management capabilities across all levels of the enterprise.

  14. Statistical sensor fusion analysis of near-IR polarimetric and thermal imagery for the detection of minelike targets

    NASA Astrophysics Data System (ADS)

    Weisenseel, Robert A.; Karl, William C.; Castanon, David A.; DiMarzio, Charles A.

    1999-02-01

    We present an analysis of statistical model based data-level fusion for near-IR polarimetric and thermal data, particularly for the detection of mines and mine-like targets. Typical detection-level data fusion methods, approaches that fuse detections from individual sensors rather than fusing at the level of the raw data, do not account rationally for the relative reliability of different sensors, nor the redundancy often inherent in multiple sensors. Representative examples of such detection-level techniques include logical AND/OR operations on detections from individual sensors and majority vote methods. In this work, we exploit a statistical data model for the detection of mines and mine-like targets to compare and fuse multiple sensor channels. Our purpose is to quantify the amount of knowledge that each polarimetric or thermal channel supplies to the detection process. With this information, we can make reasonable decisions about the usefulness of each channel. We can use this information to improve the detection process, or we can use it to reduce the number of required channels.
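
    The contrast between detection-level and data-level fusion can be sketched as follows. Both rules below are generic illustrations of the idea, not the paper's statistical model for mine detection:

```python
import numpy as np

def or_fusion(detections):
    """Detection-level fusion: declare a target wherever any sensor fired.
    Treats every channel as equally reliable, which is the shortcoming the
    abstract points out."""
    return np.any(detections, axis=0)

def llr_fusion(llrs, threshold=0.0):
    """Data-level statistical fusion (sketch): sum per-channel log-likelihood
    ratios, so a confident, reliable channel (large |LLR|) naturally outweighs
    a weak or noisy one."""
    return np.sum(llrs, axis=0) > threshold

# Two channels, two candidate locations: channel 0 is confident, channel 1 weak.
dets = np.array([[True, False], [False, False]])
llrs = np.array([[2.0, -1.0], [0.5, -0.5]])
print(or_fusion(dets), llr_fusion(llrs))
```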

  15. Reduction of intergranular exchange coupling and grain size for high Ku CoPt-based granular media: Metal-oxide buffer layer and multiple oxide boundary materials

    NASA Astrophysics Data System (ADS)

    Tham, Kim Kong; Kushibiki, Ryosuke; Kamada, Tomonari; Hinata, Shintaro; Saito, Shin

    2018-05-01

    Investigation of the magnetic properties and microstructure of granular media with various multiple oxides as the grain-boundary material is reported. The saturation magnetization (Ms), uniaxial magnetocrystalline anisotropy (Ku), and magnetic grain diameter (GD) of the granular media show a linear correlation with the volume-weighted average of the melting points (Tm) of the constituent oxides (Tmave). The Ku of the magnetic grains (Kugrain) shows a trade-off relation with GD, such that it is a big challenge to satisfy both high Kugrain and small GD by controlling Tmave alone. To obtain a granular medium with appropriate Kugrain and GD and a low degree of intergranular exchange coupling, the combination of Tmave control of the grain-boundary material by mixing oxides and the employment of a buffer layer is required. Here the degree of intergranular exchange coupling is estimated from the slope of the M-H loop around coercivity (α). By applying this technique, a typical granular medium with Kugrain of 1.0×10^7 erg/cm^3, GD of 5.1 nm, and α of 1.2 is realized.

  16. ULTIMATE: a deployable multiple integral field unit for Subaru

    NASA Astrophysics Data System (ADS)

    Ellis, S. C.; Zhelem, Ross; Brown, David; Staszak, Nicholas F.; Lidman, Chris; Nataf, David M.; Casey, Andrew R.; Xavier, Pascal; Sheinis, Andrew; Gillingham, Peter; Tims, Julia; Lawrence, Jon; Bryant, Julia; Sharp, Rob

    2016-08-01

    ULTIMATE is an instrument concept under development at the AAO for the Subaru Telescope, which will have the unique combination of ground-layer adaptive optics feeding multiple deployable integral field units. This will allow ULTIMATE to probe unexplored parameter space, enabling science cases such as the evolution of galaxies at z ~ 0.5 to 1.5 and the dark matter content of the inner part of our Galaxy. ULTIMATE will use Starbugs to position between 7 and 13 IFUs over a 14 × 8 arcmin field of view, provided by a new wide-field corrector. All Starbugs can be positioned simultaneously, to an accuracy of better than 5 milli-arcsec, within the typical slew time of the telescope, allowing for very efficient re-configuration between observations. The IFUs will feed either the near-infrared nuMOIRCS or the visible/near-infrared PFS spectrographs, or both. Future possible upgrades include purpose-built spectrographs and incorporating OH suppression using fibre Bragg gratings. We describe the science case and resulting design requirements, the baseline instrument concept, and the expected performance of the instrument.

  17. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
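
    A minimal sketch of the MSCV computation, assuming non-overlapping windows and a mean absolute difference as the distance measure (the published analysis and its MATLAB protocol may differ in these details):

```python
import numpy as np

def mscv(x, window_sizes):
    """For each window size, measure the mean absolute distance between local
    coefficient-of-variation (CV) estimates and the overall CV of the series."""
    x = np.asarray(x, dtype=float)
    overall_cv = x.std() / x.mean()
    out = {}
    for w in window_sizes:
        n = len(x) // w
        windows = x[: n * w].reshape(n, w)          # non-overlapping windows
        local_cv = windows.std(axis=1) / windows.mean(axis=1)
        out[w] = float(np.mean(np.abs(local_cv - overall_cv)))
    return out

# Hypothetical short series of positive durations (e.g. inter-event intervals).
rng = np.random.default_rng(1)
series = rng.gamma(shape=4.0, scale=0.25, size=256)
print(mscv(series, window_sizes=[4, 8, 16, 32]))
```

    Returning one value per window size gives the multiscale profile that the analysis compares across time series.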

  18. A multichannel model for the self-consistent analysis of coherent transport in graphene nanoribbons.

    PubMed

    Mencarelli, Davide; Pierantoni, Luca; Farina, Marco; Di Donato, Andrea; Rozzi, Tullio

    2011-08-23

    In this contribution, we analyze the multichannel coherent transport in graphene nanoribbons (GNRs) by a scattering matrix approach. We consider the transport properties of GNR devices of a very general form, involving multiple bands and multiple leads. The 2D quantum transport over the whole GNR surface, described by the Schrödinger equation, is strongly nonlinear as it implies calculation of self-generated and externally applied electrostatic potentials, solutions of the 3D Poisson equation. The surface charge density is computed as a balance of carriers traveling through the channel at all of the allowed energies. Moreover, formation of bound charges corresponding to a discrete modal spectrum is observed and included in the model. We provide simulation examples by considering GNR configurations typical for transistor devices and GNR protrusions that find an interesting application as cold cathodes for X-ray generation. With reference to the latter case, a unified model is required in order to couple charge transport and charge emission. However, to a first approximation, these could be considered as independent problems, as in the example. © 2011 American Chemical Society

  19. On the structure of cellular solutions in Rayleigh-Benard-Marangoni flows in small-aspect-ratio containers

    NASA Technical Reports Server (NTRS)

    Dijkstra, Henk A.

    1992-01-01

    Multiple steady flow patterns occur in surface-tension/buoyancy-driven convection in a liquid layer heated from below (Rayleigh-Benard-Marangoni flows). Techniques of numerical bifurcation theory are used to study the multiplicity and stability of two-dimensional steady flow patterns (rolls) in rectangular small-aspect-ratio containers as the aspect ratio is varied. For pure Marangoni flows at moderate Biot and Prandtl number, the transitions occurring when paths of codimension 1 singularities intersect determine to a large extent the multiplicity of stable patterns. These transitions also lead, for example, to Hopf bifurcations and stable periodic flows for a small range in aspect ratio. The influence of the type of lateral walls on the multiplicity of steady states is considered. 'No-slip' lateral walls lead to hysteresis effects and typically restrict the number of stable flow patterns (with respect to 'slippery' sidewalls) through the occurrence of saddle node bifurcations. In this way 'no-slip' sidewalls induce a selection of certain patterns, which typically have the largest Nusselt number, through secondary bifurcation.

  20. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Electricity generated by hydropower plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPPs provide the largest share of electricity generation from renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar leads to lowered inertia in the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. Multiple run-of-the-river (ROR) HPPs can be integrated with scalable, multi-time-step energy storage so that the total output can be controlled. Although the size of a single energy storage unit is far smaller than that of a typical reservoir, cohesively managing multiple sets of energy storage distributed across different locations is proposed; the combined ratings of the storage units and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size; supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of integrated ROR HPPs with energy storage is provided. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.

  1. Running of the scalar spectral index in bouncing cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehners, Jean-Luc; Wilson-Ewing, Edward, E-mail: jean-luc.lehners@aei.mpg.de, E-mail: wilson-ewing@aei.mpg.de

    We calculate the running of the scalar spectral index in the ekpyrotic and matter bounce cosmological scenarios, and find that it is typically negative for ekpyrotic models, while it is typically positive for realizations of the matter bounce where multiple fields are present. This can be compared to inflation, where the observationally preferred models typically predict a negative running. The magnitude of the running is expected to be between 10^−4 and up to 10^−2, leading in some cases to interesting expectations for near-future observations.

  2. Multiple lobes in the far-field distribution of terahertz quantum-cascade lasers due to self-interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Röben, B., E-mail: roeben@pdi-berlin.de; Wienold, M.; Schrottke, L.

    2016-06-15

    The far-field distribution of the emission intensity of terahertz (THz) quantum-cascade lasers (QCLs) frequently exhibits multiple lobes instead of a single-lobed Gaussian distribution. We show that such multiple lobes can result from self-interference related to the typically large beam divergence of THz QCLs and the presence of an inevitable cryogenic operation environment including optical windows. We develop a quantitative model to reproduce the multiple lobes. We also demonstrate how a single-lobed far-field distribution can be achieved.

  3. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  4. Light chains removal by extracorporeal techniques in acute kidney injury due to multiple myeloma: a position statement of the Onconephrology Work Group of the Italian Society of Nephrology.

    PubMed

    Fabbrini, P; Finkel, K; Gallieni, M; Capasso, G; Cavo, M; Santoro, A; Pasquali, S

    2016-12-01

    Acute kidney injury (AKI) is a frequent complication of multiple myeloma and is associated with increased short-term mortality. Additionally, even a single episode of AKI can eventually lead to end-stage renal disease (ESRD), significantly reducing quality of life and long-term survival. In the setting of multiple myeloma, severe AKI (requiring dialysis) is typically secondary to cast nephropathy (CN). Renal injury in CN is due to intratubular obstruction from precipitation of monoclonal serum free light chains (sFLC) as well as direct tubular toxicity of sFLC via stimulation of nuclear factor (NF)κB inflammatory pathways. Current mainstays of CN treatment are early removal of precipitating factors such as nephrotoxic drugs, acidosis and dehydration, together with rapid reduction of sFLC levels. Introduction of the proteasome inhibitor bortezomib has significantly improved the response rates in multiple myeloma due to its ability to rapidly reduce sFLC levels and has been referred to as "renoprotective" therapy. As an adjunct to chemotherapy, several new extracorporeal techniques have raised interest as a further means to reduce sFLC concentrations in the treatment of CN. Whether addition of extracorporeal therapies to renoprotective therapy can result in better renal recovery is still a matter of debate and there are currently no guidelines in this field. In this position paper, we offer an overview of the available data and the authors' perspectives on extracorporeal treatments in CN.

  5. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
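
    A simplified spatial-color joint probability function can be sketched as follows. Unlike the paper's color edge co-occurrence histogram, this toy version counts all horizontally adjacent pixel pairs rather than only edge pixels, and it skips perceptual color naming:

```python
import numpy as np

def color_cooccurrence_hist(img, bins=4):
    """Joint probability of quantized color pairs at a one-pixel horizontal
    displacement (a simplified spatial-color joint probability function)."""
    q = (np.asarray(img) // (256 // bins)).astype(int)        # quantize channels
    codes = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2] # one code per pixel
    left, right = codes[:, :-1].ravel(), codes[:, 1:].ravel()
    n_codes = bins ** 3
    hist = np.zeros((n_codes, n_codes))
    np.add.at(hist, (left, right), 1)                         # accumulate pairs
    return hist / hist.sum()                                  # normalize to a pmf

# Tiny synthetic "flag": two solid color bands, red next to blue.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4] = (255, 0, 0)   # red band
img[:, 4:] = (0, 0, 255)   # blue band
H = color_cooccurrence_hist(img)
print(H.max())  # mass concentrates on same-color pairs within each band
```

    Matching then amounts to comparing the model image's histogram against histograms of candidate windows in the search image.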

  6. Hybrid estimation of complex systems.

    PubMed

    Hofbaur, Michael W; Williams, Brian C

    2004-10-01

    Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
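
    The mode re-weighting step at the heart of an MM filter bank can be sketched in isolation. The two-mode example below is hypothetical and elides the per-mode Kalman filter updates that would produce the innovations:

```python
import numpy as np

def mm_mode_update(prior, likelihoods):
    """One multiple-model step: re-weight discrete mode probabilities by each
    mode filter's measurement likelihood, then renormalize (Bayes' rule).
    With thousands of modes, maintaining one filter per mode becomes the
    bottleneck the abstract describes."""
    post = np.asarray(prior, dtype=float) * np.asarray(likelihoods, dtype=float)
    return post / post.sum()

def gaussian_like(residual, S):
    """Likelihood of a filter innovation `residual` with innovation variance S."""
    return np.exp(-0.5 * residual**2 / S) / np.sqrt(2 * np.pi * S)

# Two hypothesized modes (e.g. nominal vs. failed actuator), each mode's filter
# producing a different innovation; the measurement fits mode 0 far better.
prior = [0.5, 0.5]
residuals = [0.1, 3.0]
S = 1.0
post = mm_mode_update(prior, [gaussian_like(r, S) for r in residuals])
print(post)  # probability mass shifts toward mode 0
```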

  7. A primer on medical education in the United States through the lens of a current resident physician.

    PubMed

    Mowery, Yvonne M

    2015-10-01

    Physician training and standards for medical licensure differ widely across the globe. The medical education process in the United States (US) typically involves a minimum of 11 years of formal training and multiple standardized examinations between graduating from secondary school and becoming an attending physician with full medical licensure. Students in the US traditionally enter a 4-year medical school after completing an undergraduate bachelor's degree, in contrast to most other countries where medical training begins after graduation from high school. Medical school seniors planning to practice medicine in the US must complete postgraduate clinical training, referred to as residency, within the specialty of their choosing. The duration of residency varies depending on specialty, typically lasting between 3 and 7 years. For subspecialty fields, additional clinical training is often required in the form of a fellowship. Many experts have called for changes in the medical education system to shorten medical training in the US, and reforms are ongoing in some institutions. However, physician education in the US generally remains a progression from undergraduate premedical coursework to 4 years of medical school, followed by residency training with an optional subspecialty fellowship.

  8. Application of the SNoW machine learning paradigm to a set of transportation imaging problems

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir

    2012-01-01

    Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well defined class boundaries and, due to high traffic volumes in most applications, massive amounts of roadway data are available. Though these classes tend to be well defined, the particular image noise and variation can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable in most applications. For the front seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multiple class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.

  9. A primer on medical education in the United States through the lens of a current resident physician

    PubMed Central

    2015-01-01

    Physician training and standards for medical licensure differ widely across the globe. The medical education process in the United States (US) typically involves a minimum of 11 years of formal training and multiple standardized examinations between graduating from secondary school and becoming an attending physician with full medical licensure. Students in the US traditionally enter a 4-year medical school after completing an undergraduate bachelor’s degree, in contrast to most other countries where medical training begins after graduation from high school. Medical school seniors planning to practice medicine in the US must complete postgraduate clinical training, referred to as residency, within the specialty of their choosing. The duration of residency varies depending on specialty, typically lasting between 3 and 7 years. For subspecialty fields, additional clinical training is often required in the form of a fellowship. Many experts have called for changes in the medical education system to shorten medical training in the US, and reforms are ongoing in some institutions. However, physician education in the US generally remains a progression from undergraduate premedical coursework to 4 years of medical school, followed by residency training with an optional subspecialty fellowship. PMID:26623123

  10. A primer on medical education in the United States through the lens of a current resident physician

    PubMed Central

    2015-01-01

    Physician training and standards for medical licensure differ widely across the globe. The medical education process in the United States (US) typically involves a minimum of 11 years of formal training and multiple standardized examinations between graduating from secondary school and becoming an attending physician with full medical licensure. Students in the US traditionally enter a 4-year medical school after completing an undergraduate bachelor’s degree, in contrast to most other countries where medical training begins after graduation from high school. Medical school seniors planning to practice medicine in the US must complete postgraduate clinical training, referred to as residency, within the specialty of their choosing. The duration of residency varies depending on specialty, typically lasting between 3 and 7 years. For subspecialty fields, additional clinical training is often required in the form of a fellowship. Many experts have called for changes in the medical education system to shorten medical training in the US, and reforms are ongoing in some institutions. However, physician education in the US generally remains a progression from undergraduate premedical coursework to 4 years of medical school, followed by residency training with an optional subspecialty fellowship. PMID:26605316

  11. An overview of learning disabilities: psychoeducational perspectives.

    PubMed

    Johnson, D J

    1995-01-01

    In general, people with learning disabilities are a heterogeneous population that require a multidisciplinary evaluation and careful, well-planned intervention. Despite this heterogeneity, patterns of problems often co-occur. Therefore, diagnosticians and educators should look beyond single areas of achievement such as reading or arithmetic. In addition, problems in one area of learning typically have secondary impacts on higher levels of learning. That is, comprehension problems typically interfere with expression. Every effort should be made to examine patterns of problems and to avoid fragmentation of services so that each area of underachievement is not treated separately. Although learning disabilities usually interfere with school performance, they are not simply academic handicaps. They interfere with certain social activities as well as occupational pursuits. In many instances, they impact on mental health and self-esteem. Therefore, students need multiple services. And, as emphasized throughout this journal issue, learning disabled individuals may have comorbid conditions such as attention deficit disorder, depression, and neurologic problems. Furthermore, the problems may change over time. Children may first be identified because of language comprehension problems but later have reading or mathematics difficulty. With intervention, oral expressive problems may be alleviated but may be manifested later in written language.

  12. Ultrasonographic findings in hereditary neuropathy with liability to pressure palsies.

    PubMed

    Bayrak, Ayse O; Bayrak, Ilkay Koray; Battaloglu, Esra; Ozes, Burcak; Yildiz, Onur; Onar, Musa Kazim

    2015-02-01

    The aims of this study were to evaluate the sonographic findings of patients with hereditary neuropathy with liability to pressure palsies (HNPP) and to examine the correlation between sonographic and electrophysiological findings. Nine patients whose electrophysiological findings indicated HNPP and whose diagnosis was confirmed by genetic analysis were enrolled in the study. The median, ulnar, peroneal, and tibial nerves were evaluated by ultrasonography. We ultrasonographically evaluated 18 median, ulnar, peroneal, and tibial nerves. Nerve enlargement was identified in the median, ulnar, and peroneal nerves at the typical sites of compression. None of the patients had nerve enlargement at a site of noncompression. None of the tibial nerves had increased cross-sectional area (CSA) values. There were no significant differences in median, ulnar, and peroneal nerve distal motor latencies (DMLs) between the patients with an increased CSA and those with a normal CSA. In most cases, there was no correlation between electrophysiological abnormalities and clinical or sonographic findings. Although multiple nerve enlargements at typical entrapment sites on sonographic evaluation can suggest HNPP, ultrasonography cannot be used as a diagnostic tool for HNPP. Ultrasonography may contribute to the differential diagnosis of HNPP and other demyelinating polyneuropathies or compression neuropathies; however, further studies are required.

  13. Imaging spectrometer concepts for next-generation planetary missions

    NASA Technical Reports Server (NTRS)

    Herring, M.; Juergens, D. W.; Kupferman, P. N.; Vane, G.

    1984-01-01

    In recent years there has been an increasing interest in the imaging spectrometer concept, in which imaging is accomplished in multiple, contiguous spectral bands at typical intervals of 5 to 20 nm. There are two implementations of this concept under consideration for upcoming planetary missions. One is the scanning, or 'whisk-broom' approach, in which each picture element (pixel) of the scene is spectrally dispersed onto a linear array of detectors; the spatial information is provided by a scan mirror in combination with the vehicle motion. The second approach is the 'push-broom' imager, in which a line of pixels from the scene is spectrally dispersed onto a two-dimensional (area-array) detector. In this approach, the scan mirror is eliminated, but the optics and focal plane are more complex. This paper discusses the application of these emerging instrument concepts to the planetary program. Key issues are the trade-off between the two types of imaging spectrometer, the available data rate from a typical planetary mission, and the focal-plane cooling requirements. Specific straw-man conceptual designs for the Mars Geoscience/Climatology Orbiter (MGCO) and the Mariner Mark II Comet Rendezvous/Asteroid Flyby (CRAF) missions are discussed.

  14. New developments in the treatment of optic neuritis

    PubMed Central

    Jenkins, Thomas M; Toosy, Ahmed T

    2010-01-01

    Acute optic neuritis (ON) has various etiologies. The most common presentation is inflammatory, demyelinating, idiopathic, or “typical” ON, which may be associated with multiple sclerosis. This must be differentiated from “atypical” causes of ON, which differ in their clinical presentation, natural history, management, and prognosis. Clinical “red flags” for an atypical cause of ON include absent or persistent pain, exudates and hemorrhages on fundoscopy, very severe, bilateral, or progressive visual loss, and failure to recover. In typical ON, steroids shorten the duration of the attack, but do not influence visual outcome. This is in contrast to atypical ON associated with conditions such as sarcoidosis and neuromyelitis optica, which require aggressive immunosuppression and sometimes plasma exchange. The visual prognosis of typical ON is generally good. The prognosis in atypical ON is more variable. New developments aimed at designing better treatments for patients who fail to recover are discussed, focusing on recent research elucidating mechanisms of damage and recovery in ON. Future therapeutic directions may include enhancing repair processes, such as remyelination or adaptive neuroplasticity, or alternative methods of immunomodulation. Pilot studies investigating the safety and proof-of-principle of stem cell treatment are currently underway. PMID:28539768

  15. Unlocking higher harmonics in atomic force microscopy with gentle interactions.

    PubMed

    Santos, Sergio; Barcons, Victor; Font, Josep; Verdaguer, Albert

    2014-01-01

    In dynamic atomic force microscopy, nanoscale properties are encoded in the higher harmonics. Nevertheless, when gentle interactions and minimal invasiveness are required, these harmonics are typically undetectable. Here, we propose to externally drive an arbitrary number of exact higher harmonics above the noise level. In this way, multiple contrast channels that are sensitive to compositional variations are made accessible. Numerical integration of the equation of motion shows that the external introduction of exact harmonic frequencies does not compromise the fundamental frequency. Thermal fluctuations are also considered within the detection bandwidth of interest and discussed in terms of higher-harmonic phase contrast in the presence and absence of an external excitation of higher harmonics. Higher harmonic phase shifts further provide the means to directly decouple the true topography from that induced by compositional heterogeneity.

  16. Basic design of MRM assays for peptide quantification.

    PubMed

    James, Andrew; Jorgensen, Claus

    2010-01-01

    With the recent availability and accessibility of mass spectrometry for basic and clinical research, the requirement for stable, sensitive, and reproducible assays to specifically detect proteins of interest has increased. Multiple reaction monitoring (MRM) or selective reaction monitoring (SRM) is a highly selective, sensitive, and robust assay to monitor the presence and amount of biomolecules. Until recently, MRM was typically used for the detection of drugs and other biomolecules from body fluids. With increased focus on biomarkers and systems biology approaches, researchers in the proteomics field have taken advantage of this approach. In this chapter, we will introduce the reader to the basic principle of designing and optimizing an MRM workflow. We provide examples of MRM workflows for standard proteomic samples and provide suggestions for the reader who is interested in using MRM for quantification.

  17. Sickle Cell Crisis Complicated by Synthetic Cannabinoid Abuse: A Case Report.

    PubMed

    Zheng, Crystal Y; Minniti, Caterina P; Chaitowitz, Mark H

    2016-06-01

    We describe a case of delirium occurring in a hospitalized sickle cell patient. Following admission for a typical pain crisis, the patient continued to report unrelieved pain with marked agitation for several days, despite escalating doses of opioid analgesia, and ultimately required intubation following development of acute chest syndrome (ACS). After some delay, it was discovered that he had been using a synthetic cannabinoid (K2) which may have precipitated his pain crisis and, with hindsight, explained his prolonged period of delirium. Delayed recognition was due to multiple factors, notably the absence of an index of suspicion for this novel drug, the presence of alternate explanations for the patient's altered mental status, and the fact that reliable laboratory screening for synthetic cannabinoids is currently not widely available.

  18. Electric Propulsion for Low Earth Orbit Communication Satellites

    NASA Technical Reports Server (NTRS)

    Oleson, Steven R.

    1997-01-01

    Electric propulsion was evaluated for orbit insertion, satellite positioning, and de-orbit applications on big (hundreds of kilograms) and little (tens of kilograms) low earth orbit communication satellite constellations. A simple, constant circumferential thrusting method was used. This technique eliminates the complex guidance and control required when shading of the solar arrays must be considered. Power for propulsion was assumed to come from the existing payload power. Since the low masses of these satellites enable multiple spacecraft per launch, the ability to add spacecraft to a given launch was used as a figure of merit. When compared to chemical propulsion, ammonia resistojets, ion, Hall, and pulsed plasma thrusters allowed an additional spacecraft per launch. Typical orbit insertion and de-orbit times were found to range from a few days to a few months.

  19. Following the clues to neuropathic pain. Distribution and other leads reveal the cause and the treatment approach.

    PubMed

    Belgrade, M J

    1999-11-01

    Neuropathic pain can seem enigmatic at first because it can last indefinitely and often a cause is not evident. However, heightened awareness of typical characteristics, such as the following, makes identification fairly easy:
    - The presence of certain accompanying conditions (e.g., diabetes, HIV or herpes zoster infection, multiple sclerosis)
    - Pain described as shooting, stabbing, lancinating, burning, or searing
    - Pain worse at night
    - Pain following anatomic nerve distribution
    - Pain in a numb or insensate site
    - The presence of allodynia
    Neuropathic pain responds poorly to standard pain therapies and usually requires specialized medications (e.g., anticonvulsants, tricyclic antidepressants, opioid analgesics) for optimal control. Successful pain control is enhanced with use of a systematic approach consisting of disease modification, local or regional measures, and systemic therapy.

  20. Tipping elements in the Arctic marine ecosystem.

    PubMed

    Duarte, Carlos M; Agustí, Susana; Wassmann, Paul; Arrieta, Jesús M; Alcaraz, Miquel; Coello, Alexandra; Marbà, Núria; Hendriks, Iris E; Holding, Johnna; García-Zarandona, Iñigo; Kritzberg, Emma; Vaqué, Dolors

    2012-02-01

    The Arctic marine ecosystem contains multiple elements that present alternative states, the most obvious of which is an Arctic Ocean largely covered by an ice sheet in summer versus one largely devoid of such cover. Ecosystems under pressure typically shift between such alternative states in an abrupt, rather than smooth, manner; the level of forcing required for such a shift is termed the threshold or tipping point. Loss of Arctic ice due to anthropogenic climate change is accelerating, with the extent of Arctic sea ice displaying increased variance at present, a leading indicator of the proximity of a possible tipping point. Reduced ice extent is expected, in turn, to set a number of additional tipping elements, physical, chemical, and biological, in motion, with potentially large impacts on the Arctic marine ecosystem.

  1. A study on the co- and adjacent channel protection requirements for mobile satellite ACSSB modulation

    NASA Technical Reports Server (NTRS)

    Sydor, John T.

    1988-01-01

    Samples of speech modulated by narrowband frequency modulation (NBFM) (cellular) and amplitude companded single sideband (ACSSB) radios were subjected to simulated co- and adjacent channel interference environments typical of proposed frequency division multiple access (FDMA) mobile satellite systems. These samples were then listened to by a group of evaluators whose subjective responses to the samples were used to produce a series of graphs showing the relationship between subjective acceptability, carrier to noise density (C/No), carrier to interference ratio (C/I), and frequency offset. The results show that in a mobile satellite environment, ACSSB deteriorates more slowly than NBFM. The co- and adjacent channel protection ratios for both modulation techniques were roughly the same, even though the mechanism for signal deterioration is different.

  2. [Cutaneous and mucosal manifestations associated with cocaine use].

    PubMed

    Imbernón-Moya, Adrián; Chico, Ricardo; Aguilar-Martínez, Antonio

    2016-06-17

    Complications due to cocaine use are a public health problem. The typical cutaneous disease is leukocytoclastic vasculitis and/or thrombotic vasculopathy affecting mainly the ears. Intense systemic involvement is usually absent, but there may be several cutaneous, mucosal, and systemic manifestations. Associated findings such as arthralgia, neutropenia or agranulocytosis, low-titer positive antinuclear antibodies, antiphospholipid antibody positivity, and antineutrophil cytoplasmic antibodies against multiple antigens support the diagnosis. This disease requires clinical suspicion, a thorough clinical history, a complete physical examination, and a broad differential diagnosis for an early and correct diagnosis. The course is usually self-limited. In most cases the only treatment is discontinuation of cocaine use together with symptomatic treatment; systemic corticosteroids have no proven benefit. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  3. Alert Triage v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doak, Justin E.; Ingram, Joe; Johnson, Josh

    2016-01-06

    In the cyber security operations of a typical organization, data from multiple sources are monitored, and when certain conditions in the data are met, an alert is generated in an alert management system. Analysts inspect these alerts to decide if any deserve promotion to an event requiring further scrutiny. This triage process is manual, time-consuming, and detracts from the in-depth investigation of events. We have created a software system that uses supervised machine learning to automatically prioritize these alerts. In particular we utilize active learning to make efficient use of the pool of unlabeled alerts, thereby improving the performance of our ranking models over passive learning. We have demonstrated the effectiveness of our system on a large, real-world dataset of cyber security alerts.
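
    The pool-based active-learning step described above can be sketched with uncertainty sampling, one common query strategy (the scores and the strategy choice are illustrative assumptions; the abstract does not specify which query strategy the system uses):

```python
import numpy as np

def most_uncertain(probs):
    """Index of the unlabeled alert whose predicted probability of being a
    true event is closest to 0.5, i.e. the one the model is least sure about."""
    return int(np.argmin(np.abs(np.asarray(probs) - 0.5)))

# Classifier scores over four unlabeled alerts (toy numbers):
scores = [0.95, 0.51, 0.10, 0.80]
query = most_uncertain(scores)   # the analyst labels this alert next
```

    Labeling the most ambiguous alert first is what lets the ranking model improve faster than passive (random) labeling of the pool.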

  4. Nuclear magnetic resonance detection and spectroscopy of single proteins using quantum logic

    NASA Astrophysics Data System (ADS)

    Lovchinsky, I.; Sushkov, A. O.; Urbach, E.; de Leon, N. P.; Choi, S.; De Greve, K.; Evans, R.; Gertner, R.; Bersin, E.; Müller, C.; McGuinness, L.; Jelezko, F.; Walsworth, R. L.; Park, H.; Lukin, M. D.

    2016-02-01

    Nuclear magnetic resonance spectroscopy is a powerful tool for the structural analysis of organic compounds and biomolecules but typically requires macroscopic sample quantities. We use a sensor, which consists of two quantum bits corresponding to an electronic spin and an ancillary nuclear spin, to demonstrate room temperature magnetic resonance detection and spectroscopy of multiple nuclear species within individual ubiquitin proteins attached to the diamond surface. Using quantum logic to improve readout fidelity and a surface-treatment technique to extend the spin coherence time of shallow nitrogen-vacancy centers, we demonstrate magnetic field sensitivity sufficient to detect individual proton spins within 1 second of integration. This gain in sensitivity enables high-confidence detection of individual proteins and allows us to observe spectral features that reveal information about their chemical composition.

  5. Electrophysiology Tool Construction

    PubMed Central

    Ide, David

    2016-01-01

    This protocol documents the construction of a custom microscope stage system currently in widespread use by a wide variety of investigators. The current design and construction of this stage is the result of multiple iterations, integrating input from a number of electrophysiologists working with a variety of preparations. Thus, this tool is a generally applicable solution, suitable for a wide array of end-user requirements; its flexible design facilitates rapid and easy configuration, making it useful for multi-user microscopes, as individual researchers can reconfigure the stage system or have their own readily replaceable stage plates. Furthermore, the stage can be manufactured using equipment typically found in small research machine shops, and by keeping the various parts on hand, machinists can quickly satisfy new requests and/or modifications for a wide variety of applications. PMID:23315946

  6. Waste Collector System Technology Comparisons for Constellation Applications

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2006-01-01

    The Waste Collection Systems (WCS) for space vehicles have utilized a variety of hardware for collecting human metabolic wastes. It has typically required multiple missions to resolve crew usability and hardware performance issues that are difficult to duplicate on the ground. New space vehicles should leverage past WCS systems. Past WCS hardware designs are substantially different and unique for each vehicle. However, each WCS can be analyzed and compared as a subset of technologies encompassing fecal collection, urine collection, air systems, and pretreatment systems. Technology components from the WCS of various vehicles can then be combined to reduce hardware mass and volume while maximizing use of previous technology and proven human-equipment interfaces. Analyses of past US and Russian WCS are compared and extrapolated to Constellation missions.

  7. Retrieval of Droplet size Density Distribution from Multiple field of view Cross polarized Lidar Signals: Theory and Experimental Validation

    DTIC Science & Technology

    2016-06-02

    Retrieval of droplet-size density distribution from multiple-field-of-view cross-polarized lidar signals: theory and experimental validation...theoretical and experimental studies of multiple scattering and multiple-field-of-view (MFOV) lidar detection have made possible the retrieval of cloud...droplet cloud are typical of Rayleigh scattering, with a signature close to a dipole (phase function quasi-flat and a zero-depolarization ratio

  8. A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Vicuña, Cristián Molina; Höweler, Christoph

    2017-12-01

    The use of AE in machine failure diagnosis has increased over the last years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction, while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelopes the AE bursts present in the raw AE signal in a triangular shape. The constructed signal, which we call TriSignal, also permits the estimation of most scalar quantities typically used for failure detection. More importantly, it contains the information of the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
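
    The spectral step, a Lomb-Scargle periodogram over an unevenly sampled envelope, can be sketched as follows (the burst envelope below is synthetic and merely mimics an unevenly sampled TriSignal; the actual TriSignal construction from AE bursts is the paper's contribution):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 400))          # uneven sample times [s]
f0 = 2.0                                      # burst repetition rate [Hz]
y = np.abs(np.sin(2 * np.pi * f0 * t))        # toy rectified burst envelope

# Lomb-Scargle accepts uneven sampling directly, where an FFT would not.
freqs = np.linspace(0.1, 5.0, 500)            # trial frequencies [Hz]
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)  # expects rad/s
peak = freqs[np.argmax(pgram)]                # dominant envelope frequency
```

    A rectified sinusoid repeats at twice its carrier frequency, so the periodogram peaks near 2*f0 = 4 Hz; in the paper this same mechanism reveals burst repetition rates tied to fault frequencies.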

  9. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  10. Cross-Platform Learning: On the Nature of Children's Learning from Multiple Media Platforms

    ERIC Educational Resources Information Center

    Fisch, Shalom M.

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several…

  11. Multiple Measures of Outcome in Assessing a Prison-Based Drug Treatment Program

    ERIC Educational Resources Information Center

    Prendergast, Michael L.; Hall, Elizabeth A.; Wexler, Harry K.

    2003-01-01

    Evaluations of prison-based drug treatment programs typically focus on one or two dichotomous outcome variables related to recidivism. In contrast, this paper uses multiple measures of outcomes related to crime and drug use to examine the impact of prison treatment. Crime variables included self-report data of time to first illegal activity,…

  12. From Individualism to Co-Construction and Back Again: Rethinking Research Methodology for Children with Profound and Multiple Learning Disabilities

    ERIC Educational Resources Information Center

    Simmons, Ben; Watson, Debbie

    2015-01-01

    Children with profound and multiple learning disabilities (PMLD) are said to experience severe congenital impairments to consciousness and cognition stemming from neurological damage. Such children are understood as operating at the pre-verbal stages of development, and research in the field typically draws conceptual resources from psychology to…

  13. Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach

    Treesearch

    F. Briggs; B. Lakshminarayanan; L. Neal; X.Z. Fern; R. Raich; S.F. Hadley; A.S. Hadley; M.G. Betts

    2012-01-01

    Although field-collected recordings typically contain multiple simultaneously vocalizing birds of different species, acoustic species classification in this setting has received little study so far. This work formulates the problem of classifying the set of species present in an audio recording using the multi-instance multi-label (MIML) framework for machine learning...

  14. Modeling Differential Item Functioning Using a Generalization of the Multiple-Group Bifactor Model

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rijmen, Frank; Rabe-Hesketh, Sophia

    2013-01-01

    The authors present a generalization of the multiple-group bifactor model that extends the classical bifactor model for categorical outcomes by relaxing the typical assumption of independence of the specific dimensions. In addition to the means and variances of all dimensions, the correlations among the specific dimensions are allowed to differ…

  15. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    NASA Astrophysics Data System (ADS)

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.

  16. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of the Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is considered transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters.
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion; it could be used to invert different data types requiring not only multiple objectives but also multiple physics to describe them.
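The selection principle behind the non-dominated sorting genetic algorithm described above can be sketched in a few lines. The following Python sketch illustrates Pareto non-domination for two hypothetical objectives (e.g. data misfits on two seismic components, lower is better); it is an illustration of the concept, not the authors' implementation.

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (lower is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset (the first Pareto front)."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Five candidate models scored on two objectives.
# (3.0, 3.0) is dominated by (2.0, 2.0); so is (5.0, 4.0).
objectives = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 4.0)]
print(pareto_front(objectives))  # -> [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```

A weighted-sum objective would return only one of these three non-dominated points, which is the bias the vector-objective treatment avoids.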

  17. Genomic Data Quality Impacts Automated Detection of Lateral Gene Transfer in Fungi

    PubMed Central

    Dupont, Pierre-Yves; Cox, Murray P.

    2017-01-01

    Lateral gene transfer (LGT, also known as horizontal gene transfer), an atypical mechanism of transferring genes between species, has almost become the default explanation for genes that display an unexpected composition or phylogeny. Numerous methods of detecting LGT events all rely on two fundamental strategies: primary structure composition or gene tree/species tree comparisons. Discouragingly, the results of these different approaches rarely coincide. With the wealth of genome data now available, detection of laterally transferred genes is increasingly being attempted in large uncurated eukaryotic datasets. However, detection methods depend greatly on the quality of the underlying genomic data, which are typically complex for eukaryotes. Furthermore, given the automated nature of genomic data collection, it is typically impractical to manually verify all protein or gene models, orthology predictions, and multiple sequence alignments, requiring researchers to accept a substantial margin of error in their datasets. Using a test case comprising plant-associated genomes across the fungal kingdom, this study reveals that composition- and phylogeny-based methods have little statistical power to detect laterally transferred genes. In particular, phylogenetic methods reveal extreme levels of topological variation in fungal gene trees, the vast majority of which show departures from the canonical species tree. Therefore, it is inherently challenging to detect LGT events in typical eukaryotic genomes. This finding is in striking contrast to the large number of claims for laterally transferred genes in eukaryotic species that routinely appear in the literature, and questions how many of these proposed examples are statistically well supported. PMID:28235827

  18. Cognitive outcomes in pediatric heart transplant recipients bridged to transplantation with ventricular assist devices.

    PubMed

    Stein, Mary Lynette; Bruno, Jennifer L; Konopacki, Kelly L; Kesler, Shelli; Reinhartz, Olaf; Rosenthal, David

    2013-02-01

    Ventricular assist devices (VADs) have been associated with high rates of neurologic injury in pediatric patients during the period of support, but the delayed consequences of this type of injury have not been described in the literature. In this study we assess cognitive outcomes with indices of general intellectual functioning, including working memory, processing speed, perceptual reasoning and verbal comprehension, for pediatric heart transplant recipients who required VAD support as a bridge to transplant (n = 9). We present an aggregate of these VAD patients combined with heart transplant recipients who did not require mechanical circulatory support (n = 11), and compare the performance of all transplant patients (n = 20) to typically developing, healthy comparators (n = 12). We also present a post hoc analysis of those transplant recipients with significant medical morbidity in the first year of life, referred to as the "high-risk" transplant group (n = 5), and compare them with the "low-risk" transplant group (n = 15) and the typically developing comparators (n = 12). The mean performance of the VAD patients was in the average range for each of the examined indices of cognitive functioning. A total of 11% of the VAD patients performed in the impaired range and 78% performed in the average range, with 11% in the superior range on measures of general intellectual functioning. The typically developing participants performed significantly better than the aggregated transplant recipients on all indices except verbal comprehension. Lower cognitive performance in the combined transplant group appears to be associated with medical morbidity in the first year of life. Despite significant neurologic risk factors, this cohort of pediatric patients who were bridged to transplant with VAD demonstrated resiliency in terms of cognitive outcomes. In this heterogeneous population, it is likely that multiple factors contributed to the cognitive outcomes. 
As VAD use becomes more common in pediatric patients, a prospective evaluation of cognitive outcomes is warranted. Copyright © 2013 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  19. Mathematical and numerical challenges in living biological materials

    NASA Astrophysics Data System (ADS)

    Forest, M. Gregory; Vasquez, Paula A.

    2013-10-01

    The proclaimed Century of Biology is rapidly leading to the realization of how starkly different and more complex biological materials are than the materials that underpinned the industrial and technological revolution. These differences arise, in part, because biological matter exhibits both viscous and elastic behavior. Moreover, this behavior varies across the frequency, wavelength and amplitude spectrum of forcing. This broad class of responses in biological matter requires multiple frequency-dependent functions to specify material behavior, instead of a discrete set of parameters that relate to either viscosity or elasticity. This complexity prevails even if the biological matter is assumed to be spatially homogeneous, which is rarely true. However, very little progress has been made on the characterization of heterogeneity and how to build that information into constitutive laws and predictive models. In addition, most biological matter is non-stationary, which motivates the term "living". Biomaterials typically are in an active state in order to perform certain functions, and they often are modified or replenished on the basis of external stimuli. It has become popular in materials engineering to try to duplicate some of the functionality of biomaterials, e.g., a lot of effort has gone into the design of self-assembling, self-healing and shape-shifting materials. These distinguishing features of biomaterials require significantly more degrees of freedom than traditional composites and many of the molecular species and their roles in functionality have yet to be determined. A typical biological material includes small molecule biochemical species that react and diffuse within larger species. These large molecular-weight species provide the primary structural and biophysical properties of the material.
The small molecule binding and unbinding kinetics serves to modulate material properties, and typical small molecule production and release are governed by external stimuli (e.g., stress). The bottom line is that the mathematical and numerical tools of 20th Century materials science are often insufficient for describing biological materials and for predicting their behavior both in vitro and in vivo.

  20. A high-efficiency HPGe coincidence system for environmental analysis.

    PubMed

    Britton, R; Davies, A V; Burnett, J L; Jackson, M J

    2015-08-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is supported by a network of certified laboratories which must meet certain sensitivity requirements for CTBT-relevant radionuclides. At the UK CTBT Radionuclide Laboratory (GBL15), a high-efficiency, dual-detector gamma spectroscopy system has been developed to improve the sensitivity of measurements for treaty compliance, greatly reducing the time required for each sample. Utilising list-mode acquisition, each sample can be counted once, and processed multiple times to further improve sensitivity. For the 8 key radionuclides considered, Minimum Detectable Activities (MDAs) were improved by up to 37% in standard mode (when compared to a typical CTBT detector system), with the acquisition time required to achieve the CTBT sensitivity requirements reduced from 6 days to only 3. When utilising the system in coincidence mode, the MDA for (60)Co in a high-activity source was improved by a factor of 34 when compared to a standard CTBT detector, and a factor of 17 when compared to the dual-detector system operating in standard mode. These MDA improvements will allow the accurate and timely quantification of radionuclides that decay via both singular and cascade γ emission, greatly enhancing the effectiveness of CTBT laboratories. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
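As a rough illustration of why suppressing background improves sensitivity, the MDA is commonly estimated with Currie's approximation, in which the detection limit scales with the square root of the background counts. The numbers below are illustrative only and are not taken from the paper.

```python
import math

def mda_currie(background_counts, efficiency, live_time_s, gamma_intensity):
    """Minimum Detectable Activity (Bq) via Currie's approximation:
    detection limit L_D = 2.71 + 4.65*sqrt(B) counts, converted to activity
    using the absolute detection efficiency, count time, and gamma yield."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * live_time_s * gamma_intensity)

# Illustrative 3-day count: 10,000 background counts in the peak region,
# 5% efficiency, 99.86% gamma emission probability.
print(mda_currie(background_counts=10000, efficiency=0.05,
                 live_time_s=3 * 86400, gamma_intensity=0.9986))

# Coincidence gating slashes the background term B, which is why the
# coincidence mode reported above improves the MDA so dramatically.
```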

  1. Automated Design of Restraint Layer of an Inflatable Vessel

    NASA Technical Reports Server (NTRS)

    Spexarth, Gary

    2007-01-01

    A Mathcad computer program largely automates the design and analysis of the restraint layer (the primary load-bearing layer) of an inflatable vessel that consists of one or more sections having cylindrical, toroidal, and/or spherical shape(s). A restraint layer typically comprises webbing in the form of multiple straps. The design task includes choosing indexing locations along the straps, computing the load at every location in each strap, computing the resulting stretch at each location, and computing the amount of undersizing required of each strap so that, once the vessel is inflated and the straps thus stretched, the vessel can be expected to assume the desired shape. Prior to the development of this program, the design task was performed by use of a difficult-to-use spreadsheet program that required manual addition of rows and columns depending on the numbers of strap rows and columns of a given design. In contrast, this program is completely parametric and includes logic that automatically adds or deletes rows and columns as needed. With minimal input from the user, this program automatically computes indexing locations, strap lengths, undersizing requirements, and all design data required to produce detailed drawings and assembly procedures. It also generates textual comments that help the user understand the calculations.
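The undersizing calculation that the program automates can be illustrated with a minimal sketch: given the load a strap carries and its stiffness, the unloaded strap is cut short so that, once stretched under inflation load, it reaches the design length. The linear-stiffness model and all values here are assumptions for illustration, not the actual Mathcad implementation.

```python
def undersized_length(design_length, load, axial_stiffness):
    """Unloaded strap length L0 such that L0 * (1 + load/EA) == design_length,
    assuming linear elastic webbing with axial stiffness EA."""
    strain = load / axial_stiffness
    return design_length / (1.0 + strain)

# A strap that must measure 2.000 m while carrying 5 kN, with EA = 500 kN,
# must be cut to about 1.980 m unloaded (1% strain at operating load).
print(undersized_length(2.0, 5e3, 500e3))  # -> ~1.9802
```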

  2. Probability of Loss of Crew Achievability Studies for NASA's Exploration Systems Development

    NASA Technical Reports Server (NTRS)

    Boyer, Roger L.; Bigler, Mark; Rogers, James H.

    2014-01-01

    Over the last few years, NASA has been evaluating various vehicle designs for multiple proposed design reference missions (DRMs) beyond low Earth orbit in support of its Exploration Systems Development (ESD) programs. This paper addresses several of the proposed missions and the analysis techniques used to assess the key risk metric, probability of loss of crew (LOC). Probability of LOC is a metric used to assess safety risk as well as a design requirement. These risk assessments typically cover the concept phase of a DRM, i.e., when little more than a general idea of the mission is known, and are used to help establish "best estimates" for proposed program- and agency-level risk requirements. These assessments, or studies, were categorized as LOC achievability studies to help inform NASA management as to what "ballpark" estimates of probability of LOC could be achieved for each DRM, and they were eventually used to establish the corresponding LOC requirements. Given that details of the vehicles and missions are not well known at this time, the ground rules, assumptions, and consistency across the programs form the essential basis of the assessments and must be understood by decision makers.

  3. Probability of Loss of Crew Achievability Studies for NASA's Exploration Systems Development

    NASA Technical Reports Server (NTRS)

    Boyer, Roger L.; Bigler, Mark; Rogers, James H.

    2015-01-01

    Over the last few years, NASA has been evaluating various vehicle designs for multiple proposed design reference missions (DRMs) beyond low Earth orbit in support of its Exploration Systems Development (ESD) programs. This paper addresses several of the proposed missions and the analysis techniques used to assess the key risk metric, probability of loss of crew (LOC). Probability of LOC is a metric used to assess safety risk as well as a design requirement. These risk assessments typically cover the concept phase of a DRM, i.e., when little more than a general idea of the mission is known, and are used to help establish "best estimates" for proposed program- and agency-level risk requirements. These assessments, or studies, were categorized as LOC achievability studies to help inform NASA management as to what "ballpark" estimates of probability of LOC could be achieved for each DRM, and they were eventually used to establish the corresponding LOC requirements. Given that details of the vehicles and missions are not well known at this time, the ground rules, assumptions, and consistency across the programs form the essential basis of the assessments and must be understood by decision makers.

  4. Si-strip photon counting detectors for contrast-enhanced spectral mammography

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Reiser, Ingrid; Wessel, Jan C.; Malakhov, Nail; Wawrzyniak, Gregor; Hartsough, Neal E.; Gandhi, Thulasi; Chen, Chin-Tu; Iwanczyk, Jan S.; Barber, William C.

    2015-08-01

    We report on the development of silicon strip detectors for energy-resolved clinical mammography. Typically, X-ray integrating detectors based on scintillating cesium iodide CsI(Tl) or amorphous selenium (a-Se) are used in most commercial systems. Recently, mammography instrumentation has been introduced based on photon counting Si strip detectors. The required performance for mammography in terms of the output count rate, spatial resolution, and dynamic range must be obtained with sufficient field of view for the application, thus requiring the tiling of pixel arrays and particular scanning techniques. Room-temperature Si strip detectors, operating as direct-conversion X-ray sensors, can provide the required speed when connected to application-specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel, provided that the sensors are designed for rapid signal formation across the X-ray energy ranges of the application. We present our methods and results from the optimization of Si-strip detectors for contrast-enhanced spectral mammography. We describe the method being developed for quantifying iodine contrast using the energy-resolved detector with fixed thresholds. We demonstrate the feasibility of the method by scanning an iodine phantom with clinically relevant contrast levels.

  5. A 'Global Reference' Comparator for Biosimilar Development.

    PubMed

    Webster, Christopher J; Woollett, Gillian R

    2017-08-01

    Major drug regulators have indicated in guidance their flexibility to accept some development data for biosimilars generated with reference product versions licensed outside their own jurisdictions, but most authorities require new bridging studies between these versions and the versions of them licensed locally. The costs of these studies are not trivial in absolute terms and, due to the multiplier effect of required repetition by each biosimilar sponsor, their collective costs are substantial. Yet versions of biologics licensed in different jurisdictions usually share the same development data, and any manufacturing changes between versions have been justified by a rigorous comparability process. The fact that a biosimilar is usually expected to be licensed in multiple jurisdictions, in each case as similar to the local reference product, confirms that minor analytical differences between versions of reference biologics are typically inconsequential for clinical outcomes and licensing. A greatly simplified basis for selecting a reference comparator, that does not require conducting new bridging studies, is proposed and justified based on the shared data of the reference product versions as well as the proof offered where biosimilars have already been approved. The relevance of this proposal to the interchangeability designation available in the US is discussed.

  6. Sensor Performance Requirements for the Retrieval of Atmospheric Aerosols by Airborne Optical Remote Sensing

    PubMed Central

    Seidel, Felix; Schläpfer, Daniel; Nieke, Jens; Itten, Klaus I.

    2008-01-01

    This study explores performance requirements for the retrieval of the atmospheric aerosol optical depth (AOD) by airborne optical remote sensing instruments. Independent of any retrieval techniques, the calculated AOD retrieval requirements are compared with the expected performance parameters of the upcoming hyperspectral sensor APEX at the reference wavelength of 550nm. The AOD accuracy requirements are defined to be capable of resolving transmittance differences of 0.01 to 0.04 according to the demands of atmospheric corrections for remote sensing applications. For the purposes of this analysis, the signal at the sensor level is simulated by radiation transfer equations. The resulting radiances are translated into the AOD retrieval sensitivity (Δτ_λ^aer) and compared to the available measuring sensitivity of the sensor (NEΔL_λ^sensor). This is done for multiple signal-to-noise ratios (SNR) and surface reflectance values. It is shown that an SNR of 100 is adequate for AOD retrieval at 550nm under typical remote sensing conditions and a surface reflectance of 10% or less. Such dark surfaces require the lowest SNR values and therefore offer the best sensitivity for measuring AOD. Brighter surfaces with up to 30% reflectance require an SNR of around 300. It is shown that AOD retrieval for targets above 50% surface reflectance is more problematic with the current sensor performance as it may require an SNR larger than 1000. In general, feasibility is proven for the analyzed cases under simulated conditions. PMID:27879801

  7. Sensor Performance Requirements for the Retrieval of Atmospheric Aerosols by Airborne Optical Remote Sensing.

    PubMed

    Seidel, Felix; Schläpfer, Daniel; Nieke, Jens; Itten, Klaus I

    2008-03-18

    This study explores performance requirements for the retrieval of the atmospheric aerosol optical depth (AOD) by airborne optical remote sensing instruments. Independent of any retrieval techniques, the calculated AOD retrieval requirements are compared with the expected performance parameters of the upcoming hyperspectral sensor APEX at the reference wavelength of 550nm. The AOD accuracy requirements are defined to be capable of resolving transmittance differences of 0.01 to 0.04 according to the demands of atmospheric corrections for remote sensing applications. For the purposes of this analysis, the signal at the sensor level is simulated by radiation transfer equations. The resulting radiances are translated into the AOD retrieval sensitivity (Δτ_λ^aer) and compared to the available measuring sensitivity of the sensor (NEΔL_λ^sensor). This is done for multiple signal-to-noise ratios (SNR) and surface reflectance values. It is shown that an SNR of 100 is adequate for AOD retrieval at 550nm under typical remote sensing conditions and a surface reflectance of 10% or less. Such dark surfaces require the lowest SNR values and therefore offer the best sensitivity for measuring AOD. Brighter surfaces with up to 30% reflectance require an SNR of around 300. It is shown that AOD retrieval for targets above 50% surface reflectance is more problematic with the current sensor performance as it may require an SNR larger than 1000. In general, feasibility is proven for the analyzed cases under simulated conditions.
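The link between the transmittance accuracy target (0.01 to 0.04) and the required AOD sensitivity can be sketched with the Beer-Lambert law for direct transmittance, T = exp(-τ). This is a deliberate simplification for illustration; the study itself uses full radiative transfer simulations, not this approximation.

```python
import math

def delta_tau_for_delta_T(tau, delta_T):
    """AOD increment needed to reduce direct transmittance T = exp(-tau)
    by delta_T; for small changes this is approximately delta_T / T."""
    T = math.exp(-tau)
    return -math.log((T - delta_T) / T)

# At a moderate aerosol load of tau = 0.2, resolving a transmittance
# difference of 0.01 corresponds to an AOD sensitivity of roughly 0.012.
print(round(delta_tau_for_delta_T(0.2, 0.01), 4))  # -> 0.0123
```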

  8. The emergence of autoclitic frames in atypically and typically developing children as a function of multiple exemplar instruction.

    PubMed

    Luke, Nicole; Greer, R Douglas; Singer-Dudek, Jessica; Keohane, Dolleen-Day

    2011-01-01

    In two experiments, we tested the effect of multiple exemplar instruction (MEI) for training sets on the emergence of autoclitic frames for spatial relations for novel tacts and mands. In Experiment 1, we used a replicated pre- and post-intervention probe design with four students with significant learning disabilities to test for acquisition of four autoclitic frames with novel tacts and mands before and after MEI. The untaught topographies emerged for all participants. In Experiment 2, we used a multiple probe design to test the effects of the MEI procedures on the same responses in four typically developing, bilingual students. The novel usage emerged for all participants. In the latter experiment, the children demonstrated untaught usage of mand or tact frames regardless of whether they were taught to respond in either listener or speaker functions alone or across listener and speaker functions. The findings are discussed in terms of the role of MEI in the formation of abstractions.

  9. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered under constraints that approximated either radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed-objective framework produced more multiple-target search errors; thus, adopting a fixed-duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  10. 29 CFR 1926.1432 - Multiple-crane/derrick lifts-supplemental requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 8 2011-07-01 2011-07-01 false Multiple-crane/derrick lifts-supplemental requirements... Cranes and Derricks in Construction § 1926.1432 Multiple-crane/derrick lifts—supplemental requirements... implementation. (1) The multiple-crane/derrick lift must be directed by a person who meets the criteria for both...

  11. Selective high-affinity polydentate ligands and methods of making such

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denardo, Sally J.; Denardo, Gerald L.; Balhorn, Rodney L.

    This invention provides novel polydentate selective high affinity ligands (SHALs) that can be used in a variety of applications in a manner analogous to the use of antibodies. SHALs typically comprise a multiplicity of ligands that each bind different regions on the target molecule. The ligands are joined directly or through a linker, thereby forming a polydentate moiety that typically binds the target molecule with high selectivity and avidity.

  12. Selective high-affinity polydentate ligands and methods of making such

    DOEpatents

    DeNardo, Sally; DeNardo, Gerald; Balhorn, Rodney

    2013-09-17

    This invention provides polydentate selective high affinity ligands (SHALs) that can be used in a variety of applications in a manner analogous to the use of antibodies. SHALs typically comprise a multiplicity of ligands that each bind different regions on the target molecule. The ligands are joined directly or through a linker, thereby forming a polydentate moiety that typically binds the target molecule with high selectivity and avidity.

  13. Selective high affinity polydentate ligands and methods of making such

    DOEpatents

    DeNardo, Sally; DeNardo, Gerald; Balhorn, Rodney

    2010-02-16

    This invention provides novel polydentate selective high affinity ligands (SHALs) that can be used in a variety of applications in a manner analogous to the use of antibodies. SHALs typically comprise a multiplicity of ligands that each bind different regions on the target molecule. The ligands are joined directly or through a linker, thereby forming a polydentate moiety that typically binds the target molecule with high selectivity and avidity.

  14. The how and why of a $10 optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Leahy, M. J.; Wilson, C.; Hogan, J.; O'Brien, Peter; Dsouza, R.; Neuhaus, K.; Bogue, D.; Subhash, H.; O'Riordan, Colm; McNamara, Paul M.

    2016-03-01

    Optical Coherence Tomography (OCT) is the fastest growing medical imaging modality with more than $1Bln worth of scans ordered and over $400M worth of equipment shipped in 2010, just nine years after its commercialization. It is at various stages of acceptance and approvals for eye care, coronary care and skin cancer care and is spreading rapidly to other medical specialties. Indeed, it is the leading success of translation of biophotonics science into clinical practice. Significant effort is being made to provide sufficient evidence for efficacy across a broad range of applications, but more needs to be done to radically reduce the cost of OCT so that it can spread to underserved markets and address new, fast growing opportunities in mobile health monitoring. Currently, a clinical OCT system ranges in price from $50k to $150k, typically is housed on a bedside trolley, runs off AC power, and requires skilled, extensively trained technicians to operate. The cost, size, and skill level required keep this wonderful technology beyond the reach of mainstream primary care, much less individual consumers seeking to monitor their health on a routine basis outside of typical clinical settings and major urban medical centers. Beyond the first world market, there are 6.5 billion people with similar eye and skin cancer care needs which cannot be met by the current generation of large, expensive, complex, and delicate OCT systems. This paper will describe a means to manufacture a low cost, compact, simple, and robust OCT system, using parts and a configuration similar to a CD-ROM or DVD pickup unit (see figure 1). Essentially, this system—multiple reference OCT (MR-OCT)—is based on the use of a partial mirror in the reference arm of a time domain OCT system to provide multiple references, and hence A-scans, at several depths simultaneously (see figure 2).
We have already shown that a system based on this configuration can achieve an SNR of greater than 90 dB, which is sufficient for many medical imaging and biometry applications.
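For reference, OCT sensitivity figures such as the 90 dB quoted above are signal-to-noise power ratios expressed on a logarithmic scale; the conversion is a one-liner.

```python
def db_to_power_ratio(db):
    """Convert a decibel figure to the corresponding linear power ratio."""
    return 10 ** (db / 10)

# An SNR greater than 90 dB means the signal power exceeds the noise
# power by a factor of at least one billion.
print(db_to_power_ratio(90))  # -> 1000000000.0
```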

  15. Teaching Composition Skills with Weekly Multiple Choice Tests in Lieu of Theme Writing. Final Report.

    ERIC Educational Resources Information Center

    Scannell, Dale P.; Haugh, Oscar M.

    The purpose of the study was to compare the effectiveness with which composition skills could be taught by the traditional theme-assignment approach and by an experimental method using weekly multiple-choice composition tests in lieu of theme writing. The weekly tests were based on original but typical first-draft compositions and covered problems…

  16. Multiple Cranial Nerve Palsies in Giant Cell Arteritis.

    PubMed

    Ross, Michael; Bursztyn, Lulu; Superstein, Rosanne; Gans, Mark

    2017-12-01

    Giant cell arteritis (GCA) is a systemic vasculitis of medium and large arteries often with ophthalmic involvement, including ischemic optic neuropathy, retinal artery occlusion, and ocular motor cranial nerve palsies. This last complication occurs in 2%-15% of patients, but typically involves only 1 cranial nerve. We present 2 patients with biopsy-proven GCA associated with multiple cranial nerve palsies.

  17. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missing values are typically lowered with respect to inbreeding coefficients estimated by discarding the missing values. Accounting for missing values by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
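For context, the classical test for Hardy-Weinberg proportions compares observed genotype counts with the counts expected from the estimated allele frequencies; the paper's point is that running it only on complete cases can bias the result, which multiple imputation addresses. A minimal sketch of the chi-square statistic with illustrative counts (not the study's data):

```python
def hwe_chisq(n_AA, n_Aa, n_aa):
    """Chi-square statistic (1 df) for Hardy-Weinberg proportions:
    expected counts are n*p^2, 2*n*p*q, n*q^2 with p estimated from the data."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # allele frequency of A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A heterozygote deficit, as observed in the colon cancer data after
# discarding missings, inflates the statistic:
print(round(hwe_chisq(400, 400, 200), 3))  # -> 27.778
```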

  18. Neuron’s eye view: Inferring features of complex stimuli from neural responses

    PubMed Central

    Chen, Xin; Beck, Jeffrey M.

    2017-01-01

    Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world—contrast and luminance for vision, pitch and intensity for sound—and assemble a stimulus set that systematically varies along these dimensions. Subsequent analysis of neural responses to these stimuli typically focuses on regression models, with experimenter-controlled features as predictors and spike counts or firing rates as responses. Unfortunately, this approach requires knowledge in advance about the relevant features coded by a given population of neurons. For domains as complex as social interaction or natural movement, however, the relevant feature space is poorly understood, and an arbitrary a priori choice of features may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we are modeling neural activity as driven by multiple simultaneous stimulus features rather than intrinsic neural dynamics. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to cluster neural responses and demonstrate successful recovery of features corresponding to monkeys and faces in the image set. PMID:28827790
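
    The Markov-dynamics assumption at the core of the model can be illustrated with the standard forward algorithm for a discrete hidden Markov chain. The two-state chain and all probabilities below are toy values; this is not the paper's variational Bayesian inference, only the basic likelihood computation that such latent-feature models build on.

```python
# Forward algorithm: likelihood of an observation sequence under a discrete HMM.
# All probabilities here are illustrative toy values.
def forward_likelihood(init, trans, emit, obs):
    # Initialize with P(state) * P(first observation | state).
    alpha = [init[i] * emit[i][obs[0]] for i in range(len(init))]
    for o in obs[1:]:
        # Propagate through the transition matrix, then weight by emissions.
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(init))
        ]
    return sum(alpha)

init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.2, 0.8]]   # feature-state transition probabilities
emit = [[0.8, 0.2], [0.3, 0.7]]    # P(observation symbol | state)
print(forward_likelihood(init, trans, emit, [0, 1]))   # 0.19
```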

  19. Control Algorithms Charge Batteries Faster

    NASA Technical Reports Server (NTRS)

    2012-01-01

    On March 29, 2011, NASA's Mercury Surface, Space Environment, Geochemistry and Ranging (MESSENGER) spacecraft beamed a milestone image to Earth: the first photo of Mercury taken from orbit around the solar system's innermost planet. (MESSENGER is also the first spacecraft to orbit Mercury.) Like most of NASA's deep space probes, MESSENGER is enabled by a complex power system that allows its science instruments and communications to function continuously as it travels millions of miles from Earth. "Typically, there isn't one particular power source that can support the entire mission," says Linda Taylor, electrical engineer in Glenn Research Center's Power Systems Analysis Branch. "If you have solar arrays and you are in orbit, at some point you're going to be in eclipse." Because of this, Taylor explains, spacecraft like MESSENGER feature hybrid power systems. MESSENGER is powered by a two-panel solar array coupled with a nickel hydrogen battery. The solar arrays provide energy to the probe and charge the battery; when the spacecraft's orbit carries it behind Mercury and out of the Sun's light, the spacecraft switches to battery power to continue operations. Typically, hybrid systems with multiple power inputs and a battery acting alternately as storage and a power source require multiple converters to handle the power flow between the devices, Taylor says. (Power converters change the qualities of electrical energy, such as from alternating current to direct current, or between different levels of voltage or frequency.) This contributes to a pair of major concerns for spacecraft design. "Weight and size are big drivers for any space application," Taylor says, noting that every pound added to a space vehicle incurs significant costs. For an innovative solution to managing power flows in a lightweight, cost-effective manner, NASA turned to a private industry partner.

  20. CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il

    2013-05-20

    We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime in the parameter phase space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter phase space for synchrotron emission. Our approach is general and it does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios of 10^-4 < ε_B/ε_e < 10, bulk Lorentz factor values of 300 < Γ < 3000, typical electron Lorentz factor values of 3×10^3 < γ_e < 10^5, and emission radii of the order 10^15 cm < R < 10^17 cm. Most remarkable among those are the rather large values of the emission radius and the electron's Lorentz factor. We find that soft (with peak energy less than 100 keV) but luminous (isotropic luminosity of 1.5×10^53) pulses are inefficient. This may explain the lack of strong soft bursts. In cases when most of the energy is carried by the kinetic energy of the flow, such as in the internal shocks, the synchrotron solution requires that only a small fraction of the electrons are accelerated to relativistic velocities by the shocks. We show that future observations of very high energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.

  1. Novel maximum likelihood approach for passive detection and localisation of multiple emitters

    NASA Astrophysics Data System (ADS)

    Hernandez, Marcel

    2017-12-01

    In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.
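
    As a toy illustration of angle-of-arrival localisation (not the TALA batch estimator itself, which handles data association and an unknown number of targets), two bearing rays from known sensor positions can be intersected by solving a 2x2 linear system. The sensor positions and target location below are invented.

```python
import math

# Locate a target from two AOA bearings by intersecting the bearing rays.
# Solves s1 + t1*d1 == s2 + t2*d2 for t1 via Cramer's rule.
def intersect_bearings(s1, theta1, s2, theta2):
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = d2[0] * d1[1] - d1[0] * d2[1]       # zero if bearings are parallel
    rx, ry = s2[0] - s1[0], s2[1] - s1[1]
    t1 = (d2[0] * ry - d2[1] * rx) / det
    return (s1[0] + t1 * d1[0], s1[1] + t1 * d1[1])

# Sensors at (0,0) and (10,0) both sight an emitter at (5,5).
print(intersect_bearings((0.0, 0.0), math.atan2(5, 5),
                         (10.0, 0.0), math.atan2(5, -5)))
```

    With noisy bearings from many sensors, the same geometry generalises to a least-squares or maximum likelihood fit, which is the setting the TALA addresses.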

  2. Carney Complex

    MedlinePlus

    ... to the condition. Significant freckling without darkly pigmented spots or typical pattern Blue nevus, if multiple and confirmed by biopsy Café-au-lait spots, which are light brown spots on skin, or ...

  3. Microbial genotype-phenotype mapping by class association rule mining.

    PubMed

    Tamura, Makio; D'haeseleer, Patrik

    2008-07-01

    Microbial phenotypes are typically due to the concerted action of multiple gene functions, yet the presence of each gene may have only a weak correlation with the observed phenotype. Hence, it may be more appropriate to examine co-occurrence between sets of genes and a phenotype (multiple-to-one) instead of pairwise relations between a single gene and the phenotype. Here, we propose an efficient class association rule mining algorithm, netCAR, in order to extract sets of COGs (clusters of orthologous groups of proteins) associated with a phenotype from COG phylogenetic profiles and a phenotype profile. netCAR takes into account the phylogenetic co-occurrence graph between COGs to restrict hypothesis space, and uses mutual information to evaluate the biconditional relation. We examined the mining capability of pairwise and multiple-to-one association by using netCAR to extract COGs relevant to six microbial phenotypes (aerobic, anaerobic, facultative, endospore, motility and Gram negative) from 11,969 unique COG profiles across 155 prokaryotic organisms. With the same level of false discovery rate, multiple-to-one association can extract about 10 times more relevant COGs than one-to-one association. We also reveal various topologies of association networks among COGs (modules) from extracted multiple-to-one correlation rules relevant with the six phenotypes; including a well-connected network for motility, a star-shaped network for aerobic and intermediate topologies for the other phenotypes. netCAR outperforms a standard CAR mining algorithm, CARapriori, while requiring several orders of magnitude less computational time for extracting 3-COG sets. Source code of the Java implementation is available as Supplementary Material at the Bioinformatics online website, or upon request to the author. Supplementary data are available at Bioinformatics online.
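
    The mutual-information score that netCAR uses to evaluate the biconditional relation can be sketched for binary presence/absence profiles. The COG and phenotype profiles below are invented, and netCAR's graph-restricted rule search is not shown; this is only the scoring step.

```python
from math import log2
from collections import Counter

def mutual_information(x, y):
    """Mutual information (bits) between two equal-length discrete profiles."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    return sum(
        (c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in pxy.items()
    )

# A COG presence profile that perfectly tracks a 50/50 phenotype profile
# across organisms carries exactly 1 bit of information about it.
cog = [1, 1, 0, 0]
phenotype = [1, 1, 0, 0]
print(mutual_information(cog, phenotype))   # 1.0
```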

  4. Children with mathematical learning disability fail in recruiting verbal and numerical brain regions when solving simple multiplication problems.

    PubMed

    Berteletti, Ilaria; Prado, Jérôme; Booth, James R

    2014-08-01

    Greater skill in solving single-digit multiplication problems requires a progressive shift from a reliance on numerical to verbal mechanisms over development. Children with mathematical learning disability (MD), however, are thought to suffer from a specific impairment in numerical mechanisms. Here we tested the hypothesis that this impairment might prevent MD children from transitioning toward verbal mechanisms when solving single-digit multiplication problems. Brain activations during multiplication problems were compared in MD and typically developing (TD) children (3rd to 7th graders) in numerical and verbal regions which were individuated by independent localizer tasks. We used small (e.g., 2 × 3) and large (e.g., 7 × 9) problems as these problems likely differ in their reliance on verbal versus numerical mechanisms. Results indicate that MD children have reduced activations in both the verbal (i.e., left inferior frontal gyrus and left middle temporal to superior temporal gyri) and the numerical (i.e., right superior parietal lobule including intra-parietal sulcus) regions suggesting that both mechanisms are impaired. Moreover, the only reliable activation observed for MD children was in the numerical region when solving small problems. This suggests that MD children could effectively engage numerical mechanisms only for the easier problems. Conversely, TD children showed a modulation of activation with problem size in the verbal regions. This suggests that TD children were effectively engaging verbal mechanisms for the easier problems. Moreover, TD children with better language skills were more effective at engaging verbal mechanisms. In conclusion, results suggest that the numerical- and language-related processes involved in solving multiplication problems are impaired in MD children. Published by Elsevier Ltd.

  5. Multiple injection mode with or without repeated sample injections: Strategies to enhance productivity in countercurrent chromatography.

    PubMed

    Müller, Marco; Wasmer, Katharina; Vetter, Walter

    2018-06-29

    Countercurrent chromatography (CCC) is an all liquid based separation technique typically used for the isolation and purification of natural compounds. The simplicity of the method makes it easy to scale up CCC separations from analytical to preparative and even industrial scale. However, scale-up of CCC separations requires two different instruments with varying coil dimensions. Here we developed two variants of the CCC multiple injection mode as an alternative to increase the throughput and enhance productivity of a CCC separation when using only one instrument. The concept is based on the parallel injection of samples at different points in the CCC column system and the simultaneous separation using one pump only. The wiring of the CCC setup was modified by the insertion of a 6-port selection valve, multiple T-pieces and sample loops. Furthermore, the introduction of storage sample loops enabled the CCC system to be used with repeated injection cycles. Setup and advantages of both multiple injection modes were shown by the isolation of the furan fatty acid 11-(3,4-dimethyl-5-pentylfuran-2-yl)-undecanoic acid (11D5-EE) from an ethyl ester oil rich in 4,7,10,13,16,19-docosahexaenoic acid (DHA-EE). 11D5-EE was enriched in one step from 1.9% to 99% purity. The solvent consumption per isolated amount of analyte could be reduced by ∼40% compared to increased throughput CCC and by ∼5% in the repeated multiple injection mode which also facilitated the isolation of the major compound (DHA-EE) in the sample. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Web-based reactive transport modeling using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise, nor the time required to obtain the expertise, to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure, comprising (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution.
We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, as well as the planned path forward.

  7. A time-driven, activity-based costing methodology for determining the costs of red blood cell transfusion in patients with beta thalassaemia major.

    PubMed

    Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M

    2018-04-10

    To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.
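
    The core arithmetic of time-driven activity-based costing is to multiply each observed activity time by the cost rate of the resource performing it and sum over the process map. The activities, times, and rates below are purely illustrative, not the study's data.

```python
# Time-driven ABC: episode cost = sum(activity_minutes * resource_rate).
# Activity times (minutes) and cost rates ($/minute) are invented examples,
# not figures from the transfusion study.
activities = [
    ("pre-transfusion checks", 15, 1.20),   # nurse
    ("crossmatch",             30, 0.90),   # laboratory scientist
    ("administration",         60, 1.20),   # nurse
]
total = sum(minutes * rate for _, minutes, rate in activities)
print(round(total, 2))   # 117.0
```

    Sensitivity analysis then amounts to re-running this sum with varied times, rates, or task probabilities, which is why the detailed timed process maps are the key input.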

  8. Pipeline transport and simultaneous saccharification of corn stover.

    PubMed

    Kumar, Amit; Cameron, Jay B; Flynn, Peter C

    2005-05-01

    Pipeline transport of corn stover delivered by truck from the field is evaluated against a range of truck transport costs. Corn stover transported by pipeline at 20% solids concentration (wet basis) or higher could directly enter an ethanol fermentation plant, and hence the investment in the pipeline inlet end processing facilities displaces comparable investment in the plant. At 20% solids, pipeline transport of corn stover costs less than trucking at capacities in excess of 1.4 M drytonnes/yr when compared to a mid range of truck transport cost (excluding any credit for economies of scale achieved in the ethanol fermentation plant from larger scale due to multiple pipelines). Pipelining of corn stover gives the opportunity to conduct simultaneous transport and saccharification (STS). If current enzymes are used, this would require elevated temperature. Heating of the slurry for STS, which in a fermentation plant is achieved from waste heat, is a significant cost element (more than 5 cents/l of ethanol) if done at the pipeline inlet unless waste heat is available, for example from an electric power plant located adjacent to the pipeline inlet. Heat loss in a 1.26 m pipeline carrying 2 M drytonnes/yr is about 5 degrees C at a distance of 400 km in typical prairie clay soils, and would not likely require insulation; smaller pipelines or different soil conditions might require insulation for STS. Saccharification in the pipeline would reduce the need for investment in the fermentation plant, saving about 0.2 cents/l of ethanol. Transport of corn stover in multiple pipelines offers the opportunity to develop a large ethanol fermentation plant, avoiding some of the diseconomies of scale that arise from smaller plants whose capacities are limited by issues of truck congestion.

  9. Use of a mobile device in mental health rehabilitation: A clinical and comprehensive analysis of 11 cases.

    PubMed

    Briand, Catherine; Sablier, Juliette; Therrien, Julie-Anne; Charbonneau, Karine; Pelletier, Jean-François; Weiss-Lambrou, Rhoda

    2018-07-01

    This study aimed to test the feasibility of using a mobile device (Apple technology: iPodTouch®, iPhone® or iPad®) among people with severe mental illness (SMI) in a rehabilitation and recovery process and to document the parameters to be taken into account and the issues involved in implementing this technology in living environments and mental health care settings. A qualitative multiple case study design and multiple data sources were used to understand each case in depth. A clinical and comprehensive analysis of 11 cases was conducted with exploratory and descriptive aims (and the beginnings of explanation building). The multiple-case analysis brought out four typical profiles to illustrate the extent of integration of a personal digital assistant (PDA) as a tool to support mental health rehabilitation and recovery. Each profile highlights four categories of variables identified as determining factors in this process: (1) state of health and related difficulties (cognitive or functional); (2) relationship between comfort level with technology, motivation and personal effort deployed; (3) relationship between support required and support received; and (4) the living environment and follow-up context. This study allowed us to consider the contexts and conditions to be put in place for the successful integration of mobile technology in a mental health rehabilitation and recovery process.

  10. The Impact of Gate Width Setting and Gate Utilization Factors on Plutonium Assay in Passive Correlated Neutron Counting

    DOE PAGES

    Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...

    2015-06-15

    In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time) as well as optimum gate width setting are among the key parameters. These parameters along with the underlying model assumptions directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single exponential die-away time assumption and their impact on the measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width setting as opposed to using a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis and application of this approach to Pu mass assay is demonstrated.
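
    The Feynman-type analysis mentioned above reduces, in its simplest form, to the excess variance-to-mean ratio of neutron counts collected in equal-width time gates. The count list below is a deterministic toy, not detector data, and the gate-width dependence Y(T) used in practice is not modeled.

```python
# Feynman-Y statistic: Y = variance/mean - 1 for counts in equal-width gates.
# A pure Poisson (uncorrelated) source gives Y -> 0; correlated fission
# chains push Y above 0. The list below is a deterministic toy example.
def feynman_y(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n   # population variance
    return var / mean - 1.0

print(feynman_y([2, 3, 2, 3]))   # -0.9 for this toy list
```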

  11. Time use of parents raising children with severe or profound intellectual and multiple disabilities.

    PubMed

    Luijkx, J; van der Putten, A A J; Vlaskamp, C

    2017-07-01

    Raising children with severe or profound intellectual and multiple disabilities (PIMD) is expected to put extreme pressure on parental time use patterns. The aim of this study was to examine the total time use of mothers and fathers raising children with PIMD and compare it with the time use of parents of typically developing children. Twenty-seven fathers and 30 mothers raising children with PIMD completed a time use diary on a mobile phone or tablet app, as did 66 fathers and 109 mothers of typically developing children. Independent t-tests and Mann-Whitney tests were performed to compare mean time use. There are no differences in the time use of parents of children with PIMD on contracted time (paid work and educational activities) and necessary time (personal care, eating and drinking and sleeping) when compared with parents of typically developing children. There are significant differences between the parents of children with PIMD and the parents of typically developing children in terms of committed time (time for domestic work and the care and supervision of their children) and free time. The mothers of children with PIMD spend significantly less time on domestic work and more time on care and supervision than mothers of typically developing children. This study shows that the parents of children with PIMD have to spend a significant amount of time on care tasks and have on average 1.5 h less free time per day than parents of typically developing children. This is a striking difference, because leisure time can substantially contribute to well-being. Therefore, it is important not only to consider a child with PIMD's support needs but also to identify what parents need to continue their children's daily care and supervision. © 2017 John Wiley & Sons Ltd.

  12. Web Image Search Re-ranking with Click-based Similarity and Typicality.

    PubMed

    Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi

    2016-07-20

    In image search re-ranking, besides the well-known semantic gap, the intent gap (the gap between the representation of users' query/demand and the users' real intent) is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as "implicit feedback" from users, to help overcome the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list and the strategy that images with higher relevance should be ranked higher than others are widely accepted. To obtain satisfying search results, image similarity and the level of relevance typicality are therefore the determining factors. However, when measuring image similarity and typicality, conventional re-ranking approaches consider only visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by calculating click-based cluster typicality and within-cluster click-based image typicality in descending order. 
Our experiments, conducted on two real-world query-image datasets with diverse representative queries, show that the proposed re-ranking approach can significantly improve initial search results and outperform several existing re-ranking approaches.

  13. Nano-Multiplication-Region Avalanche Photodiodes and Arrays

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Pain, Bedabrata; Cunningham, Thomas

    2008-01-01

    Nano-multiplication-region avalanche photodiodes (NAPDs), and imaging arrays of NAPDs integrated with complementary metal oxide/semiconductor (CMOS) active-pixel-sensor integrated circuitry, are being developed for applications in which there are requirements for high-sensitivity (including photon-counting) detection and imaging at wavelengths from about 250 to 950 nm. With respect to sensitivity and to such other characteristics as speed, geometric array format, radiation hardness, power demand of associated circuitry, size, weight, and robustness, NAPDs and arrays thereof are expected to be superior to prior photodetectors and arrays including CMOS active-pixel sensors (APSs), charge-coupled devices (CCDs), traditional APDs, and microchannel-plate/CCD combinations. Figure 1 depicts a conceptual NAPD array, integrated with APS circuitry, fabricated on a thick silicon-on-insulator (SOI) wafer. Figure 2 presents selected aspects of the structure of a typical single pixel, which would include a metal oxide/semiconductor field-effect transistor (MOSFET) integrated with the NAPD. The NAPDs would reside in silicon islands formed on the buried oxide (BOX) layer of the SOI wafer. The silicon islands would be surrounded by oxide-filled insulation trenches, which, together with the BOX layer, would constitute an oxide embedding structure. There would be two kinds of silicon islands: NAPD islands for the NAPDs and MOSFET islands for in-pixel and global CMOS circuits. Typically, the silicon islands would be made between 5 and 10 µm thick, but, if necessary, the thickness could be chosen outside this range. The side walls of the silicon islands would be heavily doped with electron-acceptor impurities (p+-doped) to form anodes for the photodiodes and guard layers for the MOSFETs. A nanoscale reach-through structure at the front (top in the figures) central position of each NAPD island would contain the APD multiplication region. 
Typically, the reach-through structure would be about 0.1 µm in diameter and between 0.3 and 0.4 µm high. The top layer in the reach-through structure would be heavily doped with electron-donor impurities (n+-doped) to make it act as a cathode. A layer beneath the cathode, between 0.1 and 0.2 µm thick, would be p-doped to a concentration of approximately 10^17 per cubic centimeter. A thin n+-doped polysilicon pad would be formed on the top of the cathode to protect the cathode against erosion during a metal-silicon alloying step that would be part of the process of fabricating the array.

  14. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  15. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms are either required to solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed margin strategy) between a set of hyperplanes, which biases the solution to the closest margin. Another drawback of these strategies is that they are limited to order the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every consecutive class with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions in several datasets. In addition, we provide a key application of our algorithms in estimating human subjects’ ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary one typically employed in the literature. PMID:26529784
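
    Once the common direction vector and the ordered biases are learned, prediction in any of these threshold models reduces to counting how many thresholds the projection w·x exceeds. The weight vector and thresholds below are invented for illustration; training (by sum-of-margins maximization or otherwise) is not shown.

```python
# Ordinal prediction with a shared direction vector and ordered thresholds:
# the predicted class is the number of thresholds that w.x exceeds.
# w and the thresholds are illustrative values, not a trained model.
def predict_ordinal(w, thresholds, x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return sum(score > b for b in thresholds)   # booleans sum to an int

w = [1.0]
thresholds = [0.0, 1.0]          # two thresholds separate classes 0 | 1 | 2
print(predict_ordinal(w, thresholds, [0.5]))   # 1
print(predict_ordinal(w, thresholds, [1.5]))   # 2
print(predict_ordinal(w, thresholds, [-1.0]))  # 0
```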

  16. A Spacecraft Housekeeping System-on-Chip in a Radiation Hardened Structured ASIC

    NASA Technical Reports Server (NTRS)

    Suarez, George; DuMonthier, Jeffrey J.; Sheikh, Salman S.; Powell, Wesley A.; King, Robyn L.

    2012-01-01

    Housekeeping systems are essential to health monitoring of spacecraft and instruments. Typically, sensors are distributed across various sub-systems and data is collected using components such as analog-to-digital converters, analog multiplexers and amplifiers. In most cases programmable devices are used to implement the data acquisition control and storage, and the interface to higher level systems. Such discrete implementations require additional size, weight, power and interconnect complexity versus an integrated circuit solution, as well as the qualification of multiple parts. Although commercial devices are readily available, they are not suitable for space applications due to the radiation tolerance and qualification requirements. The Housekeeping System-on-a-Chip (HKSOC) is a low power, radiation hardened integrated solution suitable for spacecraft and instrument control and data collection. A prototype has been designed and includes a wide variety of functions including a 16-channel analog front-end for driving and reading sensors, analog-to-digital and digital-to-analog converters, on-chip temperature sensor, power supply current sense circuits, general purpose comparators and amplifiers, a 32-bit processor, digital I/O, pulse-width modulation (PWM) generators, timers and I2C master and slave serial interfaces. In addition, the device can operate in a bypass mode where the processor is disabled and external logic is used to control the analog and mixed signal functions. The device is suitable for stand-alone or distributed systems where multiple chips can be deployed across different sub-systems as intelligent nodes with computing and processing capabilities.

  17. Secure management of biomedical data with cryptographic hardware.

    PubMed

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2012-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicit and potentially identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations.

  18. Secure Management of Biomedical Data With Cryptographic Hardware

    PubMed Central

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2014-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicit and potentially identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations. PMID:22010157

  19. Electrical conductivity and piezoresistive response of 3D printed thermoplastic polyurethane/multiwalled carbon nanotube composites

    NASA Astrophysics Data System (ADS)

    Hohimer, Cameron J.; Petrossian, Gayaneh; Ameli, Amir; Mo, Changki; Pötschke, Petra

    2018-03-01

    Additive manufacturing (AM) is an emerging field experiencing rapid growth. This paper presents a feasibility study of using fused-deposition modeling (FDM) techniques with smart materials to fabricate objects with sensing and actuating capabilities. The fabrication of objects with sensing typically requires the integration and assembly of multiple components. Incorporating sensing elements into a single FDM process has the potential to significantly simplify manufacturing. The integration of multiple materials, especially smart materials and those with multi-functional properties, into the FDM process is challenging and still requires further development. Previous works by the authors have demonstrated a good printability of thermoplastic polyurethane/multiwall carbon nanotubes (TPU/MWCNT) while maintaining conductivity and piezoresistive response. This research explores the effects of layer height, nozzle temperature, and bed temperature on the electrical conductivity and piezoresistive response of printed TPU/MWCNT nanocomposites. An impedance analyzer was used to determine the conductivity of printed samples under different printing conditions over the range of 5 Hz to 13 MHz. The samples were then tested under compression loads to measure the piezoresistive response. Results show the conductivity and piezoresistive response are only slightly affected by the print parameters and they can be largely considered independent of the print conditions within the examined ranges of print parameters. This behavior simplifies the printing process design for TPU/MWCNT complex structures. This work demonstrates the possibility of manufacturing embedded and multidirectional flexible strain sensors using an inexpensive and versatile method, with potential applications in soft robotics, flexible electronics, and health monitoring.

  20. Quantifying infant physical interactions using sensorized toys in a natural play environment.

    PubMed

    Goyal, Vatsala; Torres, Wilson; Rai, Roshan; Shofer, Frances; Bogen, Daniel; Bryant, Phillip; Prosser, Laura; Johnson, Michelle J

    2017-07-01

    Infants with developmental delays must be detected early in their development to minimize the progression of motor and neurological impairments. Our objective is to quantify how sensorized toys in a natural play environment can promote infant-toy physical interactions. We created a hanging elephant toy, equipped with an inertial measurement unit (IMU), a pressure transducer, and multiple feedback sensors, to be a hand-grasping toy. We used a 3 DoF robotic model with inputs from the IMU to calculate multiple kinematic metrics and an equation to calculate haptic metrics from the pressure transducer. Six typical infants were tested in the gym set-up. Three infants interacted with the toy for more than half the trial time. The youngest infant exhibited the largest toy displacement with ΔD = 27.6 cm, while the oldest infant squeezed the toy with the largest mean pressure of 4.5 kPa. More data on both typical and atypical infants need to be collected. After testing atypical infants in the SmarToyGym set-up, we will be able to identify interaction metrics that differentiate atypical and typical infants.

  1. Slowed Search in the Context of Unimpaired Grouping in Autism: Evidence from Multiple Conjunction Search.

    PubMed

    Keehn, Brandon; Joseph, Robert M

    2016-03-01

    In multiple conjunction search, the target is not known in advance but is defined only with respect to the distractors in a given search array, thus reducing the contributions of bottom-up and top-down attentional and perceptual processes during search. This study investigated whether the superior visual search skills typically demonstrated by individuals with autism spectrum disorder (ASD) would be evident in multiple conjunction search. Thirty-two children with ASD and 32 age- and nonverbal IQ-matched typically developing (TD) children were administered a multiple conjunction search task. Contrary to findings from the large majority of studies on visual search in ASD, response times of individuals with ASD were significantly slower than those of their TD peers. Evidence of slowed performance in ASD suggests that the mechanisms responsible for superior ASD performance in other visual search paradigms are not available in multiple conjunction search. Although the ASD group failed to exhibit superior performance, they showed efficient search and intertrial priming levels similar to the TD group. Efficient search indicates that ASD participants were able to group distractors into distinct subsets. In summary, while demonstrating grouping and priming effects comparable to those exhibited by their TD peers, children with ASD were slowed in their performance on a multiple conjunction search task, suggesting that their usual superior performance in visual search tasks is specifically dependent on top-down and/or bottom-up attentional and perceptual processes. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Smart Cup: A Minimally-Instrumented, Smartphone-Based Point-of-Care Molecular Diagnostic Device.

    PubMed

    Liao, Shih-Chuan; Peng, Jing; Mauk, Michael G; Awasthi, Sita; Song, Jinzhao; Friedman, Harvey; Bau, Haim H; Liu, Changchun

    2016-06-28

    Nucleic acid amplification-based diagnostics offer rapid, sensitive, and specific means for detecting and monitoring the progression of infectious diseases. However, this method typically requires extensive sample preparation, expensive instruments, and trained personnel, all of which hinder its use in resource-limited settings, where many infectious diseases are endemic. Here, we report on a simple, inexpensive, minimally-instrumented, smart cup platform for rapid, quantitative molecular diagnostics of pathogens at the point of care. Our smart cup takes advantage of a water-triggered, exothermic chemical reaction to supply heat for the nucleic acid-based, isothermal amplification. The amplification temperature is regulated with a phase-change material (PCM). The PCM maintains the amplification reactor at a constant temperature, typically 60-65°C, when ambient temperatures range from 12 to 35°C. To eliminate the need for an optical detector and minimize cost, we use the smartphone's flashlight to excite the fluorescent dye and the phone camera to record real-time fluorescence emission during the amplification process. The smartphone can concurrently monitor multiple amplification reactors and analyze the recorded data. Our smart cup's utility was demonstrated by amplifying and quantifying herpes simplex virus type 2 (HSV-2) with LAMP assay in our custom-made microfluidic diagnostic chip. We have consistently detected as few as 100 copies of HSV-2 viral DNA per sample. Our system does not require any lab facilities and is suitable for use at home, in the field, and in the clinic, as well as in resource-poor settings, where access to sophisticated laboratories is impractical, unaffordable, or nonexistent.

  3. Borrelia burgdorferi CheY2 Is Dispensable for Chemotaxis or Motility but Crucial for the Infectious Life Cycle of the Spirochete.

    PubMed

    Xu, Hui; Sultan, Syed; Yerke, Aaron; Moon, Ki Hwan; Wooten, R Mark; Motaleb, M A

    2017-01-01

    The requirements for bacterial chemotaxis and motility range from dispensable to crucial for host colonization. Even though more than 50% of all sequenced prokaryotic genomes possess at least one chemotaxis signaling system, many of those genomes contain multiple copies of a chemotaxis gene. However, the functions of most of those additional genes are unknown. Most motile bacteria possess at least one CheY response regulator that is typically dedicated to the control of motility and which is usually essential for virulence. Borrelia burgdorferi appears to be notably different, in that it has three cheY genes, and our current studies on cheY2 suggest that it has varied effects on different aspects of the natural infection cycle. Mutants deficient in this protein exhibit normal motility and chemotaxis in vitro but show reduced virulence in mice. Specifically, the cheY2 mutants were severely attenuated in murine infection and dissemination to distant tissues after needle inoculation. Moreover, while ΔcheY2 spirochetes are able to survive normally in the Ixodes ticks, mice fed upon by the ΔcheY2-infected ticks did not develop a persistent infection in the murine host. Our data suggest that CheY2, despite resembling a typical response regulator, functions distinctively from most other chemotaxis CheY proteins. We propose that CheY2 serves as a regulator for a B. burgdorferi virulence determinant that is required for productive infection within vertebrate, but not tick, hosts. Copyright © 2016 American Society for Microbiology.

  4. Borrelia burgdorferi CheY2 Is Dispensable for Chemotaxis or Motility but Crucial for the Infectious Life Cycle of the Spirochete

    PubMed Central

    Xu, Hui; Sultan, Syed; Yerke, Aaron; Moon, Ki Hwan; Wooten, R. Mark

    2016-01-01

    The requirements for bacterial chemotaxis and motility range from dispensable to crucial for host colonization. Even though more than 50% of all sequenced prokaryotic genomes possess at least one chemotaxis signaling system, many of those genomes contain multiple copies of a chemotaxis gene. However, the functions of most of those additional genes are unknown. Most motile bacteria possess at least one CheY response regulator that is typically dedicated to the control of motility and which is usually essential for virulence. Borrelia burgdorferi appears to be notably different, in that it has three cheY genes, and our current studies on cheY2 suggest that it has varied effects on different aspects of the natural infection cycle. Mutants deficient in this protein exhibit normal motility and chemotaxis in vitro but show reduced virulence in mice. Specifically, the cheY2 mutants were severely attenuated in murine infection and dissemination to distant tissues after needle inoculation. Moreover, while ΔcheY2 spirochetes are able to survive normally in the Ixodes ticks, mice fed upon by the ΔcheY2-infected ticks did not develop a persistent infection in the murine host. Our data suggest that CheY2, despite resembling a typical response regulator, functions distinctively from most other chemotaxis CheY proteins. We propose that CheY2 serves as a regulator for a B. burgdorferi virulence determinant that is required for productive infection within vertebrate, but not tick, hosts. PMID:27799336

  5. Characterization of Transcription from TATA-Less Promoters: Identification of a New Core Promoter Element XCPE2 and Analysis of Factor Requirements

    PubMed Central

    Anish, Ramakrishnan; Hossain, Mohammad B.; Jacobson, Raymond H.; Takada, Shinako

    2009-01-01

    Background More than 80% of mammalian protein-coding genes are driven by TATA-less promoters which often show multiple transcriptional start sites (TSSs). However, little is known about the core promoter DNA sequences or mechanisms of transcriptional initiation for this class of promoters. Methodology/Principal Findings Here we identify a new core promoter element XCPE2 (X core promoter element 2) (consensus sequence: A/C/G-C-C/T-C-G/A-T-T-G/A-C-C/A+1-C/T) that can direct specific transcription from the second TSS of hepatitis B virus X gene mRNA. XCPE2 sequences can also be found in human promoter regions and typically appear to drive one of the start sites within multiple TSS-containing TATA-less promoters. To gain insight into mechanisms of transcriptional initiation from this class of promoters, we examined requirements of several general transcription factors by in vitro transcription experiments using immunodepleted nuclear extracts and purified factors. Our results show that XCPE2-driven transcription uses at least TFIIB, either TFIID or free TBP, RNA polymerase II (RNA pol II) and the MED26-containing mediator complex but not Gcn5. Therefore, XCPE2-driven transcription can be carried out by a mechanism which differs from previously described TAF-dependent mechanisms for initiator (Inr)- or downstream promoter element (DPE)-containing promoters, the TBP- and SAGA (Spt-Ada-Gcn5-acetyltransferase)-dependent mechanism for yeast TATA-containing promoters, or the TFTC (TBP-free-TAF-containing complex)-dependent mechanism for certain Inr-containing TATA-less promoters. EMSA assays using XCPE2 promoter and purified factors further suggest that XCPE2 promoter recognition requires a set of factors different from those for TATA box, Inr, or DPE promoter recognition. Conclusions/Significance We identified a new core promoter element XCPE2 that is found in multiple TSS-containing TATA-less promoters. 
Mechanisms of promoter recognition and transcriptional initiation for XCPE2-driven promoters appear different from previously shown mechanisms for classical promoters that show single “focused” TSSs. Our studies provide insight into novel mechanisms of RNA Pol II transcription from multiple TSS-containing TATA-less promoters. PMID:19337366
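
    The XCPE2 consensus quoted above translates directly into a pattern match. The following Python sketch is illustrative only (the scanned sequence is invented); it encodes the consensus A/C/G-C-C/T-C-G/A-T-T-G/A-C-C/A-C/T, in which the tenth position is the +1 transcription start site:

```python
import re

# XCPE2 consensus from the abstract: A/C/G-C-C/T-C-G/A-T-T-G/A-C-C/A(+1)-C/T,
# where +1 (the 10th motif position) marks the transcription start site.
XCPE2 = re.compile(r"[ACG]C[CT]C[GA]TT[GA]C[CA][CT]")

def find_xcpe2(seq):
    """Return (start, matched motif) pairs for every XCPE2 match in seq."""
    return [(m.start(), m.group()) for m in XCPE2.finditer(seq.upper())]

# Hypothetical promoter fragment containing one conforming motif (GCCCGTTGCAC)
print(find_xcpe2("TTTGCCCGTTGCACTTT"))  # [(3, 'GCCCGTTGCAC')]
```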

  6. Using Perseverative Interests to Improve Interactions Between Adolescents with Autism and their Typical Peers in School Settings

    PubMed Central

    Koegel, Robert; Fredeen, Rosy; Kim, Sunny; Danial, John; Rubinstein, Derek; Koegel, Lynn

    2013-01-01

    The literature suggests that adolescents with ASD typically are not socially engaged during unstructured school activities and do not initiate social activities with typically developing peers. This study assessed whether implementing socialization opportunities, in the form of lunch clubs organized around the perseverative interests of the adolescents with ASD, would promote positive direct and generalized social interaction between the target adolescent and their typically developing peers. A repeated measures multiple baseline experimental design (with two reversals) was implemented across participants. During baseline measures, the participants did not show social engagement or initiations. During intervention, results showed large increases in both social engagement and initiations. Generalization measures also showed that the target adolescents improved their social engagements and initiations with typically developing peers throughout unstructured lunchtime activities. These results have implications for understanding variables related to social development in autism. PMID:24163577

  7. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed

    D'Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W; Marsh, May

    2014-09-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems.

  8. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed Central

    D’Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W.; Marsh, May

    2014-01-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems. PMID:25195878

  9. Visualizing tumor evolution with the fishplot package for R.

    PubMed

    Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K

    2016-11-07

    Massively-parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot .
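
    The input such a tool consumes can be pictured as a clone-by-timepoint matrix of subclonal fractions plus a parent pointer per clone. The Python sketch below is not the fishplot R API; it only illustrates, under that assumed data shape, the nesting constraint a valid clonal-evolution matrix must satisfy (a clone's subclones can never exceed the clone itself):

```python
# Hypothetical data shape (not the fishplot API): rows are clones, columns are
# timepoints, values are the percent of tumor cells belonging to each clone.
def check_nesting(frac, parents):
    """Verify that at every timepoint the summed fractions of a clone's
    children never exceed the parent clone's own fraction.
    frac: list of per-clone fraction lists; parents: parent index per clone
    (None for a founding clone)."""
    for t in range(len(frac[0])):
        child_sum = {}
        for clone, p in enumerate(parents):
            if p is not None:
                child_sum[p] = child_sum.get(p, 0.0) + frac[clone][t]
        for p, total in child_sum.items():
            if total > frac[p][t] + 1e-9:
                return False
    return True

# Clone 0 founds the tumor; clone 1 arises within it, clone 2 within clone 1.
frac = [[100, 90, 100],   # founding clone
        [10, 60, 30],     # subclone of clone 0
        [0, 0, 25]]       # nested subclone of clone 1
parents = [None, 0, 1]
print(check_nesting(frac, parents))  # True
```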

  10. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring

    NASA Astrophysics Data System (ADS)

    Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico

    2018-04-01

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs' parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signals recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
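
    The circulant-matrix property such algorithms exploit can be stated compactly: a circulant is diagonalized by the DFT, so a matrix-vector product reduces to elementwise multiplication of FFTs, costing O(n log n) instead of O(n^2) and requiring storage of only the first column. A minimal NumPy sketch (CPU-only, standing in for the GPU kernels):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x
    in O(n log n) via the FFT (circular convolution theorem)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense reference: column j of the circulant is c cyclically shifted by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])
print(np.allclose(C @ x, circulant_matvec(c, x)))  # True
```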

  11. Optimal routing of IP packets to multi-homed servers

    NASA Astrophysics Data System (ADS)

    Swartz, K. L.

    1992-08-01

    Multi-homing, or direct attachment to multiple networks, offers both performance and availability benefits for important servers on busy networks. Exploiting these benefits to their fullest requires a modicum of routing knowledge in the clients. Careful policy control must also be reflected in the routing used within the network to make best use of specialized and often scarce resources. While relatively straightforward in theory, this problem becomes much more difficult to solve in a real network containing often intractable implementations from a variety of vendors. This paper presents an analysis of the problem and proposes a useful solution for a typical campus network. Application of this solution at the Stanford Linear Accelerator Center is studied and the problems and pitfalls encountered are discussed, as are the workarounds used to make the system work in the real world.

  12. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
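
    The effect of such a reordering on fill-in can be demonstrated on a small "arrow" matrix, which fills in completely under natural elimination order but stays sparse when its dense row and column are eliminated last. The sketch below uses SciPy's SuperLU interface with its MMD_AT_PLUS_A ordering as a stand-in for the Multiple Minimum Degree heuristic described above (the matrix and its size are invented for illustration):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Build an n x n "arrow" matrix: heavy diagonal plus a dense first row/column.
n = 50
A = sp.lil_matrix((n, n))
A.setdiag(n * np.ones(n))   # diagonal dominance keeps pivoting on the diagonal
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = float(n)
A = A.tocsc()

def factor_nnz(A, ordering):
    """Total nonzeros in the L and U factors under a given column ordering."""
    lu = splu(A, permc_spec=ordering)
    return lu.L.nnz + lu.U.nnz

# Natural order eliminates the dense node first, filling in ~n^2 entries;
# minimum degree postpones it and keeps the factors O(n).
print(factor_nnz(A, "NATURAL"), factor_nnz(A, "MMD_AT_PLUS_A"))
```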

  13. Modified electrokinetic sample injection method in chromatography and electrophoresis analysis

    DOEpatents

    Davidson, J. Courtney; Balch, Joseph W.

    2001-01-01

    A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This method for loading, when taken in conjunction with horizontal microchannels, allows much reduced sample volumes and a means of sample stacking to greatly reduce the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in preparation of the input of the separation channel, the physical sample introduction, and subsequent removal of excess material. By this method, sample volumes of 100 nanoliter to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.

  14. Robotic disaster recovery efforts with ad-hoc deployable cloud computing

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.

    2013-06-01

    Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0)-compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.

  15. Solar and Heliospheric Data Requirements: Going Further Than L1

    NASA Technical Reports Server (NTRS)

    Szabo, A.

    2011-01-01

    Current operational space weather forecasting relies on solar wind observations made by the ACE spacecraft located at the L1 point, providing 30-40 minutes of warning time. Some use is also made of SOHO and STEREO solar imaging that potentially can give multiple days of warning time. However, our understanding of the propagation and evolution of solar wind transients is still limited, resulting in a typical timing uncertainty of approximately 10 hours. In order to improve this critical understanding, a number of NASA missions are being planned. Specifically, the Solar Probe Plus and Solar Orbiter missions will investigate the inner Heliospheric evolution of coronal mass ejections and the acceleration and propagation of solar energetic particles. In addition, a number of multi-spacecraft concepts have been studied that have the potential to significantly improve the accuracy of long-term space weather forecasts.

  16. Nuclear magnetic resonance detection and spectroscopy of single proteins using quantum logic.

    PubMed

    Lovchinsky, I; Sushkov, A O; Urbach, E; de Leon, N P; Choi, S; De Greve, K; Evans, R; Gertner, R; Bersin, E; Müller, C; McGuinness, L; Jelezko, F; Walsworth, R L; Park, H; Lukin, M D

    2016-02-19

    Nuclear magnetic resonance spectroscopy is a powerful tool for the structural analysis of organic compounds and biomolecules but typically requires macroscopic sample quantities. We use a sensor, which consists of two quantum bits corresponding to an electronic spin and an ancillary nuclear spin, to demonstrate room temperature magnetic resonance detection and spectroscopy of multiple nuclear species within individual ubiquitin proteins attached to the diamond surface. Using quantum logic to improve readout fidelity and a surface-treatment technique to extend the spin coherence time of shallow nitrogen-vacancy centers, we demonstrate magnetic field sensitivity sufficient to detect individual proton spins within 1 second of integration. This gain in sensitivity enables high-confidence detection of individual proteins and allows us to observe spectral features that reveal information about their chemical composition. Copyright © 2016, American Association for the Advancement of Science.

  17. Hybrid rocket propulsion systems for outer planet exploration missions

    NASA Astrophysics Data System (ADS)

    Jens, Elizabeth T.; Cantwell, Brian J.; Hubbard, G. Scott

    2016-11-01

    Outer planet exploration missions require significant propulsive capability, particularly to achieve orbit insertion. Missions to explore the moons of outer planets place even more demanding requirements on propulsion systems, since they involve multiple large ΔV maneuvers. Hybrid rockets present a favorable alternative to conventional propulsion systems for many of these missions. They typically enjoy higher specific impulse than solids, can be throttled, stopped/restarted, and have more flexibility in their packaging configuration. Hybrids are more compact and easier to throttle than liquids and have similar performance levels. In order to investigate the suitability of these propulsion systems for exploration missions, this paper presents novel hybrid motor designs for two interplanetary missions. Hybrid propulsion systems for missions to Europa and Uranus are presented and compared to conventional in-space propulsion systems. The hybrid motor design for each of these missions is optimized across a range of parameters, including propellant selection, O/F ratio, nozzle area ratio, and chamber pressure. Details of the design process are described in order to provide guidance for researchers wishing to evaluate hybrid rocket motor designs for other missions and applications.

  18. Families of FPGA-Based Accelerators for Approximate String Matching

    PubMed Central

    Van Court, Tom; Herbordt, Martin C.

    2011-01-01

    Dynamic programming for approximate string matching is a large family of different algorithms, which vary significantly in purpose, complexity, and hardware utilization. Many implementations have reported impressive speed-ups, but have typically been point solutions – highly specialized and addressing only one or a few of the many possible options. The problem to be solved is creating a hardware description that implements a broad range of behavioral options without losing efficiency due to feature bloat. We report a set of three component types that address different parts of the approximate string matching problem. This allows each application to choose the feature set required, then make maximum use of the FPGA fabric according to that application’s specific resource requirements. Multiple, interchangeable implementations are available for each component type. We show that these methods allow the efficient generation of a large, if not complete, family of accelerators for this application. This flexibility was obtained while retaining high performance: we have evaluated a sample against serial reference codes and found speed-ups of 150× to 400× over a high-end PC. PMID:21603598
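    As a point of reference for the algorithm family discussed above, a minimal software version of dynamic-programming approximate string matching (plain Levenshtein edit distance, one of the simplest members of the family such accelerators cover) can be sketched as:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) edit distance, row by row."""
    prev = list(range(len(b) + 1))  # DP row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # match / substitution
        prev = curr
    return prev[-1]
```

    Hardware versions keep the same recurrence but evaluate one anti-diagonal of the matrix per clock cycle in a systolic array, which is where the reported speed-ups come from.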

  19. Slowing techniques for loading a magneto-optical trap of CaF molecules

    NASA Astrophysics Data System (ADS)

    Truppe, Stefan; Fitch, Noah; Williams, Hannah; Hambach, Moritz; Sauer, Ben; Hinds, Ed; Tarbutt, Mike

    2016-05-01

    Ultracold molecules in a magneto-optical trap (MOT) are useful for testing fundamental physics and studying strongly-interacting quantum systems. With experiments starting with a relatively fast (50-200 m/s) buffer-gas beam, a primary concern is decelerating molecules to below the MOT capture velocity, typically 10 m/s. Direct laser cooling, where the molecules are slowed via momentum transfer from a chirped counter-propagating narrowband laser, is a natural choice. However, chirping the cooling and repump lasers requires precise control of multiple laser frequencies simultaneously. Another approach, called "white-light slowing", uses a broadband laser such that all fast molecules in the beam are decelerated. By addressing numerous velocities, no chirping is needed. Unfortunately, both techniques have significant losses as molecules are transversely heated during the optical cycling. Ideally, the slowing method would provide simultaneous deceleration and transverse guiding. A newly developed technique, called Zeeman-Sisyphus deceleration, is potentially capable of both. Using permanent magnets and optical pumping, the number of scattered photons is reduced, lessening transverse heating and relaxing the repump requirements. Here we compare all three options for CaF.
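    The scale of the slowing problem can be illustrated with a back-of-the-envelope photon budget, assuming the 606 nm A-X cycling transition of CaF and ignoring losses and repumping; this is pure kinematics, not the experimental protocol itself:

```python
# Photons needed to slow a CaF molecule from a buffer-gas beam velocity
# down to a typical MOT capture velocity, via photon-recoil kicks alone.
H = 6.626e-34             # Planck constant, J s
AMU = 1.6605e-27          # atomic mass unit, kg
m_caf = (40 + 19) * AMU   # CaF mass (Ca-40 + F-19)

recoil = (H / 606e-9) / m_caf    # velocity change per scattered photon, m/s
n_photons = (150 - 10) / recoil  # slow a 150 m/s beam to 10 m/s capture
# recoil is ~1 cm/s, so on the order of 10^4 photons must be scattered,
# which is why transverse heating during cycling causes significant loss.
```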

  20. Versatile fluid-mixing device for cell and tissue microgravity research applications.

    PubMed

    Wilfinger, W W; Baker, C S; Kunze, E L; Phillips, A T; Hammerstedt, R H

    1996-01-01

    Microgravity life-science research requires hardware that can be easily adapted to a variety of experimental designs and working environments. The Biomodule is a patented, computer-controlled fluid-mixing device that can accommodate these diverse requirements. A typical shuttle payload contains eight Biomodules with a total of 64 samples, a sealed containment vessel, and a NASA refrigeration-incubation module. Each Biomodule contains eight gas-permeable Silastic T tubes that are partitioned into three fluid-filled compartments. The fluids can be mixed at any user-specified time. Multiple investigators and complex experimental designs can be easily accommodated with the hardware. During flight, the Biomodules are sealed in a vessel that provides two levels of containment (liquids and gas) and a stable, investigator-controlled experimental environment that includes regulated temperature, internal pressure, humidity, and gas composition. A cell microencapsulation methodology has also been developed to streamline launch-site sample manipulation and accelerate postflight analysis through the use of fluorescence-activated cell sorting. The Biomodule flight hardware and analytical cell encapsulation methodology are ideally suited for temporal, qualitative, or quantitative life-science investigations.

  1. Multiverse dark matter: SUSY or axions

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio

    2014-11-01

    The observed values of the cosmological constant and the abundance of Dark Matter (DM) can be successfully understood, using certain measures, by imposing the anthropic requirement that density perturbations go non-linear and virialize to form halos. This requires a probability distribution favoring low amounts of DM, i.e. low values of the PQ scale f for the QCD axion and low values of the superpartner mass scale for LSP thermal relics. In theories with independent scanning of multiple DM components, there is a high probability for DM to be dominated by a single component. For example, with independent scanning of f and the superpartner mass scale, TeV-scale LSP DM and an axion solution to the strong CP problem are unlikely to coexist. With thermal LSP DM, the scheme allows an understanding of a Little SUSY Hierarchy with multi-TeV superpartners. Alternatively, with axion DM, PQ breaking before (after) inflation leads to f typically below (below) the projected range of the current ADMX experiment of f = (3 - 30) × 10^11 GeV, providing strong motivation to develop experimental techniques for probing lower f.

  2. Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings.

    PubMed

    Carrell, David S; Schoen, Robert E; Leffler, Daniel A; Morris, Michele; Rose, Sherri; Baer, Andrew; Crockett, Seth D; Gourevitch, Rebecca A; Dean, Katie M; Mehrotra, Ateev

    2017-09-01

    Widespread application of clinical natural language processing (NLP) systems requires taking existing NLP systems and adapting them to diverse and heterogeneous settings. We describe the challenges faced and lessons learned in adapting an existing NLP system for measuring colonoscopy quality. We studied colonoscopy and pathology reports from 4 settings during 2013-2015, varying by geographic location, practice type, compensation structure, and electronic health record. Though successful, adaptation required considerably more time and effort than anticipated. Typical NLP challenges in assembling corpora, diverse report structures, and idiosyncratic linguistic content were greatly magnified. Strategies for addressing adaptation challenges include assessing site-specific diversity, setting realistic timelines, leveraging local electronic health record expertise, and undertaking extensive iterative development. More research is needed on how to make it easier to adapt NLP systems to new clinical settings. A key challenge in widespread application of NLP is adapting existing systems to new clinical settings. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. User's Guide for MSAP2D: A Program for Unsteady Aerodynamic and Aeroelastic (Flutter and Forced Response) Analysis of Multistage Compressors and Turbines. 1.0

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.

    1996-01-01

    This guide describes the input data required to use the MSAP2D (Multi Stage Aeroelastic analysis Program - Two Dimensional) computer code. MSAP2D can be used for steady and unsteady aerodynamic and aeroelastic (flutter and forced response) analysis of bladed disks arranged in multiple blade rows, such as those found in compressors, turbines, counter-rotating propellers, or propfans. The code can also be run for a single blade row. The MSAP2D code is an extension of the original NPHASE code for multi-blade-row aerodynamic and aeroelastic analysis. Euler equations are used to obtain aerodynamic forces. The structural dynamic equations are written for a rigid typical section undergoing pitching (torsion) and plunging (bending) motion. The aeroelastic equations are solved in the time domain. For single-blade-row analysis, a frequency domain analysis is also provided to obtain the unsteady aerodynamic coefficients required in an eigenvalue analysis for flutter. In this manual, sample input and output are provided for a single-blade-row example and for two-blade-row examples with equal and unequal numbers of blades in the blade rows.

  4. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g., instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
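    The table-lookup trick can be modeled in software. The sketch below uses a linearly binned table addressed directly by r², whereas the actual core indexes on the leading non-zero bits (a floating-point-style address); the table size and value ranges here are illustrative, not those of the 16-bit core:

```python
# Software model of a table-based inverse-square-root-cube, 1/r^3 = (r^2)^(-3/2),
# the costly step of a Coulomb force kernel: one memory lookup replaces a
# square root and a division, at the price of quantization error.
TABLE_BITS = 10  # illustrative table depth

def make_table(max_r2: float):
    step = max_r2 / (1 << TABLE_BITS)
    # Store (r^2)^(-3/2) evaluated at each bin midpoint.
    table = [(step * (i + 0.5)) ** -1.5 for i in range(1 << TABLE_BITS)]
    return table, step

def inv_r3_lut(r2: float, table, step) -> float:
    idx = min(int(r2 / step), len(table) - 1)
    return table[idx]

table, step = make_table(max_r2=4.0)
approx = inv_r3_lut(2.0, table, step)   # lookup for r^2 = 2.0
exact = 2.0 ** -1.5                     # reference value
```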

  5. Security evaluation and assurance of electronic health records.

    PubMed

    Weber-Jahnke, Jens H

    2009-01-01

    Electronic Health Records (EHRs) maintain information of sensitive nature. Security requirements in this context are typically multilateral, encompassing the viewpoints of multiple stakeholders. Two main research questions arise from a security assurance point of view, namely how to demonstrate the internal correctness of EHRs and how to demonstrate their conformance in relation to multilateral security regulations. The above notions of correctness and conformance directly relate to the general concept of system verification, which asks the question "are we building the system right?" This should not be confused with the concept of system validation, which asks the question "are we building the right system?" Much of the research in the medical informatics community has been concerned with the latter aspect (validation). However, trustworthy security requires assurances that standards are followed and specifications are met. The objective of this paper is to contribute to filling this gap. We give an introduction to fundamentals of security assurance, summarize current assurance standards, and report on experiences with using security assurance methodology applied to the EHR domain, specifically focusing on case studies in the Canadian context.

  6. Requirements for efficient cell-type proportioning: regulatory timescales, stochasticity and lateral inhibition

    NASA Astrophysics Data System (ADS)

    Pfeuty, B.; Kaneko, K.

    2016-04-01

    The proper functioning of multicellular organisms requires the robust establishment of precise proportions between distinct cell types. This developmental differentiation process typically involves intracellular regulatory and stochastic mechanisms to generate cell-fate diversity, as well as intercellular signaling mechanisms to coordinate cell-fate decisions at the tissue level. We thus surmise that key insights about the developmental regulation of cell-type proportion can be captured by a modeling study of clustering dynamics in populations of inhibitory-coupled noisy bistable systems. This general class of dynamical system is shown to exhibit a very stable two-cluster state, but also metastability, collective oscillations or noise-induced state hopping, which can prevent timely and reliable convergence to a robust and well-proportioned clustered state. To circumvent these obstacles or to avoid fine-tuning, we highlight a general strategy based on dual-time positive feedback loops, such as those mediated through transcriptional versus epigenetic mechanisms, which improves proportion regulation by coordinating early and flexible lineage priming with late and firm commitment. This result sheds new light on the respective and cooperative roles of multiple regulatory feedbacks, stochasticity and lateral inhibition in developmental dynamics.
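    A minimal sketch of this model class, assuming a cubic bistable drift with inhibitory coupling through the population mean, integrated by Euler-Maruyama (the coupling form and all parameters are illustrative choices, not the paper's):

```python
import math
import random

# N noisy bistable units dx = (x - x^3 - c * mean(x)) dt + sigma dW:
# each unit is drawn toward x = +1 or x = -1, while the mean-field term
# penalizes whichever "cluster" grows too large (lateral inhibition).
random.seed(1)

def simulate(n=50, coupling=0.5, noise=0.2, dt=0.01, steps=5000):
    x = [random.gauss(0.0, 0.1) for _ in range(n)]  # start near the unstable point
    for _ in range(steps):
        m = sum(x) / n  # mean field carrying the inhibition
        x = [xi + (xi - xi ** 3 - coupling * m) * dt
                + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
             for xi in x]
    return x

final = simulate()
high_cluster = sum(1 for xi in final if xi > 0)  # size of the "high" fate
```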

  7. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
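    The select/crossover/mutate cycle underlying such a resolver can be sketched generically. The fitness function, bit-string genome, and parameters below are toy placeholders, not the maneuver-pattern encoding described in the paper:

```python
import random

# Generic genetic-algorithm loop: truncation selection with elitism,
# one-point crossover, and per-bit mutation, on a toy maximization problem.
random.seed(0)

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

def evolve(pop_size=40, genome_len=20, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    A real conflict resolver replaces the bit-string genome with parameterized maneuver templates and the toy objective with predicted conflict-free trajectories plus arrival-time constraints.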

  9. A multiple-alignment based primer design algorithm for genetically highly variable DNA targets

    PubMed Central

    2013-01-01

    Background: Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results: Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions: PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
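    Two of the constraints listed above can be illustrated with textbook formulas: the Wallace-rule melting-temperature estimate for short primers, and the degeneracy count of a primer written with IUPAC ambiguity codes. These are generic checks, not the scoring implemented inside PrimerDesign:

```python
# Number of bases each IUPAC nucleotide code stands for.
IUPAC = {'A': 1, 'C': 1, 'G': 1, 'T': 1,
         'R': 2, 'Y': 2, 'S': 2, 'W': 2, 'K': 2, 'M': 2,
         'B': 3, 'D': 3, 'H': 3, 'V': 3, 'N': 4}

def wallace_tm(primer: str) -> int:
    """Wallace rule: Tm ~= 2*(A+T) + 4*(G+C) degC, for primers up to ~14 nt."""
    at = sum(primer.count(b) for b in 'AT')
    gc = sum(primer.count(b) for b in 'GC')
    return 2 * at + 4 * gc

def degeneracy(primer: str) -> int:
    """Number of distinct sequences a degenerate primer represents."""
    d = 1
    for base in primer:
        d *= IUPAC[base]
    return d
```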

  10. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    PubMed

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases, replicate composite samples may be prepared, and the reportable assay value is the average value of all the replicates. In previously published work, Harrington et al. (2014) [5] developed a sample preparation composite and replicate strategy for assay that provides a systematic approach accounting for variability due to the analytical method and dosage form, with a standard-error criterion for the potency assay based on compendial and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.
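    The replicate arithmetic behind a reportable value can be sketched as follows, with made-up numbers; the published strategy goes further, choosing the number of units per composite and the number of replicates from variance components so the standard error meets the criterion:

```python
import statistics

# Each replicate result below is assumed to come from one composite of
# several dosage units; the reportable assay value is the replicate mean.
replicates = [99.2, 100.8, 100.1]  # % label claim, illustrative values

reportable = statistics.mean(replicates)
se = statistics.stdev(replicates) / len(replicates) ** 0.5  # standard error of the mean
```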

  11. Gorlin-Goltz syndrome.

    PubMed

    Kohli, Munish; Kohli, Monica; Sharma, Naresh; Siddiqui, Saif Rauf; Tulsi, S P S

    2010-01-01

    Gorlin-Goltz syndrome is an inherited autosomal dominant disorder with complete penetrance and extremely variable expressivity. The authors present a case of an 11-year-old girl with typical features of Gorlin-Goltz syndrome, with particular attention to medical and dental problems, including multiple bony cage deformities such as spina bifida and scoliosis with convexity to the left side, the presence of an infantile uterus, and multiple odontogenic keratocysts in the maxillofacial region.

  12. Beam transport results on the multi-beam MABE accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, P.D.; Alexander, J.A.; Hasti, D.E.

    1985-10-01

    MABE is a multistage, electron beam linear accelerator. The accelerator has been operated in single beam (60 kA, 7 MeV) and multiple beam configurations. This paper deals with the multiple beam configuration, in which typically nine injected beams of approximately 25 kA each are transported through three accelerating gaps. Experimental results from the machine are discussed, including problems encountered and proposed solutions to those problems.

  13. The Emergence of Autoclitic Frames in Atypically and Typically Developing Children as a Function of Multiple Exemplar Instruction

    ERIC Educational Resources Information Center

    Luke, Nicole; Greer, R. Douglas; Singer-Dudek, Jessica; Keohane, Dolleen-Day

    2011-01-01

    In two experiments, we tested the effect of multiple exemplar instruction (MEI) for training sets on the emergence of autoclitic frames for spatial relations for novel tacts and mands. In Experiment 1, we used a replicated pre- and post-intervention probe design with four students with significant learning disabilities to test for acquisition of…

  14. Outcomes of multiple wire localization for larger breast cancers: when can mastectomy be avoided?

    PubMed

    Kirstein, Laurie J; Rafferty, Elizabeth; Specht, Michelle C; Moore, Richard H; Taghian, Alphonse G; Hughes, Kevin S; Gadd, Michele A; Smith, Barbara L

    2008-09-01

    Mastectomy is often recommended when mammography shows a breast cancer with extensive calcifications. We wished to determine whether the use of multiple localizing wires to guide lumpectomy in this setting was associated with increased rates of breast conservation. We also wanted to identify factors that predicted a poor chance of successful lumpectomy, to avoid multiple lumpectomy attempts in a patient who would ultimately require mastectomy. Records of 153 women with breast cancer who underwent lumpectomy for larger lesions that required multiple wire localization and 196 controls who required only single wire localization were reviewed retrospectively. The number of localizing wires, specimen volume, largest specimen dimension, number of surgical procedures, and rates of breast conservation were scored. Seventy-seven percent of patients requiring multiple wire localization had successful breast conservation, compared with 90% of those needing only single wire localization. Only 28% of multiple wire patients required more than 1 excision to achieve clear margins, compared with 36% of single wire patients (p < 0.01). Breast conservation is possible in the great majority of breast cancer patients whose mammographic lesions require multiple localizing wires for excision. The use of multiple wires can decrease the number of procedures required to obtain clear lumpectomy margins.

  15. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H∞ methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.
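    For intuition about the object the ADA iteration avoids computing, the algebraic Riccati equation admits a closed form in the scalar case; this is an illustrative aside on the standard LQR setup, not the flow-control problem itself:

```python
import math

# Scalar LQR: for x' = a x + b u with cost integral of (q x^2 + r u^2) dt,
# the continuous-time algebraic Riccati equation
#     2 a p - (b^2 / r) p^2 + q = 0
# has the positive root below, and the optimal feedback is u = -k x.
# For large state dimensions this equation becomes a matrix equation whose
# direct solution is infeasible, which is what motivates iterative schemes.
def scalar_lqr(a, b, q, r):
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return p, k

p, k = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)  # illustrative values
```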

  16. Role of fully covered self-expandable metal stent for treatment of benign biliary strictures and bile leaks.

    PubMed

    Pausawasadi, Nonthalee; Soontornmanokul, Tanassanee; Rerknimitr, Rungsun

    2012-01-01

    Endoscopic therapy by balloon dilation and placement of multiple large-bore plastic stents is the treatment of choice for benign biliary stricture. This approach is effective, but it typically requires multiple endoscopic sessions given the short duration of stent patency. The endoscopic approach for treatment of bile leak involves the placement of a stent with or without biliary sphincterotomy. The self-expandable metal stent (SEMS) has traditionally been used for palliation of malignant biliary strictures given the long duration of stent patency owing to the larger stent diameter. Recently, SEMS has been used in a variety of benign biliary strictures and leaks, especially with the design of the covered self-expandable metal stent (CSEMS), which permits endoscopic stent removal. The use of CSEMS in benign biliary stricture could potentially reduce the number of endoscopic sessions, and placement is technically easier than placement of multiple plastic stents. However, complications such as cholecystitis due to blockage of the cystic duct, stent migration, infection and pancreatitis have been reported. The potential subsegmental occlusion of contralateral intrahepatic ducts also limits the use of CSEMS in hilar stricture. Certain techniques and improvements in stent design may overcome these challenges in the future. Thus, CSEMS may be appropriate only in highly selected conditions, such as refractory benign biliary stricture despite multiple plastic stent placement, or difficult-to-treat bile duct stricture from chronic pancreatitis, and should not be used routinely. This review focuses on the use of fully covered self-expandable metal stents for benign biliary strictures and bile leaks.

  17. Genetic risk and a primary role for cell-mediated immune mechanisms in multiple sclerosis.

    PubMed

    Sawcer, Stephen; Hellenthal, Garrett; Pirinen, Matti; Spencer, Chris C A; Patsopoulos, Nikolaos A; Moutsianas, Loukas; Dilthey, Alexander; Su, Zhan; Freeman, Colin; Hunt, Sarah E; Edkins, Sarah; Gray, Emma; Booth, David R; Potter, Simon C; Goris, An; Band, Gavin; Oturai, Annette Bang; Strange, Amy; Saarela, Janna; Bellenguez, Céline; Fontaine, Bertrand; Gillman, Matthew; Hemmer, Bernhard; Gwilliam, Rhian; Zipp, Frauke; Jayakumar, Alagurevathi; Martin, Roland; Leslie, Stephen; Hawkins, Stanley; Giannoulatou, Eleni; D'alfonso, Sandra; Blackburn, Hannah; Martinelli Boneschi, Filippo; Liddle, Jennifer; Harbo, Hanne F; Perez, Marc L; Spurkland, Anne; Waller, Matthew J; Mycko, Marcin P; Ricketts, Michelle; Comabella, Manuel; Hammond, Naomi; Kockum, Ingrid; McCann, Owen T; Ban, Maria; Whittaker, Pamela; Kemppinen, Anu; Weston, Paul; Hawkins, Clive; Widaa, Sara; Zajicek, John; Dronov, Serge; Robertson, Neil; Bumpstead, Suzannah J; Barcellos, Lisa F; Ravindrarajah, Rathi; Abraham, Roby; Alfredsson, Lars; Ardlie, Kristin; Aubin, Cristin; Baker, Amie; Baker, Katharine; Baranzini, Sergio E; Bergamaschi, Laura; Bergamaschi, Roberto; Bernstein, Allan; Berthele, Achim; Boggild, Mike; Bradfield, Jonathan P; Brassat, David; Broadley, Simon A; Buck, Dorothea; Butzkueven, Helmut; Capra, Ruggero; Carroll, William M; Cavalla, Paola; Celius, Elisabeth G; Cepok, Sabine; Chiavacci, Rosetta; Clerget-Darpoux, Françoise; Clysters, Katleen; Comi, Giancarlo; Cossburn, Mark; Cournu-Rebeix, Isabelle; Cox, Mathew B; Cozen, Wendy; Cree, Bruce A C; Cross, Anne H; Cusi, Daniele; Daly, Mark J; Davis, Emma; de Bakker, Paul I W; Debouverie, Marc; D'hooghe, Marie Beatrice; Dixon, Katherine; Dobosi, Rita; Dubois, Bénédicte; Ellinghaus, David; Elovaara, Irina; Esposito, Federica; Fontenille, Claire; Foote, Simon; Franke, Andre; Galimberti, Daniela; Ghezzi, Angelo; Glessner, Joseph; Gomez, Refujia; Gout, Olivier; Graham, Colin; Grant, Struan F A; Guerini, Franca Rosa; Hakonarson, Hakon; Hall, Per; 
Hamsten, Anders; Hartung, Hans-Peter; Heard, Rob N; Heath, Simon; Hobart, Jeremy; Hoshi, Muna; Infante-Duarte, Carmen; Ingram, Gillian; Ingram, Wendy; Islam, Talat; Jagodic, Maja; Kabesch, Michael; Kermode, Allan G; Kilpatrick, Trevor J; Kim, Cecilia; Klopp, Norman; Koivisto, Keijo; Larsson, Malin; Lathrop, Mark; Lechner-Scott, Jeannette S; Leone, Maurizio A; Leppä, Virpi; Liljedahl, Ulrika; Bomfim, Izaura Lima; Lincoln, Robin R; Link, Jenny; Liu, Jianjun; Lorentzen, Aslaug R; Lupoli, Sara; Macciardi, Fabio; Mack, Thomas; Marriott, Mark; Martinelli, Vittorio; Mason, Deborah; McCauley, Jacob L; Mentch, Frank; Mero, Inger-Lise; Mihalova, Tania; Montalban, Xavier; Mottershead, John; Myhr, Kjell-Morten; Naldi, Paola; Ollier, William; Page, Alison; Palotie, Aarno; Pelletier, Jean; Piccio, Laura; Pickersgill, Trevor; Piehl, Fredrik; Pobywajlo, Susan; Quach, Hong L; Ramsay, Patricia P; Reunanen, Mauri; Reynolds, Richard; Rioux, John D; Rodegher, Mariaemma; Roesner, Sabine; Rubio, Justin P; Rückert, Ina-Maria; Salvetti, Marco; Salvi, Erika; Santaniello, Adam; Schaefer, Catherine A; Schreiber, Stefan; Schulze, Christian; Scott, Rodney J; Sellebjerg, Finn; Selmaj, Krzysztof W; Sexton, David; Shen, Ling; Simms-Acuna, Brigid; Skidmore, Sheila; Sleiman, Patrick M A; Smestad, Cathrine; Sørensen, Per Soelberg; Søndergaard, Helle Bach; Stankovich, Jim; Strange, Richard C; Sulonen, Anna-Maija; Sundqvist, Emilie; Syvänen, Ann-Christine; Taddeo, Francesca; Taylor, Bruce; Blackwell, Jenefer M; Tienari, Pentti; Bramon, Elvira; Tourbah, Ayman; Brown, Matthew A; Tronczynska, Ewa; Casas, Juan P; Tubridy, Niall; Corvin, Aiden; Vickery, Jane; Jankowski, Janusz; Villoslada, Pablo; Markus, Hugh S; Wang, Kai; Mathew, Christopher G; Wason, James; Palmer, Colin N A; Wichmann, H-Erich; Plomin, Robert; Willoughby, Ernest; Rautanen, Anna; Winkelmann, Juliane; Wittig, Michael; Trembath, Richard C; Yaouanq, Jacqueline; Viswanathan, Ananth C; Zhang, Haitao; Wood, Nicholas W; Zuvich, Rebecca; Deloukas, 
Panos; Langford, Cordelia; Duncanson, Audrey; Oksenberg, Jorge R; Pericak-Vance, Margaret A; Haines, Jonathan L; Olsson, Tomas; Hillert, Jan; Ivinson, Adrian J; De Jager, Philip L; Peltonen, Leena; Stewart, Graeme J; Hafler, David A; Hauser, Stephen L; McVean, Gil; Donnelly, Peter; Compston, Alastair

    2011-08-10

    Multiple sclerosis is a common disease of the central nervous system in which the interplay between inflammatory and neurodegenerative processes typically results in intermittent neurological disturbance followed by progressive accumulation of disability. Epidemiological studies have shown that genetic factors are primarily responsible for the substantially increased frequency of the disease seen in the relatives of affected individuals, and systematic attempts to identify linkage in multiplex families have confirmed that variation within the major histocompatibility complex (MHC) exerts the greatest individual effect on risk. Modestly powered genome-wide association studies (GWAS) have enabled more than 20 additional risk loci to be identified and have shown that multiple variants exerting modest individual effects have a key role in disease susceptibility. Most of the genetic architecture underlying susceptibility to the disease remains to be defined and is anticipated to require the analysis of sample sizes that are beyond the numbers currently available to individual research groups. In a collaborative GWAS involving 9,772 cases of European descent collected by 23 research groups working in 15 different countries, we have replicated almost all of the previously suggested associations and identified at least a further 29 novel susceptibility loci. Within the MHC we have refined the identity of the HLA-DRB1 risk alleles and confirmed that variation in the HLA-A gene underlies the independent protective effect attributable to the class I region. Immunologically relevant genes are significantly overrepresented among those mapping close to the identified loci and particularly implicate T-helper-cell differentiation in the pathogenesis of multiple sclerosis.

  18. Role of Fully Covered Self-Expandable Metal Stent for Treatment of Benign Biliary Strictures and Bile Leaks

    PubMed Central

    Pausawasadi, Nonthalee; Soontornmanokul, Tanassanee

    2012-01-01

    Endoscopic therapy by balloon dilation and placement of multiple large-bore plastic stents is the treatment of choice for benign biliary stricture. This approach is effective, but it typically requires multiple endoscopic sessions given the short duration of stent patency. The endoscopic approach for treatment of bile leak involves the placement of a stent with or without biliary sphincterotomy. The self-expandable metal stent (SEMS) has traditionally been used for palliation of malignant biliary strictures, given the long duration of stent patency owing to its larger stent diameter. Recently, SEMS has been used in a variety of benign biliary strictures and leaks, especially with the design of the covered self-expandable metal stent (CSEMS), which permits endoscopic stent removal. The use of CSEMS in benign biliary stricture could potentially reduce the number of endoscopic sessions, and placement is technically easier than placement of multiple plastic stents. However, complications such as cholecystitis due to blockage of the cystic duct, stent migration, infection, and pancreatitis have been reported. The potential subsegmental occlusion of contralateral intrahepatic ducts also limits the use of CSEMS in hilar stricture. Certain techniques and improvements in stent design may overcome these challenges in the future. Thus, CSEMS may be appropriate only in highly selected conditions, such as benign biliary stricture refractory to multiple plastic stent placement or difficult-to-treat bile duct stricture from chronic pancreatitis, and should not be used routinely. This review focuses on the use of fully covered self-expandable metal stents for benign biliary strictures and bile leaks. PMID:22563290

  19. Space station needs, attributes and architectural options study. Volume 3: Requirements

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A typical system specification format is presented and requirements are compiled. A Program Specification Tree depicts a high-inclination space station and a low-inclination space station with their typical element breakdowns; the top-level blocks also represent the interfaces with other systems. The specification format is directed at the low-inclination space station.

  20. The Resilience of Kepler Multi-systems to Stellar Obliquity

    NASA Astrophysics Data System (ADS)

    Spalding, Christopher; Marx, Noah W.; Batygin, Konstantin

    2018-04-01

    The Kepler mission and its successor K2 have brought forth a cascade of transiting planets. Many of these planetary systems exhibit multiple transiting members, but a large fraction possess only a single transiting planet. This high abundance of singles, dubbed the "Kepler Dichotomy," has been hypothesized to arise from significant mutual inclinations between orbits in multi-planet systems, although the single-transiting systems may instead truly possess no other planets; the origin of the overabundance of single systems remains unresolved. In this work, we propose that planetary systems typically form with a coplanar, multiple-planet architecture, but that quadrupolar gravitational perturbations from their rapidly rotating host star subsequently disrupt this primordial coplanarity. We demonstrate that, given sufficient stellar obliquity, even systems beginning with only two planets are susceptible to dynamical instability soon after planet formation, as a result of the stellar quadrupole moment. This mechanism stands as a widespread, yet poorly explored, pathway toward planetary system instability. Moreover, by requiring that observed multi-systems remain coplanar on Gyr timescales, we are able to place upper limits on the stellar obliquity in systems such as K2-38 (obliquity < 20 degrees), where other methods of measuring spin-orbit misalignment are not currently available.

  1. Identification of driving network of cellular differentiation from single sample time course gene expression data

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential for driving network identification in complex human diseases, including the work by Chen, et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
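    The autocorrelation-based screening described above can be sketched as follows; the window size, the toy trajectory, and the function names are illustrative assumptions, not the authors' implementation. Rising lag-1 autocorrelation ("critical slowing down") is a generic early-warning sign of an approaching bifurcation and flags a candidate critical time point:

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a 1-D series (0 for a constant series)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    if denom == 0:
        return 0.0
    return np.dot(x[:-1], x[1:]) / denom

def candidate_critical_point(series, window=5):
    """Index whose trailing window has the highest lag-1 autocorrelation."""
    scores = [lag1_autocorrelation(series[i - window:i])
              for i in range(window, len(series) + 1)]
    return int(np.argmax(scores)) + window - 1

# Toy expression trajectory: noise whose memory grows toward a transition
rng = np.random.default_rng(0)
x = [0.0]
for t in range(1, 40):
    rho = min(0.95, 0.02 * t)          # autocorrelation rises over time
    x.append(rho * x[-1] + rng.normal(scale=0.5))
print(candidate_critical_point(x))
```

    With only one sample per time point, a per-gene statistic of this kind is one way to substitute for the cross-sample variance structure the standard methods require.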

  2. Multiple spectator condensates from inflation

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.

    2018-05-01

    We investigate the development of spectator (light test) field condensates due to their quantum fluctuations in a de Sitter inflationary background, making use of the stochastic formalism to describe the system. In this context, a condensate refers to the typical field value found after a coarse-graining using the Hubble scale H, which can be essential to seed the initial conditions required by various post-inflationary processes. We study models with multiple coupled spectators and for the first time we demonstrate that new forms of stationary solution exist (distinct from the standard exponential form) when the potential is asymmetric. Furthermore, we find a critical value for the inter-field coupling as a function of the number of fields above which the formation of stationary condensates collapses to H. Considering some simple two-field example potentials, we are also able to derive a lower limit on the coupling, below which the fluctuations are effectively decoupled, and the standard stationary variance formulae for each field separately can be trusted. These results are all numerically verified by a new publicly available python class (nfield) to solve the coupled Langevin equations over a large number of fields, realisations and timescales. Further applications of this new tool are also discussed.
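    As a rough illustration of the stochastic (Langevin) formalism the study builds on, the sketch below integrates two coupled overdamped spectator fields in de Sitter with an Euler-Maruyama scheme in e-folds. The quartic potential, the coupling value, and all constants are assumptions chosen for illustration; this is not the paper's model nor its nfield implementation.

```python
import numpy as np

# Overdamped stochastic evolution of two coupled spectator fields:
#   dphi = -V_phi / (3 H^2) dN + (H / 2 pi) dW
H = 1.0
m2 = 0.1 * H**2          # bare mass-squared for each field (assumption)
g = 0.05                 # inter-field coupling (assumption)

def dV(phi, chi):
    """Gradients of V = m2/2 (phi^2 + chi^2) + g/2 phi^2 chi^2."""
    return (m2 * phi + g * phi * chi**2,
            m2 * chi + g * chi * phi**2)

def evolve(n_steps=5000, dN=0.02, n_real=200, seed=1):
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_real)
    chi = np.zeros(n_real)
    amp = (H / (2 * np.pi)) * np.sqrt(dN)   # stochastic kick per step
    for _ in range(n_steps):
        gp, gc = dV(phi, chi)
        phi += -gp / (3 * H**2) * dN + amp * rng.normal(size=n_real)
        chi += -gc / (3 * H**2) * dN + amp * rng.normal(size=n_real)
    return phi, chi

phi, chi = evolve()
# For a free field (g -> 0) the stationary variance is 3 H^4 / (8 pi^2 m2);
# the positive coupling suppresses the condensate below that value.
print(phi.var(), chi.var())
```

    Comparing the measured variance against the single-field stationary formula is the kind of check the paper uses to delimit the weakly coupled regime.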

  3. Thread scheduling for GPU-based OPC simulation on multi-thread

    NASA Astrophysics Data System (ADS)

    Lee, Heejun; Kim, Sangwook; Hong, Jisuk; Lee, Sooryong; Han, Hwansoo

    2018-03-01

    As semiconductor product development based on shrinkage continues, the accuracy and difficulty required for model-based optical proximity correction (MBOPC) are increasing. OPC simulation time, the most time-consuming part of MBOPC, is rapidly increasing due to high pattern density in a layout and complex OPC models. To reduce OPC simulation time, we apply a graphics processing unit (GPU) to MBOPC, because the OPC process lends itself well to parallelization. We address some issues that typically arise during GPU-based OPC simulation in a multi-threaded system, such as "out of memory" errors and GPU idle time. To overcome these problems, we propose a thread scheduling method, which manages OPC jobs in multiple threads in such a way that simulation jobs from multiple threads are executed alternately on the GPU while correction jobs are executed at the same time on individual CPU cores. It was observed that the amount of GPU peak memory usage decreases by up to 35%, and MBOPC runtime also decreases by 4%. In cases where out-of-memory issues occur in a multi-threaded environment, the thread scheduler improved MBOPC runtime by up to 23%.
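    A minimal sketch of the scheduling idea, with the GPU modelled as a lock that serializes simulation jobs while each thread's correction job runs concurrently on the CPU. Job names and durations are invented for illustration, not taken from the paper:

```python
import threading, time

gpu_lock = threading.Lock()   # stands in for exclusive GPU access
log = []                      # list.append is atomic under the CPython GIL

def simulate_on_gpu(tile):
    with gpu_lock:                 # one simulation on the "GPU" at a time
        log.append(("sim-start", tile))
        time.sleep(0.01)           # stand-in for the OPC simulation kernel
        log.append(("sim-end", tile))

def correct_on_cpu(tile):
    time.sleep(0.005)              # stand-in for the correction step
    log.append(("corr", tile))

def worker(tile):
    simulate_on_gpu(tile)          # GPU phase (serialized across threads)
    correct_on_cpu(tile)           # CPU phase (runs concurrently)

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# GPU phases never overlap: among "sim" entries, every sim-start is
# immediately followed by its matching sim-end.
sims = [entry for entry in log if entry[0].startswith("sim")]
print(sims)
```

    Serializing only the memory-hungry simulation phase is what bounds peak GPU memory, while the correction phases still overlap across CPU cores.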

  4. ARTS: automated randomization of multiple traits for study design.

    PubMed

    Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves

    2014-06-01

    Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
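    The batch-assignment optimization that ARTS automates can be illustrated with an independent Python sketch (ARTS itself is written in Perl); the imbalance score and random-restart search below are assumptions for illustration, not the tool's actual algorithm:

```python
import random
from collections import Counter

def imbalance(assignment, samples, traits):
    """Sum over traits and batches of |observed - ideal| level counts."""
    score = 0.0
    batches = set(assignment)
    for trait in traits:
        overall = Counter(s[trait] for s in samples)
        for b in batches:
            in_batch = [s[trait] for s, a in zip(samples, assignment) if a == b]
            ideal = {lvl: n * len(in_batch) / len(samples)
                     for lvl, n in overall.items()}
            got = Counter(in_batch)
            score += sum(abs(got[lvl] - ideal[lvl]) for lvl in overall)
    return score

def randomize(samples, traits, n_batches, n_iter=2000, seed=0):
    """Keep the best of n_iter random equal-size batch assignments."""
    rng = random.Random(seed)
    base = [i % n_batches for i in range(len(samples))]  # equal batch sizes
    best, best_score = None, float("inf")
    for _ in range(n_iter):
        rng.shuffle(base)
        s = imbalance(base, samples, traits)
        if s < best_score:
            best, best_score = list(base), s
    return best, best_score

# Toy cohort: 16 samples crossing sex and genotype
samples = [{"sex": sex, "genotype": geno}
           for sex in ("M", "F") for geno in ("wt", "mut") for _ in range(4)]
assignment, score = randomize(samples, ["sex", "genotype"], n_batches=4)
print(score)
```

    A score of zero means every batch mirrors the cohort-wide level frequencies for every trait; with combinatorially interacting traits, search of this kind is far more reliable than randomizing by hand.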

  5. SOMM: A New Service Oriented Middleware for Generic Wireless Multimedia Sensor Networks Based on Code Mobility

    PubMed Central

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource-constrained hardware and also the low-level API provided by current operating systems. The code of the resulting systems typically has no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents, and SOMM also provides a tuple space on each node which agents can use to communicate with each other. PMID:22346646

  7. Basic consensus document on late-onset hypogonadism.

    PubMed

    Becerra Fernández, Antonio; Enríquez Acosta, Luis

    2008-01-01

    One of the most important elements in men's lives is the ability to engage in normal sexual activity; loss of this ability has always been considered especially important. The relationship between sexual activity, as well as other masculine characteristics, and the testicles has been well known since ancient times and has been related to the slow decrease in testosterone secretion with advanced age. Male hypogonadism is one of the most frequent and under-diagnosed endocrine diseases. Several terms have been proposed to refer to clinical situations caused by the age-related decline in male gonadal function; currently, the most widely accepted term is late-onset hypogonadism (LOH). LOH is a clinical and biochemical syndrome associated with advanced age in men, characterized by typical symptoms and reduced serum testosterone concentrations, which can affect multiple organs and systems and reduce quality of life. This syndrome can be treated and the alterations produced can be reversed. To achieve this, a diagnostic protocol that addresses the multiple factors related to the risks and benefits of treatment is required. Copyright © 2008 Sociedad Española de Endocrinología y Nutrición. Published by Elsevier Espana. All rights reserved.

  8. Children's comprehension monitoring of multiple situational dimensions of a narrative.

    PubMed

    Wassenburg, Stephanie I; Beker, Katinka; van den Broek, Paul; van der Schoot, Menno

    Narratives typically consist of information on multiple aspects of a situation. In order to successfully create a coherent representation of the described situation, readers are required to monitor all these situational dimensions during reading. However, little is known about whether these dimensions differ in the ease with which they can be monitored. In the present study, we examined whether children in Grades 4 and 6 monitor four different dimensions (i.e., emotion, causation, time, and space) during reading, using a self-paced reading task containing inconsistencies. Furthermore, to explore what causes failure in inconsistency detection, we differentiated between monitoring processes related to availability and validation of information by manipulating the distance between two pieces of conflicting information. The results indicated that the monitoring processes varied as a function of dimension. Children were able to validate emotional and causal information when it was still active in working memory, but this was not the case for temporal and spatial information. When context and target information were more distant from each other, only emotionally charged information remained available for further monitoring processes. These findings show that the influence of different situational dimensions should be taken into account when studying children's reading comprehension.

  9. Multiple laser pulses in conjunction with an optical clearing agent to improve the curative effect of cutaneous vascular lesions.

    PubMed

    Ma, Jun; Chen, Bin; Li, Dong; Zhang, Yue; Ying, Zhaoxia

    2018-03-14

    Port-wine stain (PWS) birthmark is a congenital microvascular malformation of the skin. A 1064-nm Nd:YAG laser can achieve deeper treatment, but the weak absorption by blood limits its clinical application. Multiple laser pulses (MLPs) are a potential solution to enhance the curative effect of an Nd:YAG laser. To reduce the pulse number (pn) required for the thermal destruction of the blood vessel, the effect of glucose in conjunction with MLP was investigated. In vivo experiments were performed on a dorsal skin chamber model. Different concentrations (20, 25, 30, and 40%) of glucose were applied to the sub-dermal side of the hamster skin before laser irradiation. Identical vessels with diameters of 200 ± 30 and 110 ± 20 μm were chosen as representatives of typical PWS vessels. Instant thermal responses of the blood vessel were recorded by a high-speed camera. The required pn for blood vessel damage was compared with that without glucose pretreatment. Results showed that the use of glucose at a concentration of 20% combined with the MLP Nd:YAG laser to damage blood vessels is more appropriate, because severe hemorrhage or carbonization readily appeared in blood vessels at the higher glucose concentrations of 25, 30, and 40%. When 20% glucose is applied to the sub-dermal hamster skin, the required pn for blood vessel damage can be significantly decreased for different power densities. For example, pn can be reduced by 40% when the power density is 57 J/cm². In addition, generation of cavitation and bubbles in blood vessels is difficult upon pretreatment with glucose. The combination of glucose with the MLP Nd:YAG laser could be an effective protocol for reducing the pn required for blood vessel damage. Randomized controlled trials (RCTs) and human trials will be conducted in the future.

  10. PVIScreen

    EPA Pesticide Factsheets

    PVIScreen extends the concepts of a prior model (BioVapor), which accounted for oxygen-driven biodegradation of multiple constituents of petroleum in the soil above the water table. Typically, the model is run 1000 times using various factors.
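    The "run the model ~1000 times with various factors" approach is a standard Monte Carlo uncertainty analysis; a toy sketch of that pattern follows, with a one-line attenuation model and parameter ranges that are purely illustrative, not PVIScreen's:

```python
import random, math

def vapor_attenuation(source_conc, decay_rate, depth):
    """Toy first-order attenuation of vapor concentration with depth."""
    return source_conc * math.exp(-decay_rate * depth)

rng = random.Random(42)
results = []
for _ in range(1000):
    conc = rng.uniform(50.0, 150.0)   # source concentration (hypothetical range)
    k = rng.uniform(0.5, 2.0)         # biodegradation-driven decay rate
    results.append(vapor_attenuation(conc, k, depth=3.0))

results.sort()
p5, p95 = results[50], results[950]
print(p5, p95)   # an uncertainty band instead of a single deterministic answer
```

    Reporting a percentile band rather than one deterministic output is the point of running the model many times over sampled input factors.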

  11. In Pursuit of Neurophenotypes: The Consequences of Having Autism and a Big Brain

    PubMed Central

    Amaral, David G.; Li, Deana; Libero, Lauren; Solomon, Marjorie; Van de Water, Judy; Mastergeorge, Ann; Naigles, Letitia; Rogers, Sally; Nordahl, Christine Wu

    2017-01-01

    A consensus has emerged that despite common core features, autism spectrum disorder (ASD) has multiple etiologies and various genetic and biological characteristics. The fact that there are likely to be subtypes of ASD has complicated attempts to develop effective therapies. The UC Davis MIND Institute Autism Phenome Project is a longitudinal, multidisciplinary analysis of children with autism and age-matched typically developing controls; nearly 400 families are participating in this study. The overarching goal is to gather sufficient biological, medical, and behavioral data to allow definition of clinically meaningful subtypes of ASD. One reasonable hypothesis is that different subtypes of autism will demonstrate different patterns of altered brain organization or development, i.e., different neurophenotypes. In this Commentary, we discuss one neurophenotype that is defined by megalencephaly, or having brain size that is large and disproportionate to body size. We have found that 15% of the boys with autism demonstrate this neurophenotype, though it is far less common in girls. We review behavioral and medical characteristics of the large-brained group of boys with autism in comparison to those with typically sized brains. While brain size in typically developing individuals is positively correlated with cognitive function, the children with autism and larger brains have more severe disabilities and poorer prognosis. This research indicates that phenotyping in autism, like genotyping, requires a very substantial cohort of subjects. Moreover, since brain and behavior relationships may emerge at different times during development, this effort highlights the need for longitudinal analyses to carry out meaningful phenotyping. PMID:28239961

  12. Recirculation of Laser Power in an Atomic Fountain

    NASA Technical Reports Server (NTRS)

    Enzer, Daphna G.; Klipstein, WIlliam M.; Moore, James D.

    2007-01-01

    A new technique for laser-cooling atoms in a cesium atomic fountain frequency standard relies on recirculation of laser light through the atom-collection region of the fountain. The recirculation, accomplished by means of reflections from multiple fixed beam-splitter cubes, is such that each of two laser beams makes three passes. As described below, this recirculation scheme offers several advantages over prior designs, including simplification of the laser system, greater optical power throughput, fewer optical and electrical connections, and simplification of beam power balancing. A typical laser-cooled cesium fountain requires the use of six laser beams arranged as three orthogonal pairs of counter-propagating beams to decelerate the atoms and hold them in a three-dimensional optical trap in vacuum. Typically, these trapping/cooling beams are linearly polarized and are positioned and oriented so that (1) counter-propagating beams in each pair have opposite linear polarizations and (2) three of the six orthogonal beams have the sum of their propagation directions pointing up, while the other three have the sum of their propagation directions pointing down. In a typical prior design, two lasers are used - one to generate the three "up" beams, the other to generate the three "down" beams. For this purpose, the output of each laser is split three ways, then the resulting six beams are delivered to the vacuum system, independently of each other, via optical fibers. The present recirculating design also requires two lasers, but the beams are not split before delivery. Instead, only one "up" beam and one oppositely polarized "down" beam are delivered to the vacuum system, and each of these beams is sent through the collection region three times. The polarization of each beam on each pass through the collection region is set up to yield the same combination of polarization and propagation directions as described above. 
In comparison with the prior design, the present recirculating design utilizes the available laser light more efficiently, making it possible to trap more atoms at a given laser power or the same number of atoms at a lower laser power. The present design is also simpler in that it requires fewer optical fibers, fiber couplings, and collimators, and fewer photodiodes for monitoring beam powers. Additionally, the present design alleviates the difficulty of maintaining constant ratios among power levels of the beams within each "up" or "down" triplet.

  13. Mathematics anxiety in children with developmental dyscalculia.

    PubMed

    Rubinsten, Orly; Tannock, Rosemary

    2010-07-15

    Math anxiety, defined as a negative affective response to mathematics, is known to have deleterious effects on math performance in the general population. However, the assumption that math anxiety is directly related to math performance has not yet been validated. Thus, our primary objective was to investigate the effects of math anxiety on numerical processing in children with specific deficits in the acquisition of math skills (developmental dyscalculia; DD) by using a novel affective priming task as an indirect measure. Participants (12 children with DD and 11 typically developing peers) completed a novel priming task in which an arithmetic equation was preceded by one of four types of priming words (positive, neutral, negative, or related to mathematics). Children were required to indicate whether the equation (simple math facts based on addition, subtraction, multiplication or division) was true or false. Typically, people respond to target stimuli more quickly after presentation of an affectively related prime than after one that is affectively unrelated. Participants with DD responded faster to targets that were preceded by both negative primes and math-related primes; a reversed pattern was present in the control group. These results reveal a direct link between emotions, arithmetic, and low achievement in math. It is also suggested that arithmetic-affective priming might be used as an indirect measure of math anxiety.

  14. Simplex turbopump design

    NASA Technical Reports Server (NTRS)

    Marsh, Matt; Cowan, Penny

    1994-01-01

    Turbomachinery used in liquid rocket engines is typically composed of complex geometries made from high strength-to-weight superalloys and has long design and fabrication cycle times (3 to 5 years). A simple, low-cost turbopump is being designed in-house to demonstrate the ability to reduce the overall cost to $500K and compress the life cycle time to 18 months. The Simplex turbopump was designed to provide a discharge pressure of 1500 psia of liquid oxygen at 90 lbm/s. The turbine will be powered by gaseous oxygen. This eliminates the need for an inter-propellant seal typically required to separate the fuel-rich turbine gases from the liquid oxygen pump components. Materials used in the turbine flow paths will utilize existing characterized metals at 800 deg R that are compatible with a warm oxygen environment. This turbopump design would be suitable for integration with a 40K-pound-thrust hybrid motor that provides warm oxygen from a tap-off location to power the turbine. The preliminary and detailed analysis was completed in a year by a multiple-discipline, concurrent engineering team. Manpower, schedule, and cost data were tracked during the process for comparison to the initial goal. The Simplex hardware is in the procurement cycle, with the first test expected to occur approximately 1.5 months behind the original schedule goal.

  15. [A case of cerebral fat embolism after artificial bone replacement operation for femoral head fracture].

    PubMed

    Kontani, Satoru; Nakamura, Akinobu; Tokumi, Hiroshi; Hirose, Genjirou

    2014-01-01

    An 83-year-old woman slipped and sustained a right femoral neck fracture. Three days after the fracture, she underwent an artificial femoral head replacement operation. Immediately after surgery, she complained of chest discomfort, nausea and dyspnea. A few hours later, she became comatose. Brain CT showed no abnormality, and a clinical diagnosis of heart failure was made after enhanced chest CT excluded pulmonary embolism. Magnetic resonance imaging (MRI) of the brain the next day showed multiple small patchy hyperintense lesions in both hemispheres on diffusion-weighted images (DWI), producing a "star field pattern". Based on the criteria of Gurd, this patient had one major criterion and four minor criteria, and according to the criteria of Schonfeld, she scored 5 points, consistent with a clinical diagnosis of fat embolism. She was therefore diagnosed with cerebral fat embolism syndrome. We started supportive care and edaravone. Her condition partially recovered two weeks after surgery but remained stuporous even six months after surgery. We experienced a typical case of cerebral fat embolism after bone surgery, with diagnostic findings on MRI-DWI. Diagnosis of cerebral fat embolism syndrome requires a history of long bone fracture and/or replacement surgery together with typical findings on MRI images, such as the "star field pattern".

  16. Acoustic metamaterials with synergetic coupling

    NASA Astrophysics Data System (ADS)

    Ma, Fuyin; Huang, Meng; Wu, Jiu Hui

    2017-12-01

    In this paper, we propose a general design concept for acoustic metamaterials that introduces a ubiquitous synergetic behavior into the design procedure, in which the structure of the design is driven by its functional requirements. Since the physical properties of the widely used, resonant-type metamaterials are mainly determined by the eigenmodes of the structure, we first introduce the design concept through the modal displacement distributions on two typical plate-type structures. Next, by employing broadband sound attenuations that involve both the insulation and absorption as the typical targets, two synergetic coupling behaviors are systematically revealed among the dense resonant modes and multi-cell. Furthermore, through plate-type multiple-cell structures assembled from nine oscillators, the design is shown to realize strong broadband attenuations with either the average sound transmission loss (STL) below 2000 Hz higher than 40 dB or the absorption approximately 0.99 in the range of 400-700 Hz wherein the average absorption below 800 Hz remains higher than 0.8. Finally, two multi-cell plate-type samples are fabricated and then used experimentally to measure the STLs in support of the proposed synergetic coupling design method. Both the computational and experimental results demonstrate that the proposed synergetic design concept could effectively initiate a design for metamaterials that offer a new degree of freedom for broadband sound attenuations.

  17. Is There Extra Cost of Institutional Care for MS Patients?

    PubMed Central

    Noyes, Katia; Bajorska, Alina; Weinstock-Guttman, Bianca

    2013-01-01

    Throughout life, patients with multiple sclerosis (MS) require increasing levels of support, rehabilitative services, and eventual skilled nursing facility (SNF) care. There are concerns that access to SNF care for MS patients is limited because of perceived higher costs of their care. This study compares the costs of caring for an MS patient versus those of a typical SNF patient. We merged SNF cost report data with the 2001–2006 Nursing Home Minimum Data Set (MDS) to calculate the percentage of MS resident-days and facility case-mix indices (CMIs). We estimated the average facility daily cost using hybrid cost functions, adjusted for facility ownership, average facility wages, CMI-adjusted number of SNF days, and percentage of MS resident-days. We describe specific characteristics of SNFs with high and low MS volumes and examine sources of variation in cost. MS patients were no more costly than typical SNF patients: a greater proportion of MS patients had no significant effect on facility daily costs (P = 0.26). MS patients were more likely to receive care in government-owned facilities (OR = 1.904) located in the Western (OR = 2.133) and Midwestern (OR = 1.3) parts of the USA (P < 0.05). Cost of SNF care is not a likely explanation for the perceived access barriers that MS patients face. PMID:24163769

  18. Diffuse-Illumination Systems for Growing Plants

    NASA Technical Reports Server (NTRS)

    May, George; Ryan, Robert

    2010-01-01

    Agriculture in both terrestrial and space-controlled environments relies heavily on artificial illumination for efficient photosynthesis. Plant-growth illumination systems require high photon flux in the spectral range corresponding with plant photosynthetic active radiation (PAR) (400-700 nm), high spatial uniformity to promote uniform growth, and high energy efficiency to minimize electricity usage. The proposed plant-growth system takes advantage of the highly diffuse reflective surfaces on the interior of a sphere, hemisphere, or other nearly enclosed structure that is coated with highly reflective materials. This type of surface and structure uniformly mixes discrete light sources to produce highly uniform illumination. Multiple reflections from within the domelike structures are exploited to obtain diffuse illumination, which promotes the efficient reuse of photons that have not yet been absorbed by plants. The highly reflective surfaces encourage only the plant tissue (placed inside the sphere or enclosure) to absorb the light. Discrete light sources, such as light emitting diodes (LEDs), are typically used because of their high efficiency, wavelength selection, and electronically dimmable properties. The light sources are arranged to minimize shadowing and to improve uniformity. Different wavelengths of LEDs (typically blue, green, and red) are used for photosynthesis. Wavelengths outside the PAR range can be added for plant diagnostics or for growth regulation.

  19. QUICR-learning for Multi-Agent Coordination

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2006-01-01

    Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t′ > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
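
    The combination the abstract describes can be sketched in a few lines: a difference-style counterfactual reward (system reward minus the reward the system would have earned without the agent's action) fed into a standard tabular Q-update. This is an illustrative sketch only, not the authors' implementation; the function names, the toy state space, and all constants are hypothetical.

```python
from collections import defaultdict

def counterfactual_reward(global_reward, reward_without_agent):
    """Difference-style counterfactual reward: credit the agent with the
    change in system reward attributable to its action."""
    return global_reward - reward_without_agent

def q_update(Q, state, action, next_state, actions, reward,
             alpha=0.1, gamma=0.95):
    """One-step tabular Q-learning update, applied to the immediate
    counterfactual reward rather than the raw global reward."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

# Toy usage: one agent, two states, two actions.
Q = defaultdict(float)
actions = [0, 1]
r = counterfactual_reward(global_reward=10.0, reward_without_agent=7.0)
q_update(Q, state=0, action=1, next_state=1, actions=actions, reward=r)
print(round(Q[(0, 1)], 3))  # 0.3
```

    The point of the counterfactual term is that it isolates agent i's contribution (the second credit assignment problem) while the temporal difference target still spreads credit over time (the first).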

  20. Nature of multiple-nucleus cluster galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merritt, D.

    1984-05-01

    In models for the evolution of galaxy clusters which include dynamical friction with the dark binding matter, the distribution of galaxies becomes more concentrated to the cluster center with time. In a cluster like Coma, this evolution could increase by a factor of approximately 3 the probability of finding a galaxy very close to the cluster center, without decreasing the typical velocity of such a galaxy significantly below the cluster mean. Such an enhancement is roughly what is needed to explain the large number of first-ranked cluster galaxies which are observed to have extra "nuclei"; it is also consistent with the high velocities typically measured for these "nuclei." Unlike the cannibalism model, this model predicts that the majority of multiple-nucleus systems are transient phenomena, and not galaxies in the process of merging.

  1. A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Harris, C. S.

    1990-01-01

    A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/Constant AirSpeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.
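
    Monte Carlo generation of entry-point times, as mentioned above, can be sketched by sampling random interarrival intervals and accumulating them. This is a minimal illustrative sketch: the exponential interarrival assumption, the mean spacing, and the function name are hypothetical, since the FMEM's actual sampling scheme is not described in the abstract.

```python
import random

def entry_times(n_aircraft, mean_interarrival_min=2.5, seed=42):
    """Monte Carlo entry-point times: cumulative sums of randomly
    sampled (here exponential) interarrival intervals in minutes."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_aircraft):
        t += rng.expovariate(1.0 / mean_interarrival_min)
        times.append(round(t, 2))
    return times

times = entry_times(5)
print(times)  # five non-decreasing arrival times, e.g. for a 5-aircraft stream
```

    Repeating such draws over many runs yields the distribution of conflict counts and severities that the sensitivity analysis compares across descent strategies.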

  2. Multiple bronchoceles in a non-asthmatic patient with allergic bronchopulmonary aspergillosis.

    PubMed

    Amin, Muhammad Umar; Mahmood, Rabia

    2008-09-01

    Allergic bronchopulmonary aspergillosis (ABPA) is a hypersensitivity reaction to a fungus, Aspergillus fumigatus. It is typically seen in patients with long-standing asthma. Our patient was a non-asthmatic 18-year-old male who presented with chronic cough for 2 years. Peripheral blood eosinophilia and elevated serum IgE were observed. His chest X-ray revealed a v-shaped opacity in the left upper lobe close to the hilum. A high-resolution computed tomographic scan of the chest revealed multiple dilated bronchi filled with mucus (bronchoceles) and central bronchiectasis (CB) involving the main segmental bronchi. Central bronchiectasis is typical of ABPA, but bronchocele formation is a rare manifestation of the disease. The patient was managed with oral prednisolone and was relieved of his symptoms. Occurrence of ABPA in non-asthmatics is very rare and deserves reporting.

  3. Multiple pinhole collimator based X-ray luminescence computed tomography

    PubMed Central

    Zhang, Wei; Zhu, Dianwen; Lun, Michael; Li, Changqing

    2016-01-01

    X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality, which is able to improve the spatial resolution of optical imaging to hundreds of micrometers for deep targets by using superfine X-ray pencil beams. However, due to the low X-ray photon utilization efficiency in a single pinhole collimator based XLCT, it takes a long time to acquire measurement data. Herein, we propose a multiple pinhole collimator based XLCT, in which multiple X-ray beams are generated to scan a sample at multiple positions simultaneously. Compared with the single pinhole based XLCT, the multiple X-ray beam scanning method requires much less measurement time. Numerical simulations and phantom experiments have been performed to demonstrate the feasibility of the multiple X-ray beam scanning method. In one numerical simulation, we used four X-ray beams to scan a cylindrical object with 6 deeply embedded targets. With measurements from 6 angular projections, all 6 targets have been reconstructed successfully. In the phantom experiment, we generated two X-ray pencil beams with a collimator manufactured in-house. Two capillary targets with 0.6 mm edge-to-edge distance embedded in a cylindrical phantom have been reconstructed successfully. With the two beam scanning, we reduced the data acquisition time by 50%. From the reconstructed XLCT images, we found that the Dice similarity of targets is 85.11% and the distance error between two targets is less than 3%. We have measured the radiation dose during XLCT scan and found that the radiation dose, 1.475 mSv, is in the range of a typical CT scan. We have measured the changes of the collimated X-ray beam size and intensity at different distances from the collimator. We have also studied the effects of beam size and intensity in the reconstruction of XLCT. PMID:27446686
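
    The Dice similarity quoted above (85.11%) is a standard overlap metric between reconstructed and ground-truth target regions. A minimal sketch of how it is typically computed over binary masks follows; the function name and the toy masks are illustrative, not taken from the paper.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Toy reconstruction vs. ground-truth target masks.
truth = np.array([[0, 1, 1], [0, 1, 0]])
recon = np.array([[0, 1, 0], [1, 1, 0]])
print(dice_similarity(truth, recon))  # 0.6666666666666666
```

    A Dice value of 1.0 means perfect overlap; values near 0.85, as reported, indicate the reconstructed targets largely coincide with the true ones.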

  4. How students process equations in solving quantitative synthesis problems? Role of mathematical complexity in students' mathematical performance

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-12-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics major students from a second year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.

  5. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

    Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging/light-radar systems, referred to as LIDAR).

  6. Adaptive cyber-attack modeling system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Dougherty, Edward T.

    2006-05-01

    The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides an ample opportunity not only for the nefarious exploits of lone wolf computer hackers, but for more systematic software attacks from organized entities. Much effort and focus has been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address protection of mission-critical software. Typical evaluation and verification and validation (V&V) of software protection techniques and methodologies involves the use of a team of subject matter experts (SMEs) to mimic potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the multiple non-subjective analyses required to support quantifying software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation, and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition is supported via BN evidence propagation and model parameter learning.
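
    BN evidence propagation, in its simplest single-node form, reduces to Bayes' rule: observing evidence updates the belief in a hypothesis node. The sketch below is a toy illustration of that mechanism only; the node semantics, prior, and likelihoods are invented for the example and are not from the paper's model library.

```python
def posterior(prior, likelihood_true, likelihood_false):
    """Single-node Bayesian update: P(H | e) = P(e | H) P(H) /
    [P(e | H) P(H) + P(e | not H) P(not H)]."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1.0 - prior))

# Hypothetical attack-model node: H = "protection bypassed",
# e = "anomalous call sequence observed".
prior = 0.05            # assumed base rate of a bypass
p_e_given_h = 0.9       # evidence is likely if protection is bypassed
p_e_given_not_h = 0.1   # assumed false-positive rate
print(round(posterior(prior, p_e_given_h, p_e_given_not_h), 4))  # 0.3214
```

    In a full BN attack model the same update is propagated through a graph of conditional dependencies, and the conditional probability tables themselves can be refined on-line via parameter learning, as the abstract describes.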

  7. Gorlin-Goltz syndrome

    PubMed Central

    Kohli, Munish; Kohli, Monica; Sharma, Naresh; Siddiqui, Saif Rauf; Tulsi, S.P.S.

    2010-01-01

    Gorlin-Goltz syndrome is an inherited autosomal dominant disorder with complete penetrance and extremely variable expressivity. The authors present a case of an 11-year-old girl with typical features of Gorlin-Goltz syndrome, with special respect to the medical and dental problems, which include multiple bony cage deformities such as spina bifida with scoliosis convex to the left side, the presence of an infantile uterus, and multiple odontogenic keratocysts in the maxillofacial region. PMID:22442551

  8. The Effects of Direct Instruction Flashcard and Math Racetrack Procedures on Mastery of Basic Multiplication Facts by Three Elementary School Students

    ERIC Educational Resources Information Center

    Skarr, Adam; Zielinski, Katie; Ruwe, Kellen; Sharp, Hannah; Williams, Randy L.; McLaughlin, T. F.

    2014-01-01

    The purpose of this study was to determine if a typical third-grade boy and fifth-grade girl and a boy with learning disabilities could benefit from the combined use of Direct Instruction (DI) flashcard and math racetrack procedures in an after-school program. The dependent variable was accuracy and fluency of saying basic multiplication facts. A…

  9. Cerebral Metastases of Lung Cancer Mimicking Multiple Ischaemic Lesions - A Case Report and Review of Literature.

    PubMed

    Zacharzewska-Gondek, Anna; Maksymowicz, Hanna; Szymczyk, Małgorzata; Sąsiadek, Marek; Bladowska, Joanna

    2017-01-01

    Restricted diffusion found on magnetic resonance diffusion-weighted imaging (DWI) typically indicates acute ischaemic stroke. However, restricted diffusion can also occur in other diseases, such as metastatic brain tumours, as we describe in this case report. A 57-year-old male, with a diagnosis of small-cell cancer of the right lung (microcellular anaplastic carcinoma), was admitted with focal neurological symptoms. Initial brain MRI revealed multiple, disseminated lesions that were hyperintense on T2-weighted images and did not enhance after contrast administration; notably, some lesions manifested restricted diffusion on DWI images. Based on these findings, disseminated ischaemic lesions were diagnosed. On follow-up MRI performed after 2 weeks, we observed enlargement of the lesions; there were multiple, disseminated, sharply outlined, contrast-enhancing, oval foci with persistent restriction of diffusion. We diagnosed the lesions as disseminated brain metastases due to lung cancer. To our knowledge, this is the first description of a patient with brain metastases that were characterised by restricted diffusion and no contrast enhancement. Multiple, disseminated brain lesions that are characterised by restricted diffusion on DWI typically indicate acute or hyperacute ischaemic infarcts; however, they can also be due to hypercellular metastases, even if no contrast enhancement is observed. This latter possibility should be considered particularly in patients with cancer.

  10. Lunar Atmosphere and Dust Environment Explorer Integration and Test

    NASA Technical Reports Server (NTRS)

    Wright, Michael R.; McCormick, John L.; Hoffman, Richard G.

    2010-01-01

    Integration and test (I&T) of the Lunar Atmosphere and Dust Environment Explorer (LADEE) is presented. A collaborative NASA project between Goddard Space Flight Center and Ames Research Center, LADEE's mission is to explore the low lunar orbit environment and exosphere for constituents. Its instruments include two spectrometers, a dust detector, and a laser communication technology demonstration. Although a relatively low-cost spacecraft, LADEE has I&T requirements typical of most planetary probes, such as prelaunch contamination control, sterilization, and instrument calibration. To lead to a successful mission, I&T at the spacecraft, instrument, and observatory level must include step-by-step and end-to-end functional, environmental, and performance testing. Due to its compressed development schedule, LADEE I&T planning requires adjusting test flows and sequences to account for long-lead critical-path items and limited spares. A protoflight test-level strategy is also baselined. However, the program benefits from having two independent but collaborative teams of engineers, managers, and technicians that have a wealth of flight project experience. This paper summarizes the LADEE I&T planning, flow, facilities, and probe-unique processes. Coordination of requirements and approaches to I&T when multiple organizations are involved is discussed. Also presented are cost-effective approaches to I&T that are transferable to most any spaceflight project I&T program.

  11. Design, Integration, Certification and Testing of the Orion Crew Module Propulsion System

    NASA Technical Reports Server (NTRS)

    McKay, Heather; Freeman, Rich; Cain, George; Albright, John D.; Schoenberg, Rich; Delventhal, Rex

    2014-01-01

    The Orion Multipurpose Crew Vehicle (MPCV) is NASA's next generation spacecraft for human exploration of deep space. Lockheed Martin is the prime contractor for the design, development, qualification and integration of the vehicle. A key component of the Orion Crew Module (CM) is the Propulsion Reaction Control System, a high-flow hydrazine system used during re-entry to orient the vehicle for landing. The system consists of a completely redundant helium (GHe) pressurization system and hydrazine fuel system with monopropellant thrusters. The propulsion system has been designed, integrated, and qualification tested in support of the Orion program's first orbital flight test, Exploration Flight Test One (EFT-1), scheduled for 2014. A subset of the development challenges and lessons learned from this first flight test campaign will be discussed in this paper for consideration when designing future spacecraft propulsion systems. The CONOPS and human rating requirements of the CM propulsion system are unique when compared with a typical satellite propulsion reaction control system. The system requires a high maximum fuel flow rate. It must operate at both vacuum and sea level atmospheric pressure conditions. In order to meet Orion's human rating requirements, multiple parts of the system must be redundant, and capable of functioning after spacecraft system fault events.

  12. GreenLight Model 960.

    PubMed

    Fernandes, Richard; Carey, Conn; Hynes, James; Papkovsky, Dmitri

    2013-01-01

    The importance of food safety has resulted in a demand for a more rapid, high-throughput method for total viable count (TVC). The industry standard for TVC determination (ISO 4833:2003) is widely used but presents users with some drawbacks. The method is materials- and labor-intensive, requiring multiple agar plates per sample. More importantly, the method is slow, with 72 h typically required for a definitive result. Luxcel Biosciences has developed the GreenLight Model 960, a microtiter plate-based assay providing a rapid high-throughput method of aerobic bacterial load assessment through analysis of microbial oxygen consumption. Results are generated in 1-12 h, depending on microbial load. The mix and measure procedure allows rapid detection of microbial oxygen consumption and equates oxygen consumption to microbial load (CFU/g), providing a simple, sensitive means of assessing the microbial contamination levels in foods (1). As bacteria in the test sample grow and respire, they deplete O2, which is detected as an increase in the GreenLight probe signal above the baseline level (2). The time required to reach this increase in signal can be used to calculate the CFU/g of the original sample, based on a predetermined calibration. The higher the initial microbial load, the earlier this threshold is reached (1).
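
    The relation described above, where a higher initial load reaches the signal threshold earlier, is commonly modeled in growth-based assays as a log-linear calibration between time-to-threshold and log10(CFU/g). The sketch below illustrates that idea only; the function name, slope, and intercept are purely hypothetical and are not Luxcel's predetermined calibration.

```python
def cfu_from_threshold_time(tt_hours, slope=-0.55, intercept=7.5):
    """Illustrative log-linear calibration: log10(CFU/g) decreases
    linearly with time-to-threshold (higher load -> earlier signal)."""
    log10_cfu = intercept + slope * tt_hours
    return 10 ** log10_cfu

# An earlier threshold crossing implies a higher estimated load.
early, late = cfu_from_threshold_time(2.0), cfu_from_threshold_time(8.0)
print(f"2 h -> {early:.3g} CFU/g, 8 h -> {late:.3g} CFU/g")
```

    With a calibration of this shape, a sample crossing the threshold within the first hours would correspond to a heavily contaminated sample, while a clean sample may take the full 12 h window (or never cross), which matches the 1-12 h result window quoted in the abstract.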

  13. Evaluation of the concept of pressure proof testing fuselage structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Orringer, Oscar

    1991-01-01

    The FAA and NASA have recently completed independent technical evaluations of the concept of pressure proof testing the fuselage of commercial transport airplanes. The results of these evaluations are summarized. The objectives of the evaluations were to establish the potential benefit of the pressure proof test, to quantify the most desirable proof test pressure, and to quantify the required proof test interval. The focus of the evaluations was on multiple-site cracks extending from adjacent rivet holes of a typical fuselage longitudinal lap splice joint. The FAA and NASA do not support pressure proof testing the fuselage of aging commercial transport aircraft. The argument against proof testing is as follows: (1) a single proof test does not ensure an indefinite life; therefore, the proof test must be repeated at regular intervals; (2) for a proof factor of 1.33, the required proof test interval must be below 300 flights to account for uncertainties in the evaluation; (3) conducting the proof test at a proof factor of 1.5 would considerably exceed the fuselage design limit load and is therefore not consistent with accepted safe practices; and (4) better safety can be assured by implementing enhanced nondestructive inspection requirements, and adequate reliability can be achieved by an inspection interval several times longer than the proof test interval.

  14. An assessment of monitoring requirements and costs of 'Reduced Emissions from Deforestation and Degradation'

    PubMed Central

    Böttcher, Hannes; Eisbrenner, Katja; Fritz, Steffen; Kindermann, Georg; Kraxner, Florian; McCallum, Ian; Obersteiner, Michael

    2009-01-01

    Background Negotiations on a future climate policy framework addressing Reduced Emissions from Deforestation and Degradation (REDD) are ongoing. Regardless of how such a framework will be designed, many technical solutions for estimating forest cover and forest carbon stock change exist to support policy in monitoring and accounting. These technologies typically combine remotely sensed data with ground-based inventories. In this article we assess the costs of monitoring REDD based on available technologies and the requirements associated with key elements of REDD policy. Results We find that the design of a REDD policy framework (and specifically its rules) can have a significant impact on monitoring costs. Costs may vary from 0.5 to 550 US$ per square kilometre depending on the required precision of carbon stock and area change detection. Moreover, they follow economies of scale, i.e., single-country or project-level solutions will face relatively higher monitoring costs. Conclusion Although monitoring costs are relatively small compared to other cost items within a REDD system, they should be shared not only among countries but also among sectors, because an integrated monitoring system would have multiple benefits for non-REDD management. Overcoming initialization costs and unequal access to monitoring technologies is crucial for implementation of an integrated monitoring system, and demands international cooperation. PMID:19709413

  15. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.

  16. Changing the Game: Using Integrative Genomics to Probe Virulence Mechanisms of the Stem Rust Pathogen Puccinia graminis f. sp. tritici.

    PubMed

    Figueroa, Melania; Upadhyaya, Narayana M; Sperschneider, Jana; Park, Robert F; Szabo, Les J; Steffenson, Brian; Ellis, Jeff G; Dodds, Peter N

    2016-01-01

    The recent resurgence of wheat stem rust caused by new virulent races of Puccinia graminis f. sp. tritici (Pgt) poses a threat to food security. These concerns have catalyzed an extensive global effort toward controlling this disease. Substantial research and breeding programs target the identification and introduction of new stem rust resistance (Sr) genes in cultivars for genetic protection against the disease. Such resistance genes typically encode immune receptor proteins that recognize specific components of the pathogen, known as avirulence (Avr) proteins. A significant drawback to deploying cultivars with single Sr genes is that they are often overcome by evolution of the pathogen to escape recognition through alterations in Avr genes. Thus, a key element in achieving durable rust control is the deployment of multiple effective Sr genes in combination, either through conventional breeding or transgenic approaches, to minimize the risk of resistance breakdown. In this situation, evolution of pathogen virulence would require changes in multiple Avr genes in order to bypass recognition. However, choosing the optimal Sr gene combinations to deploy is a challenge that requires detailed knowledge of the pathogen Avr genes with which they interact and the virulence phenotypes of Pgt existing in nature. Identifying specific Avr genes from Pgt will provide screening tools to enhance pathogen virulence monitoring, assess heterozygosity and propensity for mutation in pathogen populations, and confirm individual Sr gene functions in crop varieties carrying multiple effective resistance genes. Toward this goal, much progress has been made in assembling a high-quality reference genome sequence for Pgt, as well as a pan-genome encompassing variation between multiple field isolates with diverse virulence spectra. In turn, this has allowed prediction of Pgt effector gene candidates based on known features of Avr genes in other plant pathogens, including the related flax rust fungus. Upregulation of gene expression in haustoria and evidence for diversifying selection are two useful parameters for identifying candidate Avr genes. Recently, we have also applied machine learning approaches to agnostically predict candidate effectors. Here, we review progress in stem rust pathogenomics and the approaches currently underway to identify Avr genes recognized by wheat Sr genes.

  17. 16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...

  18. 16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...

  19. Collaborative Problem Solving in Young Typical Development and HFASD

    ERIC Educational Resources Information Center

    Kimhi, Yael; Bauminger-Zviely, Nirit

    2012-01-01

    Collaborative problem solving (CPS) requires sharing goals/attention and coordinating actions--all deficient in HFASD. Group differences were examined in CPS (HFASD/typical), with a friend versus with a non-friend. Participants included 28 HFASD and 30 typical children aged 3-6 years and their 58 friends and 58 non-friends. Groups were matched on…

  20. 16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...

  1. 16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...

  2. 16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...

  3. 14 CFR Appendix C to Part 1215 - Typical User Activity Timeline

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Typical User Activity Timeline C Appendix C... RELAY SATELLITE SYSTEM (TDRSS) Pt. 1215, App. C Appendix C to Part 1215—Typical User Activity Timeline... mission model. 3 years before launch (Ref. § 1215.109(c). Submit general user requirements to permit...

  4. Emotional and behavioural problems in children with visual impairment, intellectual and multiple disabilities.

    PubMed

    Alimovic, S

    2013-02-01

    Children with multiple impairments have more complex developmental problems than children with a single impairment. We compared children, aged 4 to 11 years, with intellectual disability (ID) and visual impairment to children with single ID, single visual impairment and typical development on the 'Child Behavior Check List/4-18' (CBCL/4-18), Parent Report. Children with ID and visual impairment had more emotional and behavioural problems than the other groups of children, those with a single impairment and those with typical development (F = 23.81; d.f.1/d.f.2 = 3/156; P < 0.001). All children with special needs had more emotional and behavioural problems than children with typical development. The largest difference was found in the attention problems syndrome (F = 30.45; d.f.1/d.f.2 = 3/156; P < 0.001), where all groups of children with impairments had more problems. Children with visual impairment, with and without ID, had more somatic complaints than children with normal vision. Intellectual disability had a greater influence than visual impairment on the prevalence and kind of emotional and behavioural problems in children. © 2012 The Author. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  5. Will a category cue attract you? Motor output reveals dynamic competition across person construal.

    PubMed

    Freeman, Jonathan B; Ambady, Nalini; Rule, Nicholas O; Johnson, Kerri L

    2008-11-01

    People use social categories to perceive others, extracting category cues to glean membership. Growing evidence for continuous dynamics in real-time cognition suggests, contrary to prevailing social psychological accounts, that person construal may involve dynamic competition between simultaneously active representations. To test this, the authors examined social categorization in real-time by streaming the x, y coordinates of hand movements as participants categorized typical and atypical faces by sex. Though judgments of atypical targets were largely accurate, online motor output exhibited a continuous spatial attraction toward the opposite sex category, indicating dynamic competition between multiple social category alternatives. The authors offer a dynamic continuity account of social categorization and provide converging evidence across categorizations of real male and female faces (containing a typical or an atypical sex-specifying cue) and categorizations of computer-generated male and female faces (with subtly morphed sex-typical or sex-atypical features). In 3 studies, online motor output revealed continuous dynamics underlying person construal, in which multiple simultaneously and partially active category representations gradually cascade into social categorical judgments. Such evidence is challenging for discrete stage-based accounts. (c) 2008 APA, all rights reserved

  6. Area variations in multiple morbidity using a life table methodology.

    PubMed

    Congdon, Peter

    Analysis of healthy life expectancy is typically based on a binary distinction between health and ill-health. By contrast, this paper considers spatial modelling of disease free life expectancy taking account of the number of chronic conditions. Thus the analysis is based on population sub-groups with no disease, those with one disease only, and those with two or more diseases (multiple morbidity). Data on health status is accordingly modelled using a multinomial likelihood. The analysis uses data for 258 small areas in north London, and shows wide differences in the disease burden related to multiple morbidity. Strong associations between area socioeconomic deprivation and multiple morbidity are demonstrated, as well as strong spatial clustering.
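
The multinomial likelihood mentioned in this record can be sketched numerically. This is a generic illustration, not the paper's spatial model: the three health states (no disease, one disease, multiple morbidity) follow the abstract, but the example counts and probabilities are invented.

```python
import math

def multinomial_loglik(counts, probs):
    """Log-likelihood of category counts under a multinomial distribution."""
    n = sum(counts)
    ll = math.lgamma(n + 1)  # log n!
    for c, p in zip(counts, probs):
        ll += c * math.log(p) - math.lgamma(c + 1)
    return ll

# Hypothetical small area: 70 residents disease-free, 20 with one
# chronic condition, 10 with multiple morbidity.
counts = [70, 20, 10]
print(multinomial_loglik(counts, [0.7, 0.2, 0.1]))   # at the observed proportions
print(multinomial_loglik(counts, [1/3, 1/3, 1/3]))   # a worse fit
```

The observed proportions maximize the likelihood, so the first printed value is the larger of the two.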

  7. Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication

    DOE PAGES

    Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...

    2015-01-01

    The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.
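
As a toy illustration of why communication cost depends on the input nonzero structure, consider row-wise parallel SpGEMM (C = A·B), where a processor must fetch row k of B whenever one of its rows of A has a nonzero in column k owned by another processor. This is only a simplified stand-in for the paper's hypergraph model; the matrix pattern and ownership below are invented.

```python
# Column indices of the nonzeros in each row of a 4x4 sparse matrix A.
A_cols = {0: {0, 2}, 1: {1}, 2: {2, 3}, 3: {0, 3}}
owner = [0, 0, 1, 1]  # processor owning each row of A (and of B)

def remote_row_fetches(A_cols, owner):
    """Count rows of B that processors must fetch from other processors."""
    fetches = 0
    for p in set(owner):
        needed = set()  # rows of B touched by p's rows of A
        for i, cols in A_cols.items():
            if owner[i] == p:
                needed |= cols
        fetches += sum(1 for k in needed if owner[k] != p)
    return fetches

print(remote_row_fetches(A_cols, owner))
```

A hypergraph partitioner would choose `owner` so as to minimize exactly this kind of cut.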

  8. Pathologic Progression, Possible Origin, and Management of Multiple Primary Intracranial Neuroendocrine Carcinomas.

    PubMed

    Cao, Jingwei; Xu, Wenzhe; Du, Zhenhui; Sun, Bin; Li, Feng; Liu, Yuguang

    2017-10-01

    Primary intracranial neuroendocrine carcinomas (NECs) are extremely rare malignant tumors, with no previous reports of multiple lesions in the literature. The clinical presentation, preoperative and reexamined magnetic resonance imaging findings, as well as histopathologic studies of a 56-year-old female subject with multiple intracranial NECs mimicking multiple intracranial meningiomas, who underwent 3 operations with left parietal craniotomy, right occipital parietal craniotomy, and left frontal craniotomy, separately and chronologically, are presented in this article. Notably, the first and second tumors were confirmed as NECs exhibiting histologic characteristics of typical anaplastic meningiomas with features of whorl formation, while the third tumor was a typical NEC with features of organoid cancer nests. In other words, the first 2 lesions were diagnosed as meningioma as opposed to NEC. It was only after the third surgery that the pathology for the first 2 cases was reviewed and the diagnosis revised. After the third surgical resection, the patient further received whole brain radiotherapy and systemic chemotherapy (temozolomide combined with YH-16). At her 10-month follow-up, the patient achieved a good outcome. Multiple primary intracranial NECs are extremely rare. The tumor might be of arachnoidal or leptomeningeal origin, with histologic patterns that might lead to transformation and/or progression. Maximal surgical resection is warranted for symptomatic mass effect. Postoperative adjuvant treatments including radiotherapy and chemotherapy should be a recommended therapeutic modality. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Patients with chronic dizziness following traumatic head injury typically have multiple diagnoses involving combined peripheral and central vestibular dysfunction.

    PubMed

    Arshad, Q; Roberts, R E; Ahmad, H; Lobo, R; Patel, M; Ham, T; Sharp, D J; Seemungal, B M

    2017-04-01

    We hypothesised that chronic vestibular symptoms (CVS) of imbalance and dizziness following traumatic head injury (THI) may relate to: (i) the occurrence of multiple simultaneous vestibular diagnoses, including both peripheral and central vestibular dysfunction, in individual patients, increasing the chance of missed diagnoses and suboptimal treatment; (ii) an impaired response to vestibular rehabilitation, since the central mechanisms that mediate rehabilitation-related brain plasticity may themselves be disrupted. We report the results of a retrospective analysis of both the comprehensive clinical and vestibular laboratory testing of 20 consecutive THI patients with prominent and persisting vestibular symptoms still present at least 6 months post THI. Individual THI patients typically had multiple vestibular diagnoses and, unique to this group of vestibular patients, often displayed both peripheral and central vestibular dysfunction. Despite expert neuro-otological management, at two years 20% of patients still had persisting vestibular symptoms. In summary, chronic vestibular dysfunction in THI could relate to: (i) the presence of multiple vestibular diagnoses, increasing the risk of 'missed' vestibular diagnoses leading to persisting symptoms; (ii) the impact of brain trauma, which may impair brain plasticity-mediated repair mechanisms. Apart from alerting physicians to the potential for multiple vestibular diagnoses in THI, future work to identify the specific deficits in brain function mediating poor recovery from post-THI vestibular dysfunction could provide the rationale for developing new therapy for head injury patients whose vestibular symptoms are resistant to treatment. Copyright © 2017. Published by Elsevier B.V.

  10. Protocell design through modular compartmentalization

    PubMed Central

    Miller, David; Booth, Paula J.; Seddon, John M.; Templer, Richard H.; Law, Robert V.; Woscholski, Rudiger; Ces, Oscar; Barter, Laura M. C.

    2013-01-01

    De novo synthetic biological design has the potential to significantly impact upon applications such as energy generation and nanofabrication. Current designs for constructing organisms from component parts are typically limited in scope, as they utilize a cut-and-paste ideology to create simple stepwise engineered protein-signalling pathways. We propose the addition of a new design element that segregates components into lipid-bound ‘proto-organelles’, which are interfaced with response elements and housed within a synthetic protocell. This design is inspired by living cells, which utilize multiple types of signalling molecules to facilitate communication between isolated compartments. This paper presents our design and validation of the components required for a simple multi-compartment protocell machine, for coupling a light transducer to a gene expression system. This represents a general design concept for the compartmentalization of different types of artificial cellular machinery and the utilization of non-protein signal molecules for signal transduction. PMID:23925982

  11. Protocell design through modular compartmentalization.

    PubMed

    Miller, David; Booth, Paula J; Seddon, John M; Templer, Richard H; Law, Robert V; Woscholski, Rudiger; Ces, Oscar; Barter, Laura M C

    2013-10-06

    De novo synthetic biological design has the potential to significantly impact upon applications such as energy generation and nanofabrication. Current designs for constructing organisms from component parts are typically limited in scope, as they utilize a cut-and-paste ideology to create simple stepwise engineered protein-signalling pathways. We propose the addition of a new design element that segregates components into lipid-bound 'proto-organelles', which are interfaced with response elements and housed within a synthetic protocell. This design is inspired by living cells, which utilize multiple types of signalling molecules to facilitate communication between isolated compartments. This paper presents our design and validation of the components required for a simple multi-compartment protocell machine, for coupling a light transducer to a gene expression system. This represents a general design concept for the compartmentalization of different types of artificial cellular machinery and the utilization of non-protein signal molecules for signal transduction.

  12. Reconstruction of bilateral tibial aplasia and split hand-foot syndrome in a father and daughter.

    PubMed

    Al Kaissi, Ali; Ganger, Rudolf; Klaushofer, Klaus; Grill, Franz

    2014-01-01

    Tibial aplasia is of heterogeneous aetiology; the majority of reports are sporadic. We describe the reconstruction procedures in two subjects, a daughter and father, who manifested autosomal dominant (AD) inheritance of bilateral tibial aplasia and split hand-foot syndrome. Reconstruction of these patients required multiple surgical procedures, and orthoprosthesis was mandatory. The main goal of treatment was to achieve walking. Stabilization of the ankle joint by fibular-talar chondrodesis on both sides, followed by a bilateral Brown procedure at the knee-joint level, was applied accordingly. The outcome was improved function of the deformed limbs, and walking was achieved with simultaneous orthotic fitting. This is the first study encompassing the diagnosis and management of a father and daughter with bilateral tibial aplasia associated with variable split hand/foot deformity without foot ablation. Our patients showed the typical AD pattern of inheritance of split-hand/foot and tibial aplasia.

  13. Single-Axis Three-Beam Amplitude Monopulse Antenna-Signal Processing Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.

    2015-05-01

    Typically, when three or more antenna beams along a single axis are required, the answer has been multiple antenna phase-centers, essentially a phase-monopulse system. Such systems and their design parameters are well-reported in the literature. Less appreciated is that three or more antenna beams can also be generated in an amplitude-monopulse fashion. Consequently, design guidelines and performance analysis of such antennas are somewhat under-reported in the literature. We provide discussion herein of three beams arrayed in a single axis with an amplitude-monopulse configuration. Acknowledgements: The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Pathwise upper semi-continuity of random pullback attractors along the time axis

    NASA Astrophysics Data System (ADS)

    Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke

    2018-07-01

    The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(⋅)}_{t ∈ R}, with each A_t(⋅) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω), for each ω fixed, has an equivalence relationship with the uniform compactness of the local union ∪_{s ∈ I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required in order to prove the above locally uniform compactness and upper semi-continuity, in which sense the two properties appear to be general properties satisfied by a large number of real models.
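
The equivalence described in this abstract can be restated symbolically; the notation below is my reading of the abstract, not a quotation from the paper:

```latex
t \mapsto A_t(\omega)\ \text{is upper semi-continuous on } I
\quad\Longleftrightarrow\quad
\bigcup_{s \in I} A_s(\omega)\ \text{is precompact,}
\qquad \text{for each fixed } \omega \text{ and each compact } I \subset \mathbb{R}.
```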

  15. Compensating effect of the coherent synchrotron radiation in bunch compressors

    NASA Astrophysics Data System (ADS)

    Jing, Yichao; Hao, Yue; Litvinenko, Vladimir N.

    2013-06-01

    Typical bunch compression for a high-gain free-electron laser (FEL) requires a large compression ratio. Frequently, this compression is distributed in multiple stages along the beam transport line. However, for a high-gain FEL driven by an energy recovery linac (ERL), compression must be accomplished in a single strong compressor located at the beam line’s end; otherwise the electron beam would be affected severely by coherent synchrotron radiation (CSR) in the ERL’s arcs. In such a scheme, the CSR originating from the strong compressors could greatly degrade the quality of the electron beam. In this paper, we present our design for a bunch compressor that will limit the effect of CSR on the e-beam’s quality. We discuss our findings from a study of such a compressor, and detail its potential for an FEL driven by a multipass ERL developed for the electron-Relativistic Heavy Ion Collider.

  16. Effects of rooting via out-groups on in-group topology in phylogeny.

    PubMed

    Ackerman, Margareta; Brown, Daniel G; Loker, David

    2014-01-01

    Users of phylogenetic methods require rooted trees, because the direction of time depends on the placement of the root. While phylogenetic trees are typically rooted by using an out-group, this mechanism is inappropriate when the addition of an out-group changes the in-group topology. We perform a formal analysis of phylogenetic algorithms under the inclusion of distant out-groups. It turns out that linkage-based algorithms (including UPGMA) and a class of bisecting methods do not modify the topology of the in-group when an out-group is included. By contrast, the popular neighbour joining algorithm fails this property in a strong sense: every data set can have its structure destroyed by some arbitrarily distant outlier. Furthermore, including multiple outliers can lead to an arbitrary topology on the in-group. The standard rooting approach that uses out-groups may be fundamentally unsuited for neighbour joining.
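
The linkage-based invariance result can be illustrated with a minimal average-linkage (UPGMA-style) sketch; the distances and taxon labels below are invented, and this is not the authors' code.

```python
def upgma(dist):
    """Average-linkage clustering over a {frozenset({x, y}): distance} matrix.

    Returns the final cluster as nested tuples.
    """
    size = {leaf: 1 for pair in dist for leaf in pair}
    dist = dict(dist)
    while len(size) > 1:
        pair = min(dist, key=dist.get)  # closest pair of clusters
        a, b = tuple(pair)
        merged = (a, b)
        for c in list(size):
            if c in (a, b):
                continue
            # size-weighted average distance to the merged cluster
            dist[frozenset({merged, c})] = (
                dist[frozenset({a, c})] * size[a]
                + dist[frozenset({b, c})] * size[b]
            ) / (size[a] + size[b])
        dist = {p: v for p, v in dist.items() if a not in p and b not in p}
        size[merged] = size.pop(a) + size.pop(b)
    return next(iter(size))

def shape(tree):
    """Unordered topology of a nested-tuple tree."""
    return tree if isinstance(tree, str) else frozenset(shape(c) for c in tree)

ingroup = {frozenset({'A', 'B'}): 2.0,
           frozenset({'A', 'C'}): 4.0,
           frozenset({'B', 'C'}): 4.0}
with_outgroup = dict(ingroup)
for taxon in 'ABC':  # add an arbitrarily distant out-group O
    with_outgroup[frozenset({taxon, 'O'})] = 100.0

# The in-group topology ((A,B),C) is unchanged by the distant out-group.
print(shape(upgma(ingroup)))
print(shape(upgma(with_outgroup)))
```

Neighbour joining, by contrast, uses net-divergence corrections that a distant outlier can distort, which is the failure mode the abstract describes.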

  17. Secondary Use of Patients’ Electronic Records (SUPER): An Approach for Meeting Specific Data Needs of Clinical and Translational Researchers

    PubMed Central

    Sholle, Evan T.; Kabariti, Joseph; Johnson, Stephen B.; Leonard, John P.; Pathak, Jyotishman; Varughese, Vinay I.; Cole, Curtis L.; Campion, Thomas R.

    2017-01-01

    Academic medical centers commonly approach secondary use of electronic health record (EHR) data by implementing centralized clinical data warehouses (CDWs). However, CDWs require extensive resources to model data dimensions and harmonize clinical terminology, which can hinder effective support of the specific and varied data needs of investigators. We hypothesized that an approach that aggregates raw data from source systems, ignores initial modeling typical of CDWs, and transforms raw data for specific research purposes would meet investigator needs. The approach has successfully enabled multiple tools that provide utility to the institutional research enterprise. To our knowledge, this is the first complete description of a methodology for electronic patient data acquisition and provisioning that ignores data harmonization at the time of initial storage in favor of downstream transformation to address specific research questions and applications. PMID:29854228

  18. Spectrum of antimicrobial activity associated with ionic colloidal silver.

    PubMed

    Morrill, Kira; May, Kathleen; Leek, Daniel; Langland, Nicole; Jeane, La Deana; Ventura, Jose; Skubisz, Corey; Scherer, Sean; Lopez, Eric; Crocker, Ephraim; Peters, Rachel; Oertle, John; Nguyen, Krystine; Just, Scott; Orian, Michael; Humphrey, Meaghan; Payne, David; Jacobs, Bertram; Waters, Robert; Langland, Jeffrey

    2013-03-01

    Silver has historically and extensively been used as a broad-spectrum antimicrobial agent. However, the Food and Drug Administration currently does not recognize colloidal silver as a safe and effective antimicrobial agent. The goal of this study was to further evaluate the antimicrobial efficacy of colloidal silver. Several strains of bacteria, fungi, and viruses were grown under multicycle growth conditions in the presence or absence of ionic colloidal silver in order to assess the antimicrobial activity. For bacteria grown under aerobic or anaerobic conditions, significant growth inhibition was observed, although multiple treatments were typically required. For fungal cultures, the effects of ionic colloidal silver varied significantly between different genera. No viral growth inhibition was observed with any strains tested. The study data support ionic colloidal silver as a broad-spectrum antimicrobial agent against aerobic and anaerobic bacteria, while having a more limited and specific spectrum of activity against fungi.

  19. Governance of Transnational Global Health Research Consortia and Health Equity.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2016-10-01

    Global health research partnerships are increasingly taking the form of consortia of institutions from high-income countries and low- and middle-income countries that undertake programs of research. These partnerships differ from collaborations that carry out single projects in the multiplicity of their goals, scope of their activities, and nature of their management. Although such consortia typically aim to reduce health disparities between and within countries, what is required for them to do so has not been clearly defined. This article takes a conceptual approach to explore how the governance of transnational global health research consortia should be structured to advance health equity. To do so, it applies an account called shared health governance to derive procedural and substantive guidance. A checklist based on this guidance is proposed to assist research consortia determine where their governance practices strongly promote equity and where they may fall short.

  20. High extraction efficiency ultraviolet light-emitting diode

    DOEpatents

    Wierer, Jonathan; Montano, Ines; Allerman, Andrew A.

    2015-11-24

    Ultraviolet light-emitting diodes with tailored AlGaN quantum wells can achieve high extraction efficiency. For efficient bottom light extraction, parallel-polarized light is preferred, because it propagates predominantly perpendicular to the QW plane and into the typical and more efficient light escape cones. This is favored over perpendicular-polarized light, which propagates along the QW plane and requires multiple, lossy bounces before extraction. The thickness and carrier density of AlGaN QW layers have a strong influence on the valence subband structure, and on the resulting optical polarization and light extraction of ultraviolet light-emitting diodes. At Al>0.3, thinner QW layers (preferably <2.5 nm) result in light preferentially polarized parallel to the QW plane. Also, active regions consisting of six or more QWs, to reduce carrier density, and with thin barriers, to efficiently inject carriers into all the QWs, are preferred.

  1. Multiple nodes transfer alignment for airborne missiles based on inertial sensor network

    NASA Astrophysics Data System (ADS)

    Si, Fan; Zhao, Yan

    2017-09-01

    Transfer alignment is an important initialization method for airborne missiles because the alignment accuracy largely determines the performance of the missile. However, traditional alignment methods are limited by complicated and unknown flexure angles, and cannot meet actual requirements when wing flexure deformation occurs. To address this problem, we propose a new method that uses the relative navigation parameters between the weapons and the fighter to achieve transfer alignment. First, in the relative inertial navigation algorithm, the relative attitudes and positions are constantly computed in wing flexure deformation situations. Second, the alignment results of each weapon are processed using a data fusion algorithm to improve the overall performance. Finally, the feasibility and performance of the proposed method were evaluated under two typical types of deformation, and the simulation results demonstrated that the new transfer alignment method is practical and has high precision.

  2. Effective phase function of light scattered at small angles by polydisperse particulate media

    NASA Astrophysics Data System (ADS)

    Turcu, I.

    2008-06-01

    Particles with typical dimensions larger than the light wavelength and relative refractive indices close to one scatter light mainly in the forward direction, where the scattered-light intensity has a narrow peak. For particulate media satisfying these requirements, the light scattered at small angles in a far-field detection set-up can be described analytically by an effective phase function (EPF), even in the multiple-scattering regime. The EPF model, which was built for monodisperse systems, has been extended to polydisperse media. The main ingredients consist in replacing the single-particle phase function and the optical thickness with their corresponding averaged values. Using a Gamma particle size distribution (PSD) as a test model, the effect of polydispersity was systematically investigated. Increasing the average radius and/or the PSD standard deviation decreases the angular spreading of the small-angle scattered light.

  3. MonitoringResources.org—Supporting coordinated and cost-effective natural resource monitoring across organizations

    USGS Publications Warehouse

    Bayer, Jennifer M.; Scully, Rebecca A.; Weltzin, Jake F.

    2018-05-21

    Natural resource managers who oversee the Nation’s resources require data to support informed decision-making at a variety of spatial and temporal scales that often cross typical jurisdictional boundaries such as states, agency regions, and watersheds. These data come from multiple agencies, programs, and sources, often with their own methods and standards for data collection and organization. Coordinating standards and methods is often prohibitively time-intensive and expensive. MonitoringResources.org offers a suite of tools and resources that support coordination of monitoring efforts, cost-effective planning, and sharing of knowledge among organizations. The website was developed by the Pacific Northwest Aquatic Monitoring Partnership—a collaboration of Federal, state, tribal, local, and private monitoring programs—and the U.S. Geological Survey (USGS), with funding from the Bonneville Power Administration and USGS. It is a key component of a coordinated monitoring and information network.

  4. Microfluidic array platform for simultaneous lipid bilayer membrane formation.

    PubMed

    Zagnoni, M; Sandison, M E; Morgan, H

    2009-01-01

    In recent years, protein array technologies have found widespread applications in proteomics. However, new methods for high-throughput analysis of protein-protein and protein-compound interactions are still required. In this paper, an array of lipid bilayer membranes formed within a microfluidic system with integrated electrodes is presented. The system comprises three layers that are clamped together, rendering the device cleanable and reusable. The device microfluidics enable the simultaneous formation of an array of lipid bilayers using a previously developed air-exposure technique, thereby avoiding the need to manually form individual bilayers. The Ag/AgCl electrodes allow for ion channel measurements, with each of the sites independently addressable. Typically, a 50% yield in simultaneous lipid bilayer formation over 12 sites was obtained, and ion channel recordings have been acquired over multiple sites. This system has great potential for the development of an automatable platform of suspended lipid bilayer arrays.

  5. Rare occurrence of heart lesions in Pacific oysters Crassostrea gigas caused by an unknown bacterial infection.

    PubMed

    Meyer, Gary R; Lowe, Geoffrey J; Bower, Susan M

    2017-09-20

    On rare occasions, small cream-coloured cysts have been observed in the heart and pericardial cavity of Pacific oysters Crassostrea gigas from British Columbia, Canada. Histopathology revealed the presence of large colonies of bacteria (up to 800 µm in diameter) causing significant host response and hypertrophy of the heart epithelium. The causative bacteria were characterized as follows: Gram-negative, coccoid to small rod-shaped, typically <1.5 µm in size, cell walls highly endowed with surface fimbriae and division via binary fission. Although these bacteria shared some morphological characteristics with the order Rickettsiales, they did not require an intracellular existence for multiplication. Unfortunately, a cultured isolate was not available, and a retrospective attempt to further characterize the bacteria using DNA sequence analysis of a fragment from the 16S rDNA region proved to be uninformative.

  6. Product differentiation during continuous-flow thermal gradient PCR.

    PubMed

    Crews, Niel; Wittwer, Carl; Palais, Robert; Gale, Bruce

    2008-06-01

    A continuous-flow PCR microfluidic device was developed in which the target DNA product can be detected and identified during its amplification. This in situ characterization potentially eliminates the requirement for further post-PCR analysis. Multiple small targets, with sizes of 108, 122, and 134 bp, have been amplified from human genomic DNA. With a DNA dye in the PCR mixture, the amplification and unique melting behavior of each sample is observed from a single fluorescent image. The melting behavior of the amplifying DNA, which depends on its molecular composition, occurs spatially in the thermal gradient PCR device, and can be observed with an optical resolution of 0.1 degrees C pixel(-1). Since many PCR cycles are within the field of view of the CCD camera, melting analysis can be performed at any cycle that contains a significant quantity of amplicon, thereby eliminating the cycle-selection challenges typically associated with continuous-flow PCR microfluidics.

  7. Tuning and Switching Enantioselectivity of Asymmetric Carboligation in an Enzyme through Mutational Analysis of a Single Hot Spot.

    PubMed

    Wechsler, Cindy; Meyer, Danilo; Loschonsky, Sabrina; Funk, Lisa-Marie; Neumann, Piotr; Ficner, Ralf; Brodhun, Florian; Müller, Michael; Tittmann, Kai

    2015-12-01

    Enantioselective bond making and breaking is a hallmark of enzyme action, yet switching the enantioselectivity of the reaction is a difficult undertaking, and typically requires extensive screening of mutant libraries and multiple mutations. Here, we demonstrate that mutational diversification of a single catalytic hot spot in the enzyme pyruvate decarboxylase gives access to both enantiomers of acyloins acetoin and phenylacetylcarbinol, important pharmaceutical precursors, in the case of acetoin even starting from the unselective wild-type protein. Protein crystallography was used to rationalize these findings and to propose a mechanistic model of how enantioselectivity is controlled. In a broader context, our studies highlight the efficiency of mechanism-inspired and structure-guided rational protein design for enhancing and switching enantioselectivity of enzymatic reactions, by systematically exploring the biocatalytic potential of a single hot spot. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

    A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
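
The record does not define its similarity index; one common choice for segmentation overlap is the Dice coefficient, sketched below on toy voxel sets (an assumption, not necessarily the exact SI used in the paper).

```python
def dice_similarity(mask_a, mask_b):
    """Dice overlap between two segmentations given as sets of voxel coordinates."""
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}  # hypothetical algorithm mask
manual = {(0, 1), (1, 1), (2, 1)}        # hypothetical reader mask
print(dice_similarity(auto, manual))     # 2*2/(4+3)
```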

  9. Cloning of DOG1, a quantitative trait locus controlling seed dormancy in Arabidopsis.

    PubMed

    Bentsink, Leónie; Jowett, Jemma; Hanhart, Corrie J; Koornneef, Maarten

    2006-11-07

    Genetic variation for seed dormancy in nature is a typical quantitative trait controlled by multiple loci on which environmental factors have a strong effect. Finding the genes underlying dormancy quantitative trait loci is a major scientific challenge, which also has relevance for agriculture and ecology. In this study we describe the identification of the DELAY OF GERMINATION 1 (DOG1) gene previously identified as a quantitative trait locus involved in the control of seed dormancy. This gene was isolated by a combination of positional cloning and mutant analysis and is absolutely required for the induction of seed dormancy. DOG1 is a member of a small gene family of unknown molecular function, with five members in Arabidopsis. The functional natural allelic variation present in Arabidopsis is caused by polymorphisms in the cis-regulatory region of the DOG1 gene and results in considerable expression differences between the DOG1 alleles of the accessions analyzed.

  10. Development of a 402.5 MHz 140 kW Inductive Output Tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Lawrence Ives; Michael Read; Robert Jackson

    2012-05-09

    This report contains the results of Phase I of an SBIR to develop a Pulsed Inductive Output Tube (IOT) with 140 kW at 400 MHz for powering H-proton beams. A number of sources, including single beam and multiple beam klystrons, can provide this power, but the IOT provides higher efficiency. Efficiencies exceeding 70% are routinely achieved. The gain is typically limited to approximately 24 dB; however, the availability of highly efficient, solid state drivers reduces the significance of this limitation, particularly at lower frequencies. This program initially focused on developing a 402 MHz IOT; however, the DOE requirement for this device was terminated during the program. The SBIR effort was refocused on improving the IOT design codes to more accurately simulate the time dependent behavior of the input cavity, electron gun, output cavity, and collector. Significant improvement was achieved in modeling capability and simulation accuracy.

  11. Are referring doctors ready for enterprise and community wide immediate image and report access?

    PubMed

    Wadley, Brian D; Hayward, Ulrike; Trambert, Michael; Kywi, Alberto; Hartzman, Steve

    2002-01-01

    At most medical centers film-based radiology requires that single or multiple copies of patient exams and reports be distributed for results communication. A successful picture archiving and communication system (PACS) should provide a means to improve upon this inefficient paradigm, with universal access to imagery and exam results on demand at the user's convenience. Enterprise and community-wide experience with universal PACS access is reviewed. Referring physicians were surveyed about their experience with PACS, with regard to acceptance, productivity, frequency of usage, and impact on patient care. Web audit trails were used to assess physician usage. Film printing logs were reviewed. The filmless paradigm was highly regarded and frequently used by nearly all users. Significant productivity benefits were gleaned by all of the referring physicians. Patient quality of care benefitted from more efficient communication of results. Very small quantities of film were used for printing of exams, typically for patient copies.

  12. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.

  13. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.
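
    The multiphase MRT scheme described above is too involved for a short sketch, but the single-relaxation-time building block that such methods extend can be shown compactly. Below is a minimal, illustrative D2Q9 equilibrium distribution in the standard textbook form (not the authors' code); it conserves density and momentum by construction:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights
E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho: float, u: np.ndarray) -> np.ndarray:
    """Standard D2Q9 equilibrium: f_i^eq = w_i*rho*(1 + 3 e.u + 4.5 (e.u)^2 - 1.5 u.u)."""
    eu = E @ u                  # e_i . u for each of the 9 directions
    usq = u @ u
    return W * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

feq = equilibrium(1.0, np.array([0.05, -0.02]))
print(feq.sum())      # sums back to the density rho = 1.0
print(E.T @ feq)      # sums back to the momentum rho*u
```

    Non-ideal equations of state and MRT collision operators, as used in the paper, replace the relaxation toward this equilibrium with more elaborate machinery but keep the same moment structure.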

  14. Creating targeted initial populations for genetic product searches in heterogeneous markets

    NASA Astrophysics Data System (ADS)

    Foster, Garrett; Turner, Callaway; Ferguson, Scott; Donndelinger, Joseph

    2014-12-01

    Genetic searches often use randomly generated initial populations to maximize diversity and enable a thorough sampling of the design space. While many of these initial configurations perform poorly, the trade-off between population diversity and solution quality is typically acceptable for small-scale problems. Navigating complex design spaces, however, often requires computationally intelligent approaches that improve solution quality. This article draws on research advances in market-based product design and heuristic optimization to strategically construct 'targeted' initial populations. Targeted initial designs are created using respondent-level part-worths estimated from discrete choice models. These designs are then integrated into a traditional genetic search. Two case study problems of differing complexity are presented to illustrate the benefits of this approach. In both problems, targeted populations lead to computational savings and product configurations with improved market share of preferences. Future research efforts to tailor this approach and extend it towards multiple objectives are also discussed.
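
    The seeding idea can be sketched in a few lines. Everything below is an invented toy setup (attribute counts, part-worth values, and function names are illustrative, not from the article): each respondent's discrete-choice part-worths pick out a utility-maximizing level per attribute, and those "targeted" designs join random ones in the initial population.

```python
import random

# Hypothetical setup: 3 product attributes with 4, 3, and 5 discrete levels.
# part_worths[r][a][l] = respondent r's utility for level l of attribute a.
LEVELS = [4, 3, 5]
random.seed(1)
part_worths = [[[random.gauss(0, 1) for _ in range(n)] for n in LEVELS]
               for _ in range(10)]  # 10 survey respondents

def random_design():
    """Conventional GA seeding: uniform-random attribute levels."""
    return [random.randrange(n) for n in LEVELS]

def targeted_design(r):
    """'Targeted' seeding: respondent r's utility-maximizing level per attribute."""
    return [max(range(n), key=lambda l: part_worths[r][a][l])
            for a, n in enumerate(LEVELS)]

def utility(r, design):
    return sum(part_worths[r][a][l] for a, l in enumerate(design))

# Mix targeted and random members to keep some diversity in the initial population
population = [targeted_design(r) for r in range(10)] + [random_design() for _ in range(10)]
```

    A targeted design is, by construction, at least as good for its respondent as any random design, which is the source of the computational savings the article reports.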

  15. It's a Trap! A Review of MOMA and Other Ion Traps in Space or Under Development

    NASA Technical Reports Server (NTRS)

    Arevalo, R., Jr.; Brinckerhoff, W. B.; Mahaffy, P. R.; van Amerom, F. H. W.; Danell, R. M.; Pinnick, V. T.; Li, X.; Hovmand, L.; Getty, S. A.; Goesmann, F.; hide

    2014-01-01

    Since the Viking Program, quadrupole mass spectrometer (QMS) instruments have been used to explore a wide survey of planetary targets in our solar system, including (from the inner to outer reaches): Venus (Pioneer); our moon (LADEE); Mars (Viking, Phoenix, and Mars Science Laboratory); and Saturn's largest moon Titan (Cassini-Huygens). More recently, however, ion trap mass spectrometer (ITMS) instruments have found a niche as smaller, versatile alternatives to traditional quadrupole mass analyzers, capable of in situ characterization of planetary environments and the search for organic matter. For example, whereas typical QMS systems are limited to a mass range up to 500 Da and normally require multiple RF frequencies and pressures of less than 10^-6 mbar for optimal operation, ITMS instruments commonly reach upwards of 1000 Da or more on a single RF frequency, and function in higher pressure environments up to 10^-3 mbar.

  16. Variable Melt Production Rate of the Kerguelen HotSpot Due To Long-Term Plume-Ridge Interaction

    NASA Astrophysics Data System (ADS)

    Bredow, Eva; Steinberger, Bernhard

    2018-01-01

    For at least 120 Myr, the Kerguelen plume has distributed enormous amounts of magmatic rocks over various igneous provinces between India, Australia, and Antarctica. Previous attempts to reconstruct the complex history of this plume have revealed several characteristics that are inconsistent with properties typically associated with plumes. To explore the geodynamic behavior of the Kerguelen hotspot, and in particular address these inconsistencies, we set up a regional viscous flow model with the mantle convection code ASPECT. Our model features complex time-dependent boundary conditions in order to explicitly simulate the surrounding conditions of the Kerguelen plume. We show that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby spreading ridges and that a dismembered plume, multiple plumes, or solitary waves in the plume conduit are not required to explain the fluctuating magma output and other unusual characteristics attributed to the Kerguelen hotspot.

  17. Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation

    PubMed Central

    Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.

    2014-01-01

    Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075
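
    The quantity that topology-aware mappings of this kind typically minimize is the hop count between communicating nodes on the torus. A minimal sketch of that distance (illustrative only, not NAMD's mapping code): each axis wraps around, so the shorter way around is taken per dimension.

```python
def torus_hops(a, b, dims):
    """Minimal hop count between nodes a and b on a torus with the given
    per-dimension sizes: each axis can wrap, so take the shorter way around."""
    return sum(min(abs(x - y), d - abs(x - y))
               for x, y, d in zip(a, b, dims))

# On a 16 x 8 x 24 torus (Cray Gemini is 3-D), wrapping beats the direct path:
print(torus_hops((0, 0, 0), (15, 1, 23), (16, 8, 24)))  # 1 + 1 + 1 = 3 hops
```

    Holes from I/O nodes or failed hardware complicate this picture, which is why the paper partitions irregular allocations into compact domains rather than assuming a full torus.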

  18. A review of video security training and assessment-systems and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups.

  19. Sample diversity and premise typicality in inductive reasoning: evidence for developmental change.

    PubMed

    Rhodes, Marjorie; Brickman, Daniel; Gelman, Susan A

    2008-08-01

    Evaluating whether a limited sample of evidence provides a good basis for induction is a critical cognitive task. We hypothesized that whereas adults evaluate the inductive strength of samples containing multiple pieces of evidence by attending to the relations among the exemplars (e.g., sample diversity), six-year-olds would attend to the degree to which each individual exemplar in a sample independently appears informative (e.g., premise typicality). To test these hypotheses, participants were asked to select between diverse and non-diverse samples to help them learn about basic-level animal categories. Across various between-subject conditions (N=133), we varied the typicality present in the diverse and non-diverse samples. We found that adults reliably selected to examine diverse over non-diverse samples, regardless of exemplar typicality, six-year-olds preferred to examine samples containing typical exemplars, regardless of sample diversity, and nine-year-olds were somewhat in the midst of this developmental transition.

  20. Modeling Multiple-Core Updraft Plume Rise for an Aerial Ignition Prescribed Burn by Coupling Daysmoke with a Cellular Automata Fire Model

    Treesearch

    G. L Achtemeier; S. L. Goodrick; Y. Liu

    2012-01-01

    Smoke plume rise is critically dependent on plume updraft structure. Smoke plumes from landscape burns (forest and agricultural burns) are typically structured into “sub-plumes” or multiple-core updrafts with the number of updraft cores depending on characteristics of the landscape, fire, fuels, and weather. The number of updraft cores determines the efficiency of...

  1. Safe and Secure Services Based on NGN

    NASA Astrophysics Data System (ADS)

    Fukazawa, Tomoo; Nisase, Takemi; Kawashima, Masahisa; Hariu, Takeo; Oshima, Yoshihito

    Next Generation Network (NGN), which has been undergoing standardization as it has developed, is expected to create new services that converge the fixed and mobile networks. This paper introduces the basic requirements for NGN in terms of security and explains the standardization activities, in particular, the requirements for the security function described in Y.2701 discussed in ITU-T SG-13. In addition to the basic NGN security function, requirements for NGN authentication are also described from three aspects: security, deployability, and service. As examples of authentication implementation, three profiles, namely fixed, nomadic, and mobile, are defined in this paper. That is, the “fixed profile” is typically for fixed-line subscribers, the “nomadic profile” basically utilizes WiFi access points, and the “mobile profile” provides ideal NGN mobility for mobile subscribers. All three of these profiles satisfy the requirements from security aspects. The three profiles are compared from the viewpoint of requirements for deployability and service. After showing that none of the three profiles can fulfill all of the requirements, we propose that multiple profiles should be used by NGN providers. As service and application examples, two promising NGN applications are proposed. The first is a strong authentication mechanism that makes Web applications safer and more secure even against password theft. It is based on the NGN ID federation function. The second provides an easy peer-to-peer broadband virtual private network service aimed at safe and secure communication for personal/SOHO (small office, home office) users, based on NGN SIP (session initiation protocol) session control.

  2. Projecting cumulative benefits of multiple river restoration projects: an example from the Sacramento-San Joaquin River system in California

    USGS Publications Warehouse

    Kondolf, G. Mathias; Angermeier, Paul L.; Cummins, Kenneth; Dunne, Thomas; Healey, Michael; Kimmerer, Wim; Moyle, Peter B.; Murphy, Dennis; Patten, Duncan; Railsback, Steve F.; Reed, Denise J.; Spies, Robert B.; Twiss, Robert

    2008-01-01

    Despite increasingly large investments, the potential ecological effects of river restoration programs are still small compared to the degree of human alterations to physical and ecological function. Thus, it is rarely possible to “restore” pre-disturbance conditions; rather, restoration programs (even large, well-funded ones) will nearly always involve multiple small projects, each of which can make some modest change to selected ecosystem processes and habitats. At present, such projects are typically selected based on their attributes as individual projects (e.g., consistency with programmatic goals of the funders, scientific soundness, and acceptance by local communities), and ease of implementation. Projects are rarely prioritized (at least explicitly) based on how they will cumulatively affect ecosystem function over coming decades. Such projections require an understanding of the form of the restoration response curve, or at least that we assume some plausible relations and estimate cumulative effects based thereon. Drawing on our experience with the CALFED Bay-Delta Ecosystem Restoration Program in California, we consider potential cumulative system-wide benefits of a restoration activity extensively implemented in the region: isolating/filling abandoned floodplain gravel pits captured by rivers to reduce predation of outmigrating juvenile salmon by exotic warmwater species inhabiting the pits. We present a simple spreadsheet model to show how different assumptions about gravel pit bathymetry and predator behavior would affect the cumulative benefits of multiple pit-filling and isolation projects, and how these insights could help managers prioritize which pits to fill.
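
    The spreadsheet model itself is not reproduced in the abstract; the sketch below only illustrates the general shape of such a cumulative-benefit calculation, under invented assumptions (per-pit predation mortality acting multiplicatively on juvenile survival, with all numbers hypothetical):

```python
# Invented example: each river pit removes some fraction of out-migrating juveniles.
# Survival through the chain of pits is the product of per-pit survival rates,
# so filling pits raises cumulative survival multiplicatively.
pit_mortality = [0.20, 0.15, 0.10, 0.05]  # hypothetical predation loss per pit

def cumulative_survival(filled):
    """Fraction of juveniles surviving all pits; 'filled' pits cause no loss."""
    s = 1.0
    for i, m in enumerate(pit_mortality):
        if i not in filled:
            s *= 1.0 - m
    return s

baseline = cumulative_survival(set())
# Greedy prioritization: filling the highest-mortality pit first gives the
# largest marginal gain, one way a manager might rank candidate projects.
best_single = max(range(len(pit_mortality)),
                  key=lambda i: cumulative_survival({i}))
```

    Even this toy version shows why project-by-project selection can mislead: the marginal benefit of filling any one pit depends on which others are already filled.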

  3. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Astrophysics Data System (ADS)

    Kozimor, J.; Habermann, T.; Powers, L. A.; Gordon, S.

    2016-12-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways. This situation has led to the development of metadata repositories that can ingest and output metadata in multiple dialects. As an operational example, the NASA Common Metadata Repository (CMR) includes three different metadata dialects (DIF, ECHO, and ISO 19115-2). These systems raise a new question for metadata providers: if I have a choice of metadata dialects, which should I use and how do I make that decision? We have developed a collection of metadata evaluation tools that can be used to evaluate metadata records in many dialects for completeness with respect to recommendations from many organizations and communities. We have applied these tools to over 8000 collection and granule metadata records in four different dialects. This large collection of identical content in multiple dialects enables us to address questions about metadata and dialect evolution and to answer those questions quantitatively. We will describe those tools and results from evaluating the NASA CMR metadata collection.
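
    A completeness evaluation of the kind described can be sketched dialect-agnostically: map each dialect's fields onto shared concepts, then score each record against a community recommendation. All field names and the record below are invented for illustration; they are not the actual DIF or ISO 19115-2 element names or the authors' tooling.

```python
# Invented concept mapping: each dialect spells the same concept differently.
RECOMMENDED = {"title", "abstract", "temporal_extent", "spatial_extent"}
DIALECT_FIELDS = {
    "DIF": {"Entry_Title": "title", "Summary": "abstract",
            "Temporal_Coverage": "temporal_extent", "Spatial_Coverage": "spatial_extent"},
    "ISO": {"citation.title": "title", "abstract": "abstract",
            "extent.temporal": "temporal_extent"},  # no spatial mapping in this toy dialect
}

def completeness(dialect: str, record: dict) -> float:
    """Fraction of recommended concepts present (non-empty) in a record."""
    concepts = {DIALECT_FIELDS[dialect][f] for f, v in record.items()
                if f in DIALECT_FIELDS[dialect] and v}
    return len(concepts & RECOMMENDED) / len(RECOMMENDED)

rec = {"Entry_Title": "Example SST collection", "Summary": "Sea surface temperature…",
       "Temporal_Coverage": "2000-2016"}
print(completeness("DIF", rec))  # 3 of 4 recommended concepts -> 0.75
```

    Run over thousands of records in several dialects, scores like this make the "which dialect should I use?" question answerable quantitatively.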

  4. Study for identification of Beneficial uses of Space (BUS). Volume 3: Appendices

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The quantification of required specimen(s) from space processing experiments, the typical EMI measurements and estimates of a typical RF source, and the integration of commercial payloads into spacelab were considered.

  5. Continuous subcutaneous insulin infusion in diabetes: patient populations, safety, efficacy, and pharmacoeconomics

    PubMed Central

    Battelino, Tadej; Danne, Thomas; Hovorka, Roman; Jarosz‐Chobot, Przemyslawa; Renard, Eric

    2015-01-01

    Summary The level of glycaemic control necessary to achieve optimal short‐term and long‐term outcomes in subjects with type 1 diabetes mellitus (T1DM) typically requires intensified insulin therapy using multiple daily injections or continuous subcutaneous insulin infusion. For continuous subcutaneous insulin infusion, the insulins of choice are the rapid‐acting insulin analogues, insulin aspart, insulin lispro and insulin glulisine. The advantages of continuous subcutaneous insulin infusion over multiple daily injections in adult and paediatric populations with T1DM include superior glycaemic control, lower insulin requirements and better health‐related quality of life/patient satisfaction. An association between continuous subcutaneous insulin infusion and reduced hypoglycaemic risk is more consistent in children/adolescents than in adults. The use of continuous subcutaneous insulin infusion is widely recommended in both adult and paediatric T1DM populations but is limited in pregnant patients and those with type 2 diabetes mellitus. All available rapid‐acting insulin analogues are approved for use in adult, paediatric and pregnant populations. However, minimum patient age varies (insulin lispro: no minimum; insulin aspart: ≥2 years; insulin glulisine: ≥6 years) and experience in pregnancy ranges from extensive (insulin aspart, insulin lispro) to limited (insulin glulisine). Although more expensive than multiple daily injections, continuous subcutaneous insulin infusion is cost‐effective in selected patient groups. This comprehensive review focuses on the European situation and summarises evidence for the efficacy and safety of continuous subcutaneous insulin infusion, particularly when used with rapid‐acting insulin analogues, in adult, paediatric and pregnant populations. 
The review also discusses relevant European guidelines; reviews issues that surround use of this technology; summarises the effects of continuous subcutaneous insulin infusion on patients' health‐related quality of life; reviews relevant pharmacoeconomic data; and discusses recent advances in pump technology, including the development of closed‐loop ‘artificial pancreas’ systems. © 2015 The Authors. Diabetes/Metabolism Research and Reviews Published by John Wiley & Sons Ltd. PMID:25865292

  6. Probing Hypergiant Mass Loss with Adaptive Optics Imaging and Polarimetry in the Infrared: MMT-Pol and LMIRCam Observations of IRC +10420 and VY Canis Majoris

    NASA Astrophysics Data System (ADS)

    Shenoy, Dinesh P.; Jones, Terry J.; Packham, Chris; Lopez-Rodriguez, Enrique

    2015-07-01

    We present 2-5 μm adaptive optics (AO) imaging and polarimetry of the famous hypergiant stars IRC +10420 and VY Canis Majoris. The imaging polarimetry of IRC +10420 with MMT-Pol at 2.2 μm resolves nebular emission with intrinsic polarization of 30%, with a high surface brightness indicating optically thick scattering. The relatively uniform distribution of this polarized emission both radially and azimuthally around the star confirms previous studies that place the scattering dust largely in the plane of the sky. Using constraints on scattered light consistent with the polarimetry at 2.2 μm, extrapolation to wavelengths in the 3-5 μm band predicts a scattered light component significantly below the nebular flux that is observed in our Large Binocular Telescope/LMIRCam 3-5 μm AO imaging. Under the assumption this excess emission is thermal, we find a color temperature of ~500 K is required, well in excess of the emissivity-modified equilibrium temperature for typical astrophysical dust. The nebular features of VY CMa are found to be highly polarized (up to 60%) at 1.3 μm, again with optically thick scattering required to reproduce the observed surface brightness. This star’s peculiar nebular feature dubbed the “Southwest Clump” is clearly detected in the 3.1 μm polarimetry as well, which, unlike IRC +10420, is consistent with scattered light alone. The high intrinsic polarizations of both hypergiants’ nebulae are compatible with optically thick scattering for typical dust around evolved dusty stars, where the depolarizing effect of multiple scatters is mitigated by the grains’ low albedos. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.

  7. R2R--software to speed the depiction of aesthetic consensus RNA secondary structures.

    PubMed

    Weinberg, Zasha; Breaker, Ronald R

    2011-01-04

    With continuing identification of novel structured noncoding RNAs, there is an increasing need to create schematic diagrams showing the consensus features of these molecules. RNA structural diagrams are typically made either with general-purpose drawing programs like Adobe Illustrator, or with automated or interactive programs specific to RNA. Unfortunately, the use of applications like Illustrator is extremely time consuming, while existing RNA-specific programs produce figures that are useful, but usually not of the same aesthetic quality as those produced at great cost in Illustrator. Additionally, most existing RNA-specific applications are designed for drawing single RNA molecules, not consensus diagrams. We created R2R, a computer program that facilitates the generation of aesthetic and readable drawings of RNA consensus diagrams in a fraction of the time required with general-purpose drawing programs. Since the inference of a consensus RNA structure typically requires a multiple-sequence alignment, the R2R user annotates the alignment with commands directing the layout and annotation of the RNA. R2R creates SVG or PDF output that can be imported into Adobe Illustrator, Inkscape or CorelDRAW. R2R can be used to create consensus sequence and secondary structure models for novel RNA structures or to revise models when new representatives for known RNA classes become available. Although R2R does not currently have a graphical user interface, it has proven useful in our efforts to create 100 schematic models of distinct noncoding RNA classes. R2R makes it possible to obtain high-quality drawings of the consensus sequence and structural models of many diverse RNA structures with a more practical amount of effort. R2R software is available at http://breaker.research.yale.edu/R2R and as an Additional file.

  8. Sample-Clock Phase-Control Feedback

    NASA Technical Reports Server (NTRS)

    Quirk, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy

    2012-01-01

    To demodulate a communication signal, a receiver must recover and synchronize to the symbol timing of a received waveform. In a system that utilizes digital sampling, the fidelity of synchronization is limited by the time between the symbol boundary and closest sample time location. To reduce this error, one typically uses a sample clock in excess of the symbol rate in order to provide multiple samples per symbol, thereby lowering the error limit to a fraction of a symbol time. For systems with a large modulation bandwidth, the required sample clock rate is prohibitive due to current technological barriers and processing complexity. With precise control of the phase of the sample clock, one can sample the received signal at times arbitrarily close to the symbol boundary, thus obviating the need, from a synchronization perspective, for multiple samples per symbol. Sample-clock phase-control feedback was developed for use in the demodulation of an optical communication signal, where multi-GHz modulation bandwidths would require prohibitively large sample clock frequencies for rates in excess of the symbol rate. A custom mixed-signal (RF/digital) offset phase-locked loop circuit was developed to control the phase of the 6.4-GHz clock that samples the photon-counting detector output. The offset phase-locked loop is driven by a feedback mechanism that continuously corrects for variation in the symbol time due to motion between the transmitter and receiver as well as oscillator instability. This innovation will allow significant improvements in receiver throughput; for example, the throughput of a pulse-position modulation (PPM) scheme with 16 slots can increase from 188 Mb/s to 1.5 Gb/s.
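
    The quoted throughput figures are consistent with a simple slot-rate calculation, shown here under assumed parameters: 16-slot PPM plus one guard slot on the stated 6.4 GHz clock. The guard-slot count is an assumption chosen to reproduce the quoted numbers, not a detail given in the abstract.

```python
from math import log2

def ppm_throughput(clock_hz, slots, guard_slots, samples_per_slot):
    """Data rate of M-ary PPM: log2(M) bits per symbol, one symbol per
    (slots + guard) slot times, each slot spanning samples_per_slot clock ticks."""
    symbol_rate = clock_hz / (samples_per_slot * (slots + guard_slots))
    return log2(slots) * symbol_rate

# Oversampled receiver (8 samples per slot) vs. phase-controlled single sampling:
print(round(ppm_throughput(6.4e9, 16, 1, 8) / 1e6))     # ≈ 188 Mb/s
print(round(ppm_throughput(6.4e9, 16, 1, 1) / 1e9, 2))  # ≈ 1.51 Gb/s
```

    Dropping from 8 samples per slot to 1, which the phase-control feedback makes possible, accounts directly for the 8x throughput gain.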

  9. Biospecimen Complexity-the Next Challenge for Cancer Research Biobanks?

    PubMed

    Watson, Peter H

    2017-02-15

    Purpose: Biospecimens (e.g., tissues, bloods, fluids) are critical for translational cancer research to generate the necessary knowledge to guide implementation of precision medicine. Rising demand and the need for higher quality biospecimens are already evident. Experimental Design: The recent increase in requirement for biospecimen complexity in terms of linked biospecimen types, multiple preservation formats, and longitudinal data was explored by assessing trends in cancer research publications from 2000 to 2014. Results: A PubMed search shows that there has been an increase in both raw numbers and the relative proportion (adjusted for total numbers of articles in each period) of the subgroups of articles typically associated with the use of biospecimens and both dense treatment and/or outcomes data and multiple biospecimen formats. Conclusions: Increasing biospecimen complexity is a largely unrecognized and new pressure on cancer research biobanks. New approaches to cancer biospecimen resources are needed such as the implementation of more efficient and dynamic consent mechanisms, stronger participant involvement in biobank governance, development of requirements for registration of collections, and models to establish stock targets for biobanks. In particular, the latter two approaches would enable funders to establish a better balance between biospecimen supply and research demand, reduce expenditure on duplicate collections, and encourage increased efficiency of biobanks to respond to the research need for more complex cases. This in turn would also enable biobanks to focus more on quality and standardization that are surely factors in the even more important arena of research reproducibility. Clin Cancer Res; 23(4); 894-8. ©2016 American Association for Cancer Research (AACR).

  10. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  12. Remediation in the Context of the Competencies: A Survey of Pediatrics Residency Program Directors

    PubMed Central

    Riebschleger, Meredith P.; Haftel, Hilary M.

    2013-01-01

    Background The 6 competencies defined by the Accreditation Council for Graduate Medical Education provide the framework of assessment for trainees in the US graduate medical education system, but few studies have investigated their impact on remediation. Methods We obtained data via an anonymous online survey of pediatrics residency program directors. For the purposes of the survey, remediation was defined as “any form of additional training, supervision, or assistance above that required for a typical resident.” Respondents were asked to quantify 3 groups of residents: (1) residents requiring remediation; (2) residents whose training was extended for remediation purposes; and (3) residents whose training was terminated owing to issues related to remediation. For each group, the proportion of residents with deficiencies in each of the 6 competencies was calculated. Results In all 3 groups, deficiencies in medical knowledge and patient care were most common; deficiencies in professionalism and communication were moderately common; and deficiencies in systems-based practice and practice-based learning and improvement were least common. Residents whose training was terminated were more likely to have deficiencies in multiple competencies. Conclusion Although medical knowledge and patient care are reported most frequently, deficiencies in any of the 6 competencies can lead to the need for remediation in pediatrics residents. Residents who are terminated are more likely to have deficits in multiple competencies. It will be critical to develop and refine tools to measure achievement in all 6 competencies as the graduate medical education community may be moving further toward individualized training schedules and competency-based, rather than time-based, training. PMID:24404228

  13. One-step production of multiple emulsions: microfluidic, polymer-stabilized and particle-stabilized approaches.

    PubMed

    Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J

    2016-01-28

    Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches; these rely on phase separation, the role of electrolyte and the trapping of solvent with particles respectively.

  14. Malignant transformation of solitary spinal osteochondroma in two mature dogs.

    PubMed

    Green, E M; Adams, W M; Steinberg, H

    1999-01-01

Canine osteochondroma is an uncommon bony tumor that arises in skeletally immature animals. Consequently, clinical signs typically occur in young dogs as a result of impingement of normal structures by the tumor. Radiographically, osteochondromas are benign in appearance. They are well circumscribed and cause neither bony lysis nor periosteal proliferation. Osteochondromas may occur in two forms: solitary or multiple. Although the histology and biologic behavior of the two forms are identical, the multiple form has been termed multiple cartilaginous exostoses. Malignant transformation of multiple cartilaginous exostoses has been reported in three mature dogs. We report two dogs with malignant transformation of solitary spinal osteochondromas. Both underwent transformation to osteosarcoma. Despite the benign radiographic appearance of osteochondromas and multiple cartilaginous exostoses, clinical signs should alert the clinician to the possibility of malignant transformation.

  15. Application of Watershed Ecological Risk Assessment Methods to Watershed Management

    EPA Science Inventory

    Watersheds are frequently used to study and manage environmental resources because hydrologic boundaries define the flow of contaminants and other stressors. Ecological assessments of watersheds are complex because watersheds typically overlap multiple jurisdictional boundaries,...

  16. Sudan Black B masks Mycobacterium avium subspecies paratuberculosis immunofluorescent antibody labeling

    USDA-ARS?s Scientific Manuscript database

    Autofluorescence and non-specific immunofluorescent labeling are common challenges associated with immunofluorescence experiments. Autofluorescence typically demonstrates a broad emission spectrum, increasing the potential for overlap with experiments that utilize multiple fluorophores. During immun...

  17. Distraction and drowsiness in motorcoach drivers : research brief.

    DOT National Transportation Integrated Search

    2016-11-01

Motorcoach crashes, when they occur, can involve multiple injuries and deaths, beyond what is typically experienced in light vehicle crashes. Driver error is often cited as a factor in these crashes, with distraction and drowsiness being primary co...

  19. Bioregional monitoring design and occupancy estimation for two Sierra Nevadan amphibian taxa

    EPA Science Inventory

    Land-management agencies need quantitative, statistically rigorous monitoring data, often at large spatial and temporal scales, to support resource-management decisions. Monitoring designs typically must accommodate multiple ecological, logistical, political, and economic objec...

  20. Single transmission line interrogated multiple channel data acquisition system

    DOEpatents

    Fasching, George E.; Keech, Jr., Thomas W.

    1980-01-01

    A single transmission line interrogated multiple channel data acquisition system is provided in which a plurality of remote station/sensor circuits each monitors a specific process variable and each transmits measurement values over a single transmission line to a master interrogating station when addressed by said master interrogating station. Typically, as many as 330 remote stations may be parallel connected to the transmission line which may exceed 7,000 feet. The interrogation rate is typically 330 stations/second. The master interrogating station samples each station according to a shared, charging transmit-receive cycle. All remote station address signals, all data signals from the remote stations/sensors and all power for all of the remote station/sensors are transmitted via a single continuous terminated coaxial cable. A means is provided for periodically and remotely calibrating all remote sensors for zero and span. A provision is available to remotely disconnect any selected sensor station from the main transmission line.
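The shared-line polling scheme described above can be sketched as follows. The station addresses, sensor callbacks, and the in-memory "bus" are hypothetical stand-ins for the patented coaxial-line hardware, not the actual implementation:

```python
# Sketch of a master station interrogating addressable remote stations that
# all share one transmission medium; each station replies only to its address.

class RemoteStation:
    def __init__(self, address, read_sensor):
        self.address = address
        self.read_sensor = read_sensor  # callable returning the process variable

    def on_bus(self, address):
        """Reply with a measurement only when interrogated by address."""
        if address == self.address:
            return (self.address, self.read_sensor())
        return None

class MasterStation:
    def __init__(self, stations):
        self.stations = stations  # all stations hang off one shared "line"

    def interrogate_all(self):
        """One polling cycle: address every station in turn, collect replies."""
        readings = {}
        for addr in sorted(s.address for s in self.stations):
            for station in self.stations:
                reply = station.on_bus(addr)
                if reply is not None:
                    readings[reply[0]] = reply[1]
        return readings

stations = [RemoteStation(a, lambda a=a: 20.0 + a) for a in range(3)]
master = MasterStation(stations)
print(master.interrogate_all())  # {0: 20.0, 1: 21.0, 2: 22.0}
```

In the real system the same idea scales to hundreds of stations per second because addressing, data, and power all travel over the one terminated cable.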

  1. Multiple Imputation For Combined-Survey Estimation With Incomplete Regressors In One But Not Both Surveys

    PubMed Central

    Rendall, Michael S.; Ghosh-Dastidar, Bonnie; Weden, Margaret M.; Baker, Elizabeth H.; Nazarov, Zafar

    2013-01-01

Within-survey multiple imputation (MI) methods are adapted to pooled-survey regression estimation where one survey has more regressors, but typically fewer observations, than the other. This adaptation is achieved through: (1) larger numbers of imputations to compensate for the higher fraction of missing values; (2) model-fit statistics to check the assumption that the two surveys sample from a common universe; and (3) specifying the analysis model completely from variables present in the survey with the larger set of regressors, thereby excluding variables never jointly observed. In contrast to the typical within-survey MI context, cross-survey missingness is monotonic and easily satisfies the Missing At Random (MAR) assumption needed for unbiased MI. Large efficiency gains and substantial reduction in omitted variable bias are demonstrated in an application to sociodemographic differences in the risk of child obesity estimated from two nationally-representative cohort surveys. PMID:24223447
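The first adaptation above, running more imputations to offset a high missing fraction, rests on how MI pools estimates across completed data sets. A minimal sketch of Rubin's combining rules (standard MI methodology, not code from the paper; the estimates and variances are toy values):

```python
# Rubin's rules: pool m per-imputation estimates and variances into one
# point estimate and a total variance that includes between-imputation spread.
import statistics

def rubin_combine(estimates, variances):
    """Combine results from m completed data sets."""
    m = len(estimates)
    q_bar = statistics.fmean(estimates)      # pooled point estimate
    u_bar = statistics.fmean(variances)      # average within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b      # total variance
    return q_bar, total_var

# A larger m shrinks the (1 + 1/m) penalty on the between-imputation variance,
# which is why a high missing fraction calls for more imputations.
est, var = rubin_combine([0.9, 1.1, 1.0, 1.05, 0.95], [0.04] * 5)
print(round(est, 3), round(var, 4))  # 1.0 0.0475
```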

  2. Instantaneous network RTK in Orange County, California

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2003-04-01

The Orange County Real Time GPS Network (OCRTN) is an upgrade of a sub-network of SCIGN sites in southern California to low latency (1-2 sec), high-rate (1 Hz) data streaming, analysis, and dissemination. The project is a collaborative effort of the California Spatial Reference Center (CSRC) and the Orange County Public Resource and Facilities Division, with partners from the geophysical community, local and state government, and the private sector. Currently, ten sites are streaming 1 Hz raw data (Ashtech binary MBEN format) by means of dedicated, point-to-point radio modems to a network hub that translates the asynchronous serial data to TCP/IP and onto a PC workstation residing on a local area network. Software residing on the PC allows multiple clients to access the raw data simultaneously through TCP/IP. One of the clients is a Geodetics RTD server that receives and archives (1) the raw 1 Hz network data, (2) estimates of instantaneous positions and zenith tropospheric delays for quality control and detection of ground motion, and (3) RINEX data decimated to 30 seconds. Data recovery is typically 99-100%. The server also produces 1 Hz RTCM data (messages 18, 19, 3 and 22) that are available by means of TCP/IP to RTK clients with wireless Internet modems. Coverage is excellent throughout the county. The server supports standard RTK users and is compatible with existing GPS instrumentation. Typical latency is 1-2 s, with initialization times of several seconds to minutes. OCRTN site spacing is 10-15 km. In addition, the server supports "smart clients" who can retrieve data from the closest n sites (typically 3) and obtain an instantaneous network RTK position with 1-2 s latency. This mode currently requires a PDA running the RTD client software, and a wireless card.
Since no initialization and re-initialization are required, this approach is well suited to support high-precision (centimeter-level) dynamic applications such as intelligent transportation and aircraft landing. We will discuss the results of field tests of this system, indicating that instantaneous network RTK can be performed accurately and reliably. If an Internet connection is available we will present a real-time demonstration.
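The "smart client" step of retrieving data from the closest n sites reduces to a nearest-station query. The coordinates, station names, and planar distance approximation below are illustrative assumptions, not OCRTN specifics:

```python
# Pick the n reference stations nearest the rover before requesting their data.
import math

def closest_stations(rover, stations, n=3):
    """Return the n stations nearest the rover position.

    Uses a planar distance approximation, adequate at 10-15 km site spacing.
    Stations are (name, x, y) tuples; rover is an (x, y) tuple, both in km.
    """
    def dist(s):
        return math.hypot(s[1] - rover[0], s[2] - rover[1])
    return sorted(stations, key=dist)[:n]

network = [("SITE_A", 0.0, 0.0), ("SITE_B", 12.0, 0.0),
           ("SITE_C", 0.0, 11.0), ("SITE_D", 25.0, 25.0)]
print([s[0] for s in closest_stations((1.0, 1.0), network)])
# ['SITE_A', 'SITE_C', 'SITE_B']
```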

  3. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. 
Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.
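The closure assumption enters through the standard single-season occupancy likelihood that the modelling procedure above tests. A minimal sketch of one site's contribution, with occupancy probability psi and detection probability p held constant across surveys (parameter values are illustrative):

```python
# Detection-history likelihood under closure: a site detected d times in K
# surveys contributes psi * p^d * (1-p)^(K-d); a site never detected is either
# occupied-but-missed or truly unoccupied, giving psi*(1-p)^K + (1-psi).

def site_likelihood(history, psi, p):
    """Likelihood contribution of one site's 0/1 detection history."""
    K = len(history)
    d = sum(history)
    if d > 0:
        return psi * (p ** d) * ((1 - p) ** (K - d))
    return psi * (1 - p) ** K + (1 - psi)

# Detected on 1 of 3 surveys vs. never detected:
print(round(site_likelihood([0, 1, 0], psi=0.6, p=0.4), 4))  # 0.0864
print(round(site_likelihood([0, 0, 0], psi=0.6, p=0.4), 4))  # 0.5296
```

If occupancy changes between surveys, the all-zero and mixed histories no longer have these probabilities, which is the source of the bias the simulations document.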

  4. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    PubMed

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
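One concrete form of the proposed structure is a between-period intra-cluster correlation that decays geometrically with period separation. The sketch below assumes that form (the values of rho, the decay rate r, and the period count are illustrative); r = 1 recovers the constant between-period structure the abstract criticises:

```python
# Correlation between two distinct cluster members measured in periods t and s:
# rho within a period (t == s), rho * r**|t - s| between periods, so the
# correlation decays with the distance between measurement periods.

def icc_matrix(periods, rho, r):
    """Period-by-period correlation matrix for one cluster."""
    return [[rho * r ** abs(t - s) for s in range(periods)]
            for t in range(periods)]

for row in icc_matrix(4, rho=0.05, r=0.8):
    print([round(x, 4) for x in row])
```

Feeding such a matrix into the variance of the treatment-effect estimator is what drives the sample-size differences the paper reports across stepped wedge, crossover, and parallel-arm designs.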

  5. Molecular image-directed biopsies: improving clinical biopsy selection in patients with multiple tumors

    NASA Astrophysics Data System (ADS)

    Harmon, Stephanie A.; Tuite, Michael J.; Jeraj, Robert

    2016-10-01

Site selection for image-guided biopsies in patients with multiple lesions is typically based on clinical feasibility and physician preference. This study outlines the development of a selection algorithm that, in addition to clinical requirements, incorporates quantitative imaging data for automatic identification of candidate lesions for biopsy. The algorithm is designed to rank potential targets by maximizing a lesion-specific score, incorporating various criteria separated into two categories: (1) a physician-feasibility category including physician-preferred lesion location and absolute volume scores, and (2) an imaging-based category including various modality- and application-specific metrics. This platform was benchmarked in two clinical scenarios, a pre-treatment setting and a response-based setting, using imaging from metastatic prostate cancer patients with high disease burden (multiple lesions) undergoing conventional treatment and receiving whole-body [18F]NaF PET/CT scans pre- and mid-treatment. Targeting of metastatic lesions was robust to different weighting ratios, and candidacy for biopsy was physician-confirmed. Lesions ranked as top targets for biopsy remained so for all patients in pre-treatment and post-treatment biopsy selection after sensitivity testing was completed for physician-biased or imaging-biased scenarios. After identifying candidates, biopsy feasibility was evaluated by a physician and confirmed for 90% (32/36) of high-ranking lesions, of which all top choices were confirmed. The remaining cases represented lesions with high anatomical difficulty for targeting, such as proximity to the sciatic nerve. This newly developed selection method was successfully used to quantitatively identify candidate lesions for biopsies in patients with multiple lesions. In a prospective study, we were able to successfully plan, develop, and implement this technique for the selection of a pre-treatment biopsy location.
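A lesion-ranking score of the kind described, a weighted combination of a physician-feasibility category and an imaging-based category, can be sketched as follows. The criterion names, score values, and equal weights are invented for illustration and are not the paper's actual metrics:

```python
# Rank candidate lesions by a weighted two-category score and sort descending.

def lesion_score(lesion, w_physician=0.5, w_imaging=0.5):
    """Weighted sum of physician-feasibility and imaging-based sub-scores."""
    physician = (lesion["location_score"] + lesion["volume_score"]) / 2
    imaging = lesion["suv_score"]  # e.g. a normalized uptake metric (assumed)
    return w_physician * physician + w_imaging * imaging

lesions = [
    {"id": "L1", "location_score": 0.9, "volume_score": 0.8, "suv_score": 0.7},
    {"id": "L2", "location_score": 0.4, "volume_score": 0.9, "suv_score": 0.95},
    {"id": "L3", "location_score": 0.2, "volume_score": 0.3, "suv_score": 0.4},
]
ranked = sorted(lesions, key=lesion_score, reverse=True)
print([l["id"] for l in ranked])  # ['L2', 'L1', 'L3']
```

Varying `w_physician` versus `w_imaging` corresponds to the physician-biased and imaging-biased sensitivity scenarios mentioned above.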

  6. Genetic risk and a primary role for cell-mediated immune mechanisms in multiple sclerosis

    PubMed Central

    Sawcer, Stephen; Hellenthal, Garrett; Pirinen, Matti; Spencer, Chris C.A.; Patsopoulos, Nikolaos A.; Moutsianas, Loukas; Dilthey, Alexander; Su, Zhan; Freeman, Colin; Hunt, Sarah E.; Edkins, Sarah; Gray, Emma; Booth, David R.; Potter, Simon C.; Goris, An; Band, Gavin; Oturai, Annette Bang; Strange, Amy; Saarela, Janna; Bellenguez, Céline; Fontaine, Bertrand; Gillman, Matthew; Hemmer, Bernhard; Gwilliam, Rhian; Zipp, Frauke; Jayakumar, Alagurevathi; Martin, Roland; Leslie, Stephen; Hawkins, Stanley; Giannoulatou, Eleni; D’alfonso, Sandra; Blackburn, Hannah; Boneschi, Filippo Martinelli; Liddle, Jennifer; Harbo, Hanne F.; Perez, Marc L.; Spurkland, Anne; Waller, Matthew J; Mycko, Marcin P.; Ricketts, Michelle; Comabella, Manuel; Hammond, Naomi; Kockum, Ingrid; McCann, Owen T.; Ban, Maria; Whittaker, Pamela; Kemppinen, Anu; Weston, Paul; Hawkins, Clive; Widaa, Sara; Zajicek, John; Dronov, Serge; Robertson, Neil; Bumpstead, Suzannah J.; Barcellos, Lisa F.; Ravindrarajah, Rathi; Abraham, Roby; Alfredsson, Lars; Ardlie, Kristin; Aubin, Cristin; Baker, Amie; Baker, Katharine; Baranzini, Sergio E.; Bergamaschi, Laura; Bergamaschi, Roberto; Bernstein, Allan; Berthele, Achim; Boggild, Mike; Bradfield, Jonathan P.; Brassat, David; Broadley, Simon A.; Buck, Dorothea; Butzkueven, Helmut; Capra, Ruggero; Carroll, William M.; Cavalla, Paola; Celius, Elisabeth G.; Cepok, Sabine; Chiavacci, Rosetta; Clerget-Darpoux, Françoise; Clysters, Katleen; Comi, Giancarlo; Cossburn, Mark; Cournu-Rebeix, Isabelle; Cox, Mathew B.; Cozen, Wendy; Cree, Bruce A.C.; Cross, Anne H.; Cusi, Daniele; Daly, Mark J.; Davis, Emma; de Bakker, Paul I.W.; Debouverie, Marc; D’hooghe, Marie Beatrice; Dixon, Katherine; Dobosi, Rita; Dubois, Bénédicte; Ellinghaus, David; Elovaara, Irina; Esposito, Federica; Fontenille, Claire; Foote, Simon; Franke, Andre; Galimberti, Daniela; Ghezzi, Angelo; Glessner, Joseph; Gomez, Refujia; Gout, Olivier; Graham, Colin; Grant, Struan F.A.; Guerini, Franca Rosa; Hakonarson, 
Hakon; Hall, Per; Hamsten, Anders; Hartung, Hans-Peter; Heard, Rob N.; Heath, Simon; Hobart, Jeremy; Hoshi, Muna; Infante-Duarte, Carmen; Ingram, Gillian; Ingram, Wendy; Islam, Talat; Jagodic, Maja; Kabesch, Michael; Kermode, Allan G.; Kilpatrick, Trevor J.; Kim, Cecilia; Klopp, Norman; Koivisto, Keijo; Larsson, Malin; Lathrop, Mark; Lechner-Scott, Jeannette S.; Leone, Maurizio A.; Leppä, Virpi; Liljedahl, Ulrika; Bomfim, Izaura Lima; Lincoln, Robin R.; Link, Jenny; Liu, Jianjun; Lorentzen, Åslaug R.; Lupoli, Sara; Macciardi, Fabio; Mack, Thomas; Marriott, Mark; Martinelli, Vittorio; Mason, Deborah; McCauley, Jacob L.; Mentch, Frank; Mero, Inger-Lise; Mihalova, Tania; Montalban, Xavier; Mottershead, John; Myhr, Kjell-Morten; Naldi, Paola; Ollier, William; Page, Alison; Palotie, Aarno; Pelletier, Jean; Piccio, Laura; Pickersgill, Trevor; Piehl, Fredrik; Pobywajlo, Susan; Quach, Hong L.; Ramsay, Patricia P.; Reunanen, Mauri; Reynolds, Richard; Rioux, John D.; Rodegher, Mariaemma; Roesner, Sabine; Rubio, Justin P.; Rückert, Ina-Maria; Salvetti, Marco; Salvi, Erika; Santaniello, Adam; Schaefer, Catherine A.; Schreiber, Stefan; Schulze, Christian; Scott, Rodney J.; Sellebjerg, Finn; Selmaj, Krzysztof W.; Sexton, David; Shen, Ling; Simms-Acuna, Brigid; Skidmore, Sheila; Sleiman, Patrick M.A.; Smestad, Cathrine; Sørensen, Per Soelberg; Søndergaard, Helle Bach; Stankovich, Jim; Strange, Richard C.; Sulonen, Anna-Maija; Sundqvist, Emilie; Syvänen, Ann-Christine; Taddeo, Francesca; Taylor, Bruce; Blackwell, Jenefer M.; Tienari, Pentti; Bramon, Elvira; Tourbah, Ayman; Brown, Matthew A.; Tronczynska, Ewa; Casas, Juan P.; Tubridy, Niall; Corvin, Aiden; Vickery, Jane; Jankowski, Janusz; Villoslada, Pablo; Markus, Hugh S.; Wang, Kai; Mathew, Christopher G.; Wason, James; Palmer, Colin N.A.; Wichmann, H-Erich; Plomin, Robert; Willoughby, Ernest; Rautanen, Anna; Winkelmann, Juliane; Wittig, Michael; Trembath, Richard C.; Yaouanq, Jacqueline; Viswanathan, Ananth C.; Zhang, Haitao; 
Wood, Nicholas W.; Zuvich, Rebecca; Deloukas, Panos; Langford, Cordelia; Duncanson, Audrey; Oksenberg, Jorge R.; Pericak-Vance, Margaret A.; Haines, Jonathan L.; Olsson, Tomas; Hillert, Jan; Ivinson, Adrian J.; De Jager, Philip L.; Peltonen, Leena; Stewart, Graeme J.; Hafler, David A.; Hauser, Stephen L.; McVean, Gil; Donnelly, Peter; Compston, Alastair

    2011-01-01

Multiple sclerosis (OMIM 126200) is a common disease of the central nervous system in which the interplay between inflammatory and neurodegenerative processes typically results in intermittent neurological disturbance followed by progressive accumulation of disability [1]. Epidemiological studies have shown that genetic factors are primarily responsible for the substantially increased frequency of the disease seen in the relatives of affected individuals [2,3]; and systematic attempts to identify linkage in multiplex families have confirmed that variation within the Major Histocompatibility Complex (MHC) exerts the greatest individual effect on risk [4]. Modestly powered Genome-Wide Association Studies (GWAS) [5-10] have enabled more than 20 additional risk loci to be identified and have shown that multiple variants exerting modest individual effects play a key role in disease susceptibility [11]. Most of the genetic architecture underlying susceptibility to the disease remains to be defined and is anticipated to require the analysis of sample sizes that are beyond the numbers currently available to individual research groups. In a collaborative GWAS involving 9772 cases of European descent collected by 23 research groups working in 15 different countries, we have replicated almost all of the previously suggested associations and identified at least a further 29 novel susceptibility loci. Within the MHC we have refined the identity of the DRB1 risk alleles and confirmed that variation in the HLA-A gene underlies the independent protective effect attributable to the Class I region. Immunologically relevant genes are significantly over-represented amongst those mapping close to the identified loci and particularly implicate T helper cell differentiation in the pathogenesis of multiple sclerosis. PMID:21833088

  7. Optimal space communications techniques. [using digital and phase locked systems for signal processing

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N^2 complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
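A one-bit delta-modulation encoder/decoder pair, the waveform representation whose multiplication complexity the paper analyzes, can be sketched as follows. The step size and test signal are arbitrary choices, and this shows only the DM representation itself, not the linear-complexity multiplication algorithm:

```python
# Delta modulation: each sample becomes a single +-1 bit indicating whether the
# input is above or below a tracked estimate, which then moves one step.

def dm_encode(samples, step=0.1):
    """Encode samples as +-1 bits; the estimate moves one step per bit."""
    bits, estimate = [], 0.0
    for x in samples:
        bit = 1 if x >= estimate else -1
        estimate += bit * step
        bits.append(bit)
    return bits

def dm_decode(bits, step=0.1):
    """Rebuild the staircase approximation from the bit stream."""
    out, estimate = [], 0.0
    for bit in bits:
        estimate += bit * step
        out.append(estimate)
    return out

print(dm_encode([0.0, 0.15, 0.3, 0.25, 0.1]))  # [1, 1, 1, -1, -1]
```

Because each sample is one bit rather than an N-bit word, per-sample operations on this stream scale linearly with resolution, which is the intuition behind the complexity result.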

  8. Developing Scientific Reasoning Through Drawing Cross-Sections

    NASA Astrophysics Data System (ADS)

    Hannula, K. A.

    2012-12-01

    Cross-sections and 3D models of subsurface geology are typically based on incomplete information (whether surface geologic mapping, well logs, or geophysical data). Creating and evaluating those models requires spatial and quantitative thinking skills (including penetrative thinking, understanding of horizontality, mental rotation and animation, and scaling). However, evaluating the reasonableness of a cross-section or 3D structural model also requires consideration of multiple possible geometries and geologic histories. Teaching students to create good models requires application of the scientific methods of the geosciences (such as evaluation of multiple hypotheses and combining evidence from multiple techniques). Teaching these critical thinking skills, especially combined with teaching spatial thinking skills, is challenging. My Structural Geology and Advanced Structural Geology courses have taken two different approaches to developing both the abilities to visualize and to test multiple models. In the final project in Structural Geology (a 3rd year course with a pre-requisite sophomore mapping course), students create a viable cross-section across part of the Wyoming thrust belt by hand, based on a published 1:62,500 geologic map. The cross-section must meet a number of geometric criteria (such as the template constraint), but is not required to balance. Each student tries many potential geometries while trying to find a viable solution. In most cases, the students don't visualize the implications of the geometries that they try, but have to draw them and then erase their work if it does not meet the criteria for validity. The Advanced Structural Geology course used Midland Valley's Move suite to test the cross-sections that they made in Structural Geology, mostly using the flexural slip unfolding algorithm and testing whether the resulting line lengths balanced. 
In both exercises, students seemed more confident in the quality of their cross-sections when the sections were easy to visualize. Students in Structural Geology are proud of their cross-sections once they were inked and colored. Students in Advanced Structural Geology were confident in their digitized cross-sections, even before they had tried to balance them or had tested whether they were kinematically plausible. In both cases, visually attractive models seemed easier to believe. Three-dimensional models seemed even more convincing: if students could visualize the model, they also thought it should work geometrically and kinematically, whether they had tested it or not. Students were more inclined to test their models when they had a clear set of criteria that would indicate success or failure. However, future development of new ideas about the kinematic and/or mechanical development of structures may force the students to also decide which criteria fit their problem the best. Combining both kinds of critical thinking (evaluating techniques and evaluating their results) in the same assignment may be challenging.
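The line-length balance test used in the Advanced Structural Geology exercise amounts to comparing the length of a bed trace before and after flexural-slip restoration. A sketch with invented coordinates and an assumed 2% tolerance:

```python
# After flexural-slip unfolding, a bed's restored length should match its
# deformed length; a mismatch flags a geometrically implausible cross-section.
import math

def polyline_length(points):
    """Total length of a digitized bed trace given as (x, y) vertices."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

deformed = [(0, 0), (1, 0.5), (2, 0.0), (3, 0.5)]   # folded bed trace
restored = [(0, 0), (3.35, 0)]                      # after unfolding

def balances(deformed, restored, tol=0.02):
    """True if the two line lengths agree within the given fractional tolerance."""
    l1, l2 = polyline_length(deformed), polyline_length(restored)
    return abs(l1 - l2) / l1 <= tol

print(balances(deformed, restored))  # True
```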

  9. Developmental changes in attention to faces and bodies in static and dynamic scenes.

    PubMed

    Stoesz, Brenda M; Jakobson, Lorna S

    2014-01-01

Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process, and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.

  10. [GP medication prioritisation in older patients with multiple comorbidities recently discharged from hospital: a case-based bottom-up approach].

    PubMed

    Herrmann, M L H; von Waldegg, G H; Kip, M; Lehmann, B; Andrusch, S; Straub, H; Robra, B-P

    2015-01-01

    After the hospital discharge of older patients with multiple morbidities, GPs are often faced with the task of prioritising the patients' drug regimens so as to reduce the risk of overmedication. How do GPs prioritise such medications in multimorbid elderly patients at the transition between inpatient and home care? The experience of the GPs is documented in typical case vignettes. 44 GPs in Sachsen-Anhalt were recruited; they took part in focus group discussions and were interviewed using semi-standardised questionnaires. Typical case vignettes were developed that are relevant to the everyday optimisation of drug regimens for elderly patients by their GPs. According to the results of the focus groups, the following issues affect GPs' decisions: drug and patient safety, their own competence in the health system, patient health literacy, the evidence base, and communication between secondary and primary care (and their respective influences on each other). When considering individual cases, patient safety, patient wishes, and quality of life were central, as demonstrated by the drug dispositions of one exemplary case vignette. GPs do prioritise drug regimens using rational criteria. Initial problem delineation, process documentation, and the design of a transferable product are interlinked steps in the development of case vignettes. Care issues of drug therapy in elderly patients with multiple morbidities should be investigated further with larger representative samples in order to clarify whether the criteria used here are applied contextually or consistently. Embedding case vignettes into further education concepts is also likely to be useful. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Orbiter Kapton wire operational requirements and experience

    NASA Technical Reports Server (NTRS)

    Peterson, R. V.

    1994-01-01

    The agenda of this presentation includes the Orbiter wire selection requirements, the Orbiter wire usage, fabrication and test requirements, typical wiring installations, Kapton wire experience, NASA Kapton wire testing, summary, and backup data.

  12. Precise Orbit Determination for LEO Spacecraft Using GNSS Tracking Data from Multiple Antennas

    NASA Technical Reports Server (NTRS)

    Kuang, Da; Bertiger, William; Desai, Shailen; Haines, Bruce

    2010-01-01

    To support various applications, certain Earth-orbiting spacecraft (e.g., SRTM, COSMIC) use multiple GNSS antennas to provide tracking data for precise orbit determination (POD). POD using GNSS tracking data from multiple antennas poses some special technical issues compared to the typical single-antenna approach. In this paper, we investigate some of these issues using both real and simulated data. Recommendations are provided for POD with multiple GNSS antennas and for antenna configuration design. The observability of satellite position with multiple-antenna data is compared against the single-antenna case. The impact of differential clock offsets (line biases) and of antenna orientation (up, along-track, and cross-track) on kinematic and reduced-dynamic POD is evaluated. The accuracy of monitoring the stability of the spacecraft structure by simultaneously performing POD of the spacecraft and relative positioning of the multiple antennas is also investigated.

  13. Structure and function of polyglycine hydrolases

    USDA-ARS?s Scientific Manuscript database

    Polyglycine hydrolases (PGH)s are secreted fungal endoproteases that cleave polyglycine linkers of targeted plant defense chitinases. Unlike typical endoproteases that cleave a specific peptide bond, these 640 amino acid glycoproteins selectively cleave one of multiple peptide bonds within polyglyci...

  14. Introduction to acoustic emission

    NASA Technical Reports Server (NTRS)

    Possa, G.

    1983-01-01

    Typical acoustic emission signal characteristics are described and techniques which localize the signal source by processing the acoustic delay data from multiple sensors are discussed. The instrumentation, which includes sensors, amplifiers, pulse counters, a minicomputer and output devices is examined. Applications are reviewed.
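
    Source localization from acoustic delay data gathered at multiple sensors, as described above, is commonly performed by multilateration on time differences of arrival (TDOA). The following sketch is illustrative only and is not taken from the paper; the sensor layout, assumed wave speed, and grid-search approach are all hypothetical choices for demonstration.

```python
import numpy as np

def locate_source(sensors, delays, c=5000.0, grid=200, extent=1.0):
    """Grid-search TDOA localization (illustrative sketch).

    sensors: (N, 2) sensor positions in meters.
    delays:  arrival times relative to sensor 0 (delays[0] == 0).
    c:       assumed acoustic wave speed in m/s.
    Returns the grid point whose predicted delay differences best
    match the observed ones, in a least-squares sense.
    """
    xs = np.linspace(-extent, extent, grid)
    ys = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(xs, ys)
    # distance from every candidate grid point to each sensor
    d = np.stack([np.hypot(X - sx, Y - sy) for sx, sy in sensors])
    # predicted delay of each sensor relative to sensor 0
    pred = (d - d[0]) / c
    # sum of squared mismatches with the measured delays
    err = sum((pred[i] - delays[i]) ** 2 for i in range(1, len(sensors)))
    iy, ix = np.unravel_index(np.argmin(err), err.shape)
    return X[iy, ix], Y[iy, ix]

# Synthetic check: a source at (0.3, -0.2) m, four corner sensors.
sensors = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)
src = np.array([0.3, -0.2])
t = np.hypot(*(sensors - src).T) / 5000.0   # true arrival times
est = locate_source(sensors, t - t[0])
print(est)
```

    A real acoustic-emission system would also have to handle timing jitter and uncertain wave speed, typically by replacing the grid search with an iterative least-squares fit.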

  15. Band-edge absorption coefficients from photoluminescence in semiconductor multiple quantum wells

    NASA Technical Reports Server (NTRS)

    Kost, Alan; Zou, Yao; Dapkus, P. D.; Garmire, Elsa; Lee, H. C.

    1989-01-01

    A novel approach to determining absorption coefficients in thin films using luminescence is described. The technique avoids many of the difficulties typically encountered in measurements of thin samples, such as Fabry-Perot effects, and can be applied to a variety of materials. The absorption edge for GaAs/AlGaAs multiple quantum well structures with quantum well widths ranging from 54 to 193 Å is examined. Urbach (1953) parameters and excitonic linewidths are tabulated.

  16. Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models

    NASA Astrophysics Data System (ADS)

    Giona, M.; Brasiello, A.; Crescitelli, S.

    2015-11-01

    One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.
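
    The contrast drawn above between Wiener-driven (parabolic) and Poisson-driven (hyperbolic) transport can be made concrete with a minimal simulation of a Poisson-Kac (telegraph) process, in which the particle velocity flips sign at Poisson-distributed switching times. This sketch is illustrative and not from the paper; the switching rate, speed, and discretization are assumed values. Unlike a Wiener path, the telegraph path has a finite propagation speed, so |x(t)| can never exceed v·t.

```python
import numpy as np

rng = np.random.default_rng(0)

def telegraph_path(T=10.0, dt=1e-3, v=1.0, rate=2.0, rng=rng):
    """Sample path of a Poisson-Kac (telegraph) process:
    dx/dt = v * s(t), where s(t) flips sign at Poisson rate `rate`.
    Euler discretization with time step dt (illustrative only)."""
    n = int(T / dt)
    x = np.zeros(n + 1)
    s = 1.0
    for k in range(n):
        if rng.random() < rate * dt:   # flip with probability ~ rate*dt
            s = -s
        x[k + 1] = x[k] + v * s * dt
    return x

path = telegraph_path()
# Finite propagation speed: |x(t)| <= v*t along the whole path.
assert np.all(np.abs(path) <= 1.0 * np.arange(len(path)) * 1e-3 + 1e-9)
```

    A Wiener-driven Langevin path, by contrast, has increments of order sqrt(dt), so no such bound on |x(t)| holds; it is in nonlinear hyperbolic models of this Poisson-driven type that the multiple invariant densities discussed above can arise.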

  17. Expanding the spectrum of neuronal pathology in multiple system atrophy

    PubMed Central

    Cykowski, Matthew D.; Coon, Elizabeth A.; Powell, Suzanne Z.; Jenkins, Sarah M.; Benarroch, Eduardo E.; Low, Phillip A.; Schmeichel, Ann M.

    2015-01-01

    Multiple system atrophy is a sporadic alpha-synucleinopathy that typically affects patients in their sixth decade of life and beyond. The defining clinical features of the disease include progressive autonomic failure, parkinsonism, and cerebellar ataxia leading to significant disability. Pathologically, multiple system atrophy is characterized by glial cytoplasmic inclusions containing filamentous alpha-synuclein. Neuronal inclusions also have been reported but remain less well defined. This study aimed to further define the spectrum of neuronal pathology in 35 patients with multiple system atrophy (20 male, 15 female; mean age at death 64.7 years; median disease duration 6.5 years, range 2.2 to 15.6 years). The morphologic type, topography, and frequencies of neuronal inclusions, including globular cytoplasmic (Lewy body-like) neuronal inclusions, were determined across a wide spectrum of brain regions. A correlation matrix of pathologic severity also was calculated between distinct anatomic regions of involvement (striatum, substantia nigra, olivary and pontine nuclei, hippocampus, forebrain and thalamus, anterior cingulate and neocortex, and white matter of cerebrum, cerebellum, and corpus callosum). The major finding was the identification of widespread neuronal inclusions in the majority of patients, not only in typical disease-associated regions (striatum, substantia nigra), but also within anterior cingulate cortex, amygdala, entorhinal cortex, basal forebrain and hypothalamus. Neuronal inclusion pathology appeared to follow a hierarchy of region-specific susceptibility, independent of the clinical phenotype, and the severity of pathology was duration-dependent. Neuronal inclusions also were identified in regions not previously implicated in the disease, such as within cerebellar roof nuclei. 
Lewy body-like inclusions in multiple system atrophy followed the stepwise anatomic progression of Lewy body-spectrum disease inclusion pathology in 25.7% of patients with multiple system atrophy, including a patient with visual hallucinations. Further, the presence of Lewy body-like inclusions in neocortex, but not hippocampal alpha-synuclein pathology, was associated with cognitive impairment (P = 0.002). However, several cases showed isolated Lewy body-like inclusions at atypical sites (e.g. thalamus, deep cerebellar nuclei) that are not typical for Lewy body-spectrum disease. Finally, interregional correlations (rho ≥ 0.6) in pathologic glial and neuronal lesion burden suggest shared mechanisms of disease progression between both discrete anatomic regions (e.g. basal forebrain and hippocampus) and cell types (neuronal and glial inclusions in frontal cortex and white matter, respectively). These findings suggest that in addition to glial inclusions, neuronal pathology plays an important role in the development and progression of multiple system atrophy. See Halliday (doi:10.1093/brain/awv151) for a scientific commentary on this article. PMID:25981961

  18. Autonomous long-range open area fire detection and reporting

    NASA Astrophysics Data System (ADS)

    Engelhaupt, Darell E.; Reardon, Patrick J.; Blackwell, Lisa; Warden, Lance; Ramsey, Brian D.

    2005-03-01

    Approximately 5 billion dollars in US revenue was lost in 2003 due to open area fires. In addition, many lives are lost annually. Early detection of open area fires is typically performed by manned observatories, random reporting, and aerial surveillance. Optical IR flame detectors have been developed previously; they have typically suffered from high false-alarm rates and low flame-detection sensitivity due to solar and other interference. Recently, combinations of IR detectors have been used in a two- or three-color mode to reduce false alarms from solar or background sources. A combination of ultraviolet C (UVC) and near-infrared (NIR) detectors has also been developed recently for flame discrimination. Relatively solar-blind basic detectors are now available but typically detect a flame of ~1 square meter of fuel at ranges of only a few tens of meters. We quantify the range and solar issues for IR and visible detectors, and qualitatively define UV sensor requirements in terms of the mode of operation, collection area issues, and the flame signal output from combustion photochemistry. We describe innovative flame-signal collection optics for multiple wavelengths using UV and IR for low-false-alarm detection of open area fires at long range (8-10 km per square meter of flame) in daylight or darkness. A circular array detector and UV-IR reflective and refractive devices, including cylindrical or toroidal lens elements for the IR, are described. The dispersion in a refractive cylindrical IR lens characterizes the fire and allows a stationary line or circle generator to locate the direction and the different flame IR "colors" over a wide FOV. The line generator produces spots along the line corresponding to the fire, which can be discriminated with a linear detector. We demonstrate prototype autonomous sensors with RF digital reporting from various sites.

  19. Should essays and other "open-ended"-type questions retain a place in written summative assessment in clinical medicine?

    PubMed

    Hift, Richard J

    2014-11-28

    Written assessments fall into two classes: constructed-response or open-ended questions, such as the essay and a number of variants of the short-answer question, and selected-response or closed-ended questions, typically in the form of multiple choice. It is widely believed that constructed-response written questions test higher-order cognitive processes in a manner that multiple-choice questions cannot, and consequently have higher validity. An extensive review of the literature suggests that in summative assessment neither premise is evidence-based. Well-structured open-ended and multiple-choice questions appear equivalent in their ability to assess higher cognitive functions, and performance in multiple-choice assessments may correlate more highly than the open-ended format with competence demonstrated in clinical practice following graduation. Studies of construct validity suggest that both formats measure essentially the same dimension, at least in mathematics, the physical sciences, biology, and medicine. The persistence of the open-ended format in summative assessment may be due to the intuitive appeal of the belief that synthesising an answer to an open-ended question must be both more cognitively taxing and more similar to actual experience than selecting a correct response. I suggest that cognitive-constructivist learning theory would predict that a well-constructed, context-rich multiple-choice item represents a complex problem-solving exercise which activates a sequence of cognitive processes closely paralleling those required in clinical practice, hence explaining the high validity of the multiple-choice format. The evidence does not support the proposition that the open-ended assessment format is superior to the multiple-choice format, at least in exit-level summative assessment, in terms of either its ability to test higher-order cognitive functioning or its validity. This is explicable using a theory of mental models, which might predict that the multiple-choice format will have higher validity, a statement for which some empirical support exists. Given the superior reliability and cost-effectiveness of the multiple-choice format, consideration should be given to phasing out open-ended questions in summative assessment. Whether the same applies to non-exit-level and formative assessment remains to be answered, particularly in terms of the educational effect of testing, an area which deserves intensive study.

  20. Improving Socialization for High School Students with ASD by Using their Preferred Interests

    PubMed Central

    Koegel, Robert; Kim, Sunny; Koegel, Lynn; Schwartzman, Ben

    2013-01-01

    There has been a paucity of research on effective social interventions for adolescents with ASD in inclusive high school settings. The literature, however, suggests that incorporating the special interests of students with ASD into activities may help improve their socialization with typical peers. Within the context of a multiple baseline across participants design, we implemented lunchtime activities, incorporating the preferred interests of the adolescents with ASD, that were similar to ongoing activities already available at the schools. Results showed that this increased both the adolescents' level of engagement and their rate of initiations to typical peers. Social validation measures suggest that both the adolescents with ASD and their typical peers enjoyed participating in these activities and that the results generalized to other similar activities. PMID:23361918
