The Case for Individualized Goal Attainment Scaling Measurement in Elder Abuse Interventions.
Burnes, David; Lachs, Mark S
2017-01-01
Research available to inform the development of effective community-based elder abuse protective response interventions is severely limited. Elder abuse intervention research is constrained by a lack of research capacity, including sensitive and responsive outcome measures that can assess change in case status over the course of intervention. Given the heterogeneous nature of elder abuse, standard scales can lack the flexibility necessary to capture the diverse range of individually relevant issues across cases. In this paper, we seek to address this gap by proposing the adaptation and use of an innovative measurement strategy, goal attainment scaling, in the context of elder protection. Goal attainment scaling is an individualized, client-centered outcome measurement approach that has the potential to address existing measurement challenges constraining progress in elder abuse intervention research. © The Author(s) 2015.
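Goal attainment scaling is conventionally summarized with the Kiresuk-Sherman T-score, which aggregates per-goal attainment levels into a single standardized number. The sketch below uses the conventional five-level scoring (-2 to +2), equal default weights, and the customary 0.3 inter-goal correlation; these defaults are textbook conventions, not values drawn from this paper:

```python
import math

def gas_t_score(scores, weights=None, rho=0.3):
    """Kiresuk-Sherman goal attainment T-score.

    scores  -- attainment level per goal, each in {-2, -1, 0, +1, +2}
    weights -- relative importance weights (default: all equal)
    rho     -- assumed inter-goal correlation (0.3 by convention)
    """
    if weights is None:
        weights = [1.0] * len(scores)
    num = 10.0 * sum(w * x for w, x in zip(weights, scores))
    den = math.sqrt((1 - rho) * sum(w * w for w in weights)
                    + rho * sum(weights) ** 2)
    return 50.0 + num / den

# All goals achieved exactly at the expected level (0) yields T = 50.
print(gas_t_score([0, 0, 0]))  # 50.0
```

A score above 50 indicates the client did better than expected across the individualized goals; below 50, worse. This flexibility is what lets heterogeneous cases share one outcome metric.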
Isotopic And Geochemical Investigations Of Meteorites
NASA Technical Reports Server (NTRS)
Walker, Richard J.
2005-01-01
The primary goals of our research over the past four years were to constrain the timing of certain early planetary accretion/differentiation events, and to constrain the proportions and provenance of materials involved in these processes. This work was achieved via the analysis and interpretation of long- and short-lived isotope systems, and the study of certain trace elements. Our research targeted these goals primarily via the application of the Re-187/Os-187, Pt-190/Os-186, Tc-98/Ru-98, and Tc-99/Ru-99 isotopic systems, and the determination/modeling of abundances of the highly siderophile elements (HSE; including Re, Os, Ir, Ru, Pd, Pt, and possibly Tc). The specific events we examined include the segregation and crystallization histories of asteroidal cores, the accretion and metamorphic histories of chondrites and chondrite components, and the accretionary and differentiation histories of Mars and the Moon.
Mission Data System Java Edition Version 7
NASA Technical Reports Server (NTRS)
Reinholtz, William K.; Wagner, David A.
2013-01-01
The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
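The central idea, that a goal explicitly constrains a state variable over an explicit time interval and that a plan of goals can be verified before execution, can be illustrated with a minimal sketch. The classes and the overlap rule here are invented for illustration and are not the Mission Data System API:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    state_var: str
    allowed: set      # states that satisfy the goal
    start: float
    end: float        # goal constrains state_var over [start, end)

def verify_plan(goals):
    """Plan-time check: any two goals that overlap in time on the same
    state variable must admit at least one common satisfying state."""
    for i, a in enumerate(goals):
        for b in goals[i + 1:]:
            overlap = (a.state_var == b.state_var
                       and a.start < b.end and b.start < a.end)
            if overlap and not (a.allowed & b.allowed):
                return False  # conflicting intent: no state satisfies both
    return True

plan = [Goal("camera_power", {"on"}, 0, 10),
        Goal("camera_power", {"on", "standby"}, 5, 15)]
print(verify_plan(plan))  # True
```

Contrast this with command-oriented control: a command implies persistent intent with no end time, so two controllers' intents cannot be checked against each other the way overlapping goal intervals can.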
ERIC Educational Resources Information Center
Dowd, Jason E.; Roy, Christopher P.; Thompson, Robert J., Jr.; Reynolds, Julie A.
2015-01-01
The Department of Chemistry at Duke University has endeavored to expand participation in undergraduate honors thesis research while maintaining the quality of the learning experience. Accomplishing this goal has been constrained by limited departmental resources (including faculty time) and increased diversity in students' preparation to engage in…
Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things
Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen
2017-01-01
As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things. PMID:28696393
Franklin, Marika; Lewis, Sophie; Willis, Karen; Rogers, Anne; Venville, Annie; Smith, Lorraine
2018-06-01
A person-centered approach to goal-setting, involving collaboration between patients and health professionals, is advocated in policy to support self-management. However, this is difficult to achieve in practice, reducing the potential effectiveness of self-management support. Drawing on observations of consultations between patients and health professionals, we examined how goal-setting is shaped in patient-provider interactions. Analysis revealed three distinct interactional styles. In controlled interactions, health professionals determine patients' goals based on biomedical reference points and present these goals as something patients should do. In constrained interactions, patients are invited to present goals, yet health professionals' language and questions orientate goals toward biomedical issues. In flexible interactions, patients and professionals both contribute to goal-setting, as health professionals use less directive language, create openings, and allow patients to decide on their goals. Findings suggest that interactional style of health professionals could be the focus of interventions when aiming to increase the effectiveness of goal-setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied exactly, or soft constraints, which are included but need not be satisfied exactly. Currently, voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as unconstrained. In some treatment planning cases, however, the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and compare it with several commonly used algorithms/toolboxes. Methods: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators applicable to quadratic IMRT constrained optimization were first constructed, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for projecting a point onto a graph was also proposed to further accelerate the computation. Results: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved with the LBFGS, IPOPT, Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, LBFGS performed best and was 3-5 times faster than graph-form ADMM. For constrained optimization, however, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
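The flavor of ADMM on a constrained quadratic problem can be conveyed with a toy box-constrained least-squares example, where the hard bounds stand in for voxel dose limits. This is a generic ADMM sketch under invented data, not the paper's graph-form algorithm or the CORT cases:

```python
import numpy as np

def admm_box_ls(A, b, lo, hi, rho=1.0, iters=200):
    """ADMM sketch for: minimize 0.5*||Ax - b||^2  s.t.  lo <= x <= hi.

    Splits the problem as x = z, alternating a quadratic prox on x
    with a projection prox on z (the hard constraint)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # The factorization used by every x-update is computed once.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))   # quadratic proximal step
        z = np.clip(x + u, lo, hi)      # projection onto the box
        u += x - z                      # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)); b = rng.standard_normal(20)
x = admm_box_ls(A, b, 0.0, 1.0)
print(np.all((x >= 0) & (x <= 1)))  # True
```

Because the z-update is an exact projection, the returned iterate satisfies the hard constraints by construction, which is precisely the property soft-constraint formulations cannot guarantee.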
Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J
2015-12-01
Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.
Next generation terminology infrastructure to support interprofessional care planning.
Collins, Sarah; Klinkenberg-Ramirez, Stephanie; Tsivkin, Kira; Mar, Perry L; Iskhakova, Dina; Nandigam, Hari; Samal, Lipika; Rocha, Roberto A
2017-11-01
Develop a prototype of an interprofessional terminology and information model infrastructure that can enable care planning applications to facilitate patient-centered care, learn care plan linkages and associations, provide decision support, and enable automated, prospective analytics. The study followed a three-step approach: (1) process model and clinical scenario development; (2) requirements analysis; and (3) development and validation of information and terminology models. Components of the terminology model include: Health Concerns, Goals, Decisions, Interventions, Assessments, and Evaluations. A terminology infrastructure should: (A) Include discrete care plan concepts; (B) Include sets of profession-specific concerns, decisions, and interventions; (C) Communicate rationales, anticipatory guidance, and guidelines that inform decisions among the care team; (D) Define semantic linkages across clinical events and professions; (E) Define sets of shared patient goals and sub-goals, including patient-stated goals; (F) Capture evaluation toward achievement of goals. These requirements were mapped to the AHRQ Care Coordination Measures Framework. This study used a constrained set of clinician-validated clinical scenarios. Terminology models for goals and decisions are unavailable in SNOMED CT, limiting the ability to evaluate these aspects of the proposed infrastructure. Defining and linking subsets of care planning concepts appears to be feasible, but also essential to model interprofessional care planning for common co-occurring conditions and chronic diseases. We recommend the creation of goal dynamics and decision concepts in SNOMED CT to further enable the necessary models.
Systems with flexible terminology management infrastructure may enable intelligent decision support to identify conflicting and aligned concerns, goals, decisions, and interventions in shared care plans, ultimately decreasing documentation effort and cognitive burden for clinicians and patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
2014-07-25
composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of...application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of...recall performance than competing methods in a task of discovering articles preferred by the user • a gold-standard speaker diarization method, as
Observing Solar Radio Bursts from the Lunar Surface
NASA Technical Reports Server (NTRS)
MacDowall, R. J.; Gopalswamy, N.; Kaiser, M. L.; Lazio, T. J.; Jones, D. L.; Bale, S. D.; Burns, J.; Kasper, J. C.; Weiler, K. W.
2011-01-01
Locating low frequency radio observatories on the lunar surface has a number of advantages, including fixed locations for the antennas and no terrestrial interference on the far side of the Moon. Here, we describe the Radio Observatory for Lunar Sortie Science (ROLSS), a concept for a low frequency, radio imaging interferometric array designed to study particle acceleration in the corona and inner heliosphere. ROLSS would be deployed during an early lunar sortie or by a robotic rover as part of an unmanned landing. The prime science mission is to image type II and type III solar radio bursts with the aim of determining the sites at which, and mechanisms by which, the radiating particles are accelerated. Secondary science goals include constraining the density of the lunar ionosphere by searching for a low radio frequency cutoff of the solar radio emissions and constraining the low-energy electron population in astrophysical sources. Furthermore, ROLSS serves a pathfinder function for larger lunar radio arrays designed for faint sources.
The CAnadian NIRISS Unbiased Cluster Survey (CANUCS)
NASA Astrophysics Data System (ADS)
Ravindranath, Swara; NIRISS GTO Team
2017-06-01
The CANUCS GTO program is a JWST spectroscopy and imaging survey of five massive galaxy clusters and ten parallel fields using the NIRISS low-resolution grisms, NIRCam imaging, and NIRSpec multi-object spectroscopy. The primary goal is to understand the evolution of low-mass galaxies across cosmic time. The resolved emission-line maps and line ratios for many galaxies, some at a resolution of 100 pc via magnification by gravitational lensing, will enable determining the spatial distribution of star formation, dust, and metals. Other science goals include the detection and characterization of galaxies within the reionization epoch, using multiply-imaged lensed galaxies to constrain cluster mass distributions and dark matter substructure, and understanding star-formation suppression in the most massive galaxy clusters. In this talk I will describe the science goals of the CANUCS program. The proposed prime and parallel observations will be presented with details of the implementation of the observation strategy using JWST proposal planning tools.
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
1999-09-30
of Delaware Newark, DE 19716 phone: (302) 831-6836 fax: (302) 831-6838 email: brucel@udel.edu Award #: N000149910052 http://newark.cms.udel.edu...brucel/hrd.html LONG-TERM GOALS The long term goal of our research is to quantify submesoscale dynamical processes and understand their interactions
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
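The kind of formulation described, hard safety constraints, a target-excitation requirement, and discrete on/off decisions, can be sketched with a toy lead-field matrix. The numbers below are invented for illustration, and SciPy's `milp` is used as a generic MILP solver, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy "lead field": field produced in each region per active electrode.
A = np.array([[1.0, 0.2, 0.1],    # row 0: targeted region
              [0.3, 0.1, 0.9]])   # row 1: non-targeted region
c = np.ones(3)                    # objective: fewest active electrodes

res = milp(c,
           constraints=[LinearConstraint(A[:1], lb=1.0),   # excite target enough
                        LinearConstraint(A[1:], ub=0.5)],  # spare non-target (safety)
           integrality=np.ones(3),                         # binary on/off decisions
           bounds=Bounds(0, 1))
print(res.success)   # True -- electrode 1 alone meets both goals

# Tightening the safety bound makes the goals impossible; the solver
# certifies this conclusively via an "infeasible" status:
res2 = milp(c, constraints=[LinearConstraint(A[:1], lb=1.0),
                            LinearConstraint(A[1:], ub=0.2)],
            integrality=np.ones(3), bounds=Bounds(0, 1))
print(res2.success)  # False
```

The infeasibility certificate is the point emphasized in the abstract: rather than iterating fruitlessly, a researcher learns up front that the chosen stimulation setup cannot meet the stated goals.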
Characterizing biospheric carbon balance using CO2 observations from the OCO-2 satellite
NASA Astrophysics Data System (ADS)
Miller, Scot M.; Michalak, Anna M.; Yadav, Vineet; Tadić, Jovan M.
2018-05-01
NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite launched in summer of 2014. Its observations could allow scientists to constrain CO2 fluxes across regions or continents that were previously difficult to monitor. This study explores an initial step toward that goal; we evaluate the extent to which current OCO-2 observations can detect patterns in biospheric CO2 fluxes and constrain monthly CO2 budgets. Our goal is to guide top-down, inverse modeling studies and identify areas for future improvement. We find that uncertainties and biases in the individual OCO-2 observations are comparable to the atmospheric signal from biospheric fluxes, particularly during Northern Hemisphere winter when biospheric fluxes are small. A series of top-down experiments indicate how these errors affect our ability to constrain monthly biospheric CO2 budgets. We are able to constrain budgets for between two and four global regions using OCO-2 observations, depending on the month, and we can constrain CO2 budgets at the regional level (i.e., smaller than seven global biomes) in only a handful of cases (16 % of all regions and months). The potential of the OCO-2 observations, however, is greater than these results might imply. A set of synthetic data experiments suggests that retrieval errors have a salient effect. Advances in retrieval algorithms and to a lesser extent atmospheric transport modeling will improve the results. In the interim, top-down studies that use current satellite observations are best-equipped to constrain the biospheric carbon balance across only continental or hemispheric regions.
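The effect described, observation errors comparable in size to the biospheric signal degrading the top-down flux constraint, can be illustrated with a toy linear Bayesian inversion. The observation operator, fluxes, and covariances below are synthetic stand-ins, not OCO-2 or transport-model quantities:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_flux = 50, 3
H = rng.standard_normal((n_obs, n_flux))   # toy transport/observation operator
x_true = np.array([2.0, -1.0, 0.5])        # "true" regional fluxes
x_prior = np.zeros(n_flux)
Q = np.eye(n_flux)                         # prior flux covariance

def estimate(obs_err_var):
    """Posterior mean of a linear Bayesian inversion with the given
    observation (retrieval) error variance."""
    R = obs_err_var * np.eye(n_obs)
    y = H @ x_true + rng.standard_normal(n_obs) * np.sqrt(obs_err_var)
    S = H @ Q @ H.T + R
    K = Q @ H.T @ np.linalg.inv(S)         # Kalman-style gain
    return x_prior + K @ (y - H @ x_prior)

# Small retrieval errors recover the fluxes; large ones shrink the
# estimate back toward the prior, losing the regional constraint.
print(np.round(estimate(0.01), 2))
print(np.round(estimate(100.0), 2))
```

When the error variance dominates the signal, the gain collapses and the posterior barely moves off the prior, which is why noisy winter-season observations constrain so few regional budgets.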
Inventory of Amphibians and Reptiles in Southern Colorado Plateau National Parks
Persons, Trevor B.; Nowak, Erika M.
2006-01-01
In fiscal year 2000, the National Park Service (NPS) initiated a nationwide program to inventory vertebrates and vascular plants within the National Parks, and an inventory plan was developed for the 19 park units in the Southern Colorado Plateau Inventory & Monitoring Network. We surveyed 12 parks in this network for reptiles and amphibians between 2001 and 2003. The overall goals of our herpetofaunal inventories were to document 90% of the species present, identify park-specific species of special concern, and, based on the inventory results, make recommendations for the development of an effective monitoring program. We used the following standardized herpetological methods to complete the inventories: time-area constrained searches, visual encounter ('general') surveys, and nighttime road cruising. We also recorded incidental species sightings and surveyed existing literature and museum specimen databases. We found 50 amphibian and reptile species during fieldwork. These included 1 salamander, 11 anurans, 21 lizards, and 17 snakes. Literature reviews, museum specimen data records, and personal communications with NPS staff added an additional eight species, including one salamander, one turtle, one lizard, and five snakes. It was necessary to use a variety of methods to detect all species in each park. Randomly-generated 1-ha time-area constrained searches and night drives produced the fewest species and individuals of all the methods, while general surveys and randomly-generated 10-ha time-area constrained searches produced the most. Inventory completeness was likely compromised by a severe drought across the region during our surveys. In most parks we did not come close to the goal of detecting 90% of the expected species present; however, we did document several species range extensions. Effective monitoring programs for herpetofauna on the Colorado Plateau should use a variety of methods to detect species, and focus on taxa-specific methods.
Randomly-generated plots must take into account microhabitat and aquatic features to be effective at sampling for herpetofauna.
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
2000-09-30
of Delaware Newark, DE 19716 phone: (302) 831-6836 fax: (302) 831-6838 email: brucel@udel.edu Award #: N000149910052 http://newark.cms.udel.edu...brucel/hrd.html LONG-TERM GOALS Our long-term goal is to quantify submesoscale dynamical processes in the ocean so that we can better understand
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
NASA Astrophysics Data System (ADS)
Tanaka, Katsumasa; O'Neill, Brian C.
2018-04-01
The Paris Agreement stipulates that global warming be stabilized at well below 2 °C above pre-industrial levels, with aims to further constrain this warming to 1.5 °C. However, it also calls for reducing net anthropogenic greenhouse gas (GHG) emissions to zero during the second half of this century. Here, we use a reduced-form integrated assessment model to examine the consistency between temperature- and emission-based targets. We find that net zero GHG emissions are not necessarily required to remain below 1.5 °C or 2 °C, assuming either target can be achieved without overshoot. With overshoot, however, the emissions goal is consistent with the temperature targets, and substantial negative emissions are associated with reducing warming after it peaks. Temperature targets are put at risk by late achievement of emissions goals and the use of some GHG emission metrics. Refinement of Paris Agreement emissions goals should include a focus on net zero CO2—not GHG—emissions, achieved early in the second half of the century.
DSLR Double Star Astrometry Using an Alt-Az Telescope
NASA Astrophysics Data System (ADS)
Frey, Thomas; Haworth, David
2014-07-01
The goal of this project was to determine whether a double star's angular separation and position angle could be successfully measured with a motor-driven, alt-azimuth Dobsonian-mounted Newtonian telescope (without a field rotator) and a digital single-lens reflex (DSLR) camera. Additionally, the project was constrained to using existing equipment as much as possible, including an Apple MacBook Pro laptop and a Canon T2i camera. The project was further challenging because the first author had no experience with astrophotography.
Absolute Stability Analysis of a Phase Plane Controlled Spacecraft
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol
2010-01-01
Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.
NASA Astrophysics Data System (ADS)
Gervás, Pablo
2016-04-01
Most poetry-generation systems apply opportunistic approaches where algorithmic procedures are applied to explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetical value is assumed to arise from compliance to a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix a priori the aesthetic parameters of the results, and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics and allowing a looser mechanism for constraining form. This goal arose as a result of the need to produce poems for a themed collection commissioned to be included in a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content, and allowing for an exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods and the impact they have on the set of resulting poems are discussed in terms of their potential contribution to better poetry-generation systems.
Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network
2012-01-01
Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. 
Conclusions: Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
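The flux balance analysis such scripts perform can be sketched with a toy network (not the Yeast 5 model; the three reactions and all numbers below are invented for illustration): maximize a "biomass" flux subject to steady-state stoichiometric constraints and flux bounds, solved as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
# R1: -> A (uptake), R2: A -> B (conversion), R3: B -> (biomass drain)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Steady state requires S v = 0. Maximize the biomass flux v3;
# linprog minimizes, so negate the objective coefficient.
c = np.array([0.0, 0.0, -1.0])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution; biomass flux equals the uptake cap
```

At the optimum every flux runs at the uptake limit, illustrating how a single bound propagates through the stoichiometric constraints to cap growth.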
Fisher, Jane Rw; de Mello, Meena Cabral; Izutsu, Takashi; Tran, Tuan
2011-01-07
Mental health problems in women during pregnancy and after childbirth, and their adverse consequences for child health and development, have received sustained detailed attention in high-income countries. In contrast, evidence has only been generated more recently in resource-constrained settings. In June 2007 the United Nations Population Fund, the World Health Organization, the Key Centre for Women's Health in Society (a WHO Collaborating Centre for Women's Health) and the Research and Training Centre for Community Development in Vietnam convened the first international expert meeting on maternal mental health and child health and development in resource-constrained settings. It aimed to appraise the evidence about the nature, prevalence and risks for common perinatal mental disorders in women; the consequences of these for child health and development; and ameliorative strategies in these contexts. The substantial disparity in rates of perinatal mental disorders between women living in high- and low-income settings suggests social rather than biological determinants. Risks in resource-constrained contexts include: poverty; crowded living situations; limited reproductive autonomy; unintended pregnancy; lack of empathy from the intimate partner; rigid gender stereotypes about responsibility for household work and infant care; family violence; poor physical health; and discrimination. Development is adversely affected if infants lack day-to-day interactions with a caregiver who can interpret their cues and respond effectively. Women with compromised mental health are less able to provide sensitive, responsive infant care. In resource-constrained settings, infants whose mothers are depressed are less likely to thrive and to receive optimal care than those whose mothers are well. The meeting outcome is the Hanoi Expert Statement (Additional file 1).
It argues that the Millennium Development Goals to improve maternal health, reduce child mortality, promote gender equality and empower women, achieve universal primary education, and eradicate extreme poverty and hunger cannot be attained without a specific focus on women's mental health. It was co-signed by the international expert group, relevant WHO and UNFPA departmental representatives, and international authorities. They concur that social rather than medical responses are required. Improvements in maternal mental health require a cross-sectoral response addressing poverty reduction, women's rights, social protection, violence prevention, education and gender, in addition to health.
NASA Astrophysics Data System (ADS)
Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.
2005-06-01
The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of diverse weight-constrained high energy laser system types across a broad range of engagement scenarios. HELEEOS has been designed to meet JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, of constraining weapon-effectiveness claims through accurate engineering performance assessments (allowing its use as an investment strategy tool), and of establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. dwell time required to achieve a prescribed probability of kill (Pk) and effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on performance comparisons between chemical and solid state lasers.
Forced to be free? Increasing patient autonomy by constraining it.
Levy, Neil
2014-05-01
It is universally accepted in bioethics that doctors and other medical professionals have an obligation to procure the informed consent of their patients. Informed consent is required because patients have the moral right to autonomy in furthering the pursuit of their most important goals. In the present work, it is argued that evidence from psychology shows that human beings are subject to a number of biases and limitations as reasoners, which can be expected to lower the quality of their decisions and which therefore make it more difficult for them to pursue their most important goals by giving informed consent. It is further argued that patient autonomy is best promoted by constraining the informed consent procedure. By limiting the degree of freedom patients have to choose, the good that informed consent is supposed to protect can be promoted. PMID:22318413
NASA Astrophysics Data System (ADS)
Kent, T.
2011-12-01
The goal of this study is to constrain the most recent thermal alteration of two drill cores (HSB2/HSB4) from the Island of Akutan in the Aleutian Islands of Alaska. These cores are characterized by identifying mineralogy using x-ray diffraction spectra, energy dispersive spectroscopy with a scanning electron microscope, and optical mineralogy. This is then compared with the coincident thermal data gathered on site in order to help constrain the most recent thermal activity of this dynamic resource. Using multiple temperature-diagnostic minerals and their paragenesis, a relative thermal history of expansive propylitic alteration is produced. When combined with the wireline temperature gradients of the cores, a model of downward migration emerges. Shallow occurrences of high-temperature minerals that lie above the boiling-point-to-depth curve indicate higher hydrostatic pressures in the past, which can be attributed to a combination of glacial effects, including a significant amount of glacial erosion recognized from the lack of a significant clay cap over the geothermal resource.
Nuclear physics from Lattice QCD
NASA Astrophysics Data System (ADS)
Shanahan, Phiala
2017-09-01
I will discuss the current state and future scope of numerical Lattice Quantum Chromodynamics (LQCD) calculations of nuclear matrix elements. The goal of the program is to provide direct QCD calculations of nuclear observables relevant to experimental programs, including double-beta decay matrix elements, nuclear corrections to axial matrix elements relevant to long-baseline neutrino experiments and nuclear sigma terms needed for theory predictions of dark matter cross-sections at underground detectors. I will discuss the progress and challenges on these fronts, and also address recent work constraining a gluonic analogue of the EMC effect, which will be measurable at a future electron-ion collider.
Geo-Statistical Approach to Estimating Asteroid Exploration Parameters
NASA Technical Reports Server (NTRS)
Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles
2011-01-01
NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
Aircraft wake vortex measurements at Denver International Airport
DOT National Transportation Integrated Search
2004-05-10
Airport capacity is constrained, in part, by spacing requirements associated with the wake vortex hazard. NASA's Wake Vortex Avoidance Project has a goal to establish the feasibility of reducing this spacing while maintaining safety. Passive acoustic...
An indirect method for numerical optimization using the Kreisselmeier-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT (sequential unconstrained minimization technique) algorithm. Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
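A minimal numerical sketch of the Kreisselmeier-Steinhauser envelope (the function values below are invented for illustration): KS smoothly bounds the maximum of the appended function set from above, with the gap shrinking as the draw-down factor ρ grows, which is what makes the envelope searchable by unconstrained methods.

```python
import numpy as np

def ks_envelope(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of the function set g:

        KS(g) = max(g) + (1/rho) * ln(sum_i exp(rho * (g_i - max(g)))),

    a smooth, differentiable upper bound on max(g), written in the
    numerically stable log-sum-exp form.
    """
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

# Envelope of two "constraint" values: at or slightly above their max,
# and never more than ln(n)/rho above it.
vals = [1.0, 3.0]
ks = ks_envelope(vals, rho=50.0)
print(ks)
```

The bound max(g) ≤ KS(g) ≤ max(g) + ln(n)/ρ shows how ρ trades smoothness against tightness of the envelope.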
Analogical and category-based inference: a theoretical integration with Bayesian causal models.
Holyoak, Keith J; Lee, Hee Seung; Lu, Hongjing
2010-11-01
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
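The style of computation involved, though not the authors' actual parameter-free model, can be illustrated with a grid-based Bayesian update of a generative causal strength; with a single cause and no background cause, the noisy-OR likelihood reduces to a Bernoulli likelihood (all trial counts here are invented):

```python
import numpy as np

# Grid over the strength w of a generative cause (illustrative sketch only).
w = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(w) / len(w)  # uniform prior over causal strength

# Hypothetical source data: cause present on 10 trials, effect on 8 of them.
n_trials, n_effect = 10, 8
likelihood = w**n_effect * (1.0 - w)**(n_trials - n_effect)

posterior = prior * likelihood
posterior /= posterior.sum()

# Predictive probability that the effect occurs in a new target case
# sharing the same cause: the posterior expectation of w.
p_effect = np.sum(w * posterior)
print(p_effect)
```

The posterior over w is what constrains inferences about the target: predictions inherit the uncertainty in the inferred causal strength rather than a single point estimate.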
Breakthrough Propulsion Physics Project: Project Management Methods
NASA Technical Reports Server (NTRS)
Millis, Marc G.
2004-01-01
To leap past the limitations of existing propulsion, the NASA Breakthrough Propulsion Physics (BPP) Project seeks further advancements in physics from which new propulsion methods can eventually be derived. Three visionary breakthroughs are sought: (1) propulsion that requires no propellant, (2) propulsion that circumvents existing speed limits, and (3) breakthrough methods of energy production to power such devices. Because these propulsion goals are presumably far from fruition, a special emphasis is to identify credible research that will make measurable progress toward these goals in the near-term. The management techniques to address this challenge are presented, with a special emphasis on the process used to review, prioritize, and select research tasks. This selection process includes these key features: (a) research tasks are constrained to only address the immediate unknowns, curious effects or critical issues, (b) reliability of assertions is more important than the implications of the assertions, which includes the practice where the reviewers judge credibility rather than feasibility, and (c) total scores are obtained by multiplying the criteria scores rather than by adding. Lessons learned and revisions planned are discussed.
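The multiplicative scoring in feature (c) can be shown with a toy comparison (the criteria counts and scores are invented): multiplying makes a single weak criterion drag down the total far more than adding would, so a proposal cannot compensate for one fatal flaw with strength elsewhere.

```python
# Two hypothetical proposals scored 1-5 on three review criteria.
proposal_a = [5, 5, 1]  # strong twice, but one critical weakness
proposal_b = [3, 4, 4]  # uniformly solid

def additive(scores):
    return sum(scores)

def multiplicative(scores):
    total = 1
    for s in scores:
        total *= s
    return total

print(additive(proposal_a), additive(proposal_b))              # 11 vs 11: a tie
print(multiplicative(proposal_a), multiplicative(proposal_b))  # 25 vs 48: b wins
```

Under addition the two proposals tie; under multiplication the uniformly solid proposal clearly wins, which matches the stated intent of weighting credibility over ambition.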
Multiple utility constrained multi-objective programs using Bayesian theory
NASA Astrophysics Data System (ADS)
Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed
2018-03-01
A utility function is an important tool for representing a decision maker's (DM's) preference. We adjoin utility functions to multi-objective optimization problems. In current studies, usually one utility function is used for each objective function. Situations may arise, however, in which a single goal has multiple utility functions. Here, we consider a constrained multi-objective problem with each objective having multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.
Giva, Karen R N; Duma, Sinegugu E
2015-08-31
Problem-based learning (PBL) was introduced in Malawi in 2002 in order to improve the nursing education system and respond to the acute shortage of nursing human resources. However, its implementation has been very slow throughout the country. The objectives of the study were to explore and describe the goals that were identified by the college to facilitate the implementation of PBL, the resources of the organisation that facilitated the implementation of PBL, the factors related to sources of students that facilitated the implementation of PBL, and the influence of the external system of the organisation on facilitating the implementation of PBL, and to identify critical success factors that could guide the implementation of PBL in nursing education in Malawi. This is an ethnographic, exploratory and descriptive qualitative case study. Purposive sampling was employed to select the nursing college, participants and documents for review. Three data collection methods, including semi-structured interviews, participant observation and document reviews, were used to collect data. The four steps of thematic analysis were used to analyse data from all three sources. Four themes and related subthemes emerged from the triangulated data sources. The first three themes and their subthemes relate to the characteristics associated with successful implementation of PBL in a human resource-constrained nursing college, whilst the last theme relates to critical success factors that contribute to successful implementation of PBL in a human resource-constrained country like Malawi. This article shows that implementation of PBL is possible in a human resource-constrained country if there is political commitment and support.
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
Public Input on Stream Monitoring in the Willamette Valley, Oregon
The goal of environmental monitoring is to track resource condition, and thereby support environmental knowledge and management. Judgments are inevitable during monitoring design regarding what resource features will be assessed. Constraining what to measure given a complex envir...
Public input on stream monitoring in the Willamette Valley, Oregon - ACES
What Do They Want to Know? Public Input on Stream Monitoring
Frantz, J M; Rhoda, A J
2017-03-01
Interprofessional education is seen as a vehicle to facilitate collaborative practice and, therefore, address the complex health needs of populations. A number of concerns have, however, been raised with the implementation of interprofessional education. The three core concerns raised in the literature and addressed in the article include the lack of an explicit framework, challenges operationalising interprofessional education and practice, and the lack of critical mass in terms of human resources to drive activities related to interprofessional education and practice. This article aims to present lessons learnt when attempting to overcome the main challenges and implementing interprofessional education activities in a resource-constrained higher education setting in South Africa. Boyer's model of scholarship, which incorporates research, teaching, integration, and application, was used to address the challenge of the lack of a framework in which to conceptualise the activities of interprofessional education. In addition, a scaffolding approach to teaching activities within a curriculum was used to operationalise interprofessional education and practice. Faculty development initiatives were additionally used to develop a critical mass focused on driving interprofessional education. Lessons learnt highlighted that if a conceptual model is agreed upon by all, it allows for a more focused approach, and both human and financial resources may be channelled towards a common goal, which may assist resource-constrained institutions in successfully implementing interprofessional activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
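The core idea of sizing an uncertainty set so that the goals remain satisfiable can be sketched in one dimension (the dose model, its 10%/mm sensitivity, and the 95% goal are all invented for illustration, far simpler than the paper's treatment-plan optimization): find the largest setup-error radius r such that a clinical goal holds for every error in [-r, r], here by bisection.

```python
def goal_satisfied(error):
    """Toy clinical goal: delivered dose stays within 5% of prescription,
    under an invented linear sensitivity of 10% dose loss per mm of error."""
    dose = 1.0 - 0.10 * abs(error)  # relative dose at a setup error (mm)
    return dose >= 0.95

def max_robust_radius(lo=0.0, hi=5.0, iters=60):
    """Largest r for which the goal holds at the worst error in [-r, r].
    Because the toy dose loss is monotone in |error|, checking the
    boundary error r suffices; bisect on r."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if goal_satisfied(mid):
            lo = mid
        else:
            hi = mid
    return lo

r = max_robust_radius()
print(r)  # ~0.5 mm: beyond that, the worst-case dose drops below 95%
```

A plan whose goals survive only a small r has a low probability of covering the actual setup error, which is why the paper treats the set size itself as an optimization variable.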
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft-finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self-collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types, is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and for mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
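The reduction of a feasibility problem to iterative root finding can be illustrated, well short of the paper's full augmented-space algorithm, by treating "reach the goal point" for a planar two-link arm as the root-finding problem f(q) = fk(q) - goal = 0 and iterating a damped Newton update (link lengths, gains, and the goal point are invented):

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of an illustrative planar two-link arm

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(goal, q0, steps=100, alpha=0.5):
    """Damped Newton iteration on the residual fk(q) - goal."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        err = fk(q) - goal
        q -= alpha * np.linalg.pinv(jacobian(q)) @ err
    return q

goal = np.array([1.2, 0.8])
q = solve_ik(goal, q0=[0.3, 0.5])
print(fk(q))  # close to the goal point [1.2, 0.8]
```

The paper's approach generalizes this idea: the whole path, not just the endpoint, becomes the unknown, and the residual stacks all holonomic and nonholonomic constraint violations.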
Solid-phase synthesis of diverse peptide tertiary amides by reductive amination.
Pels, Kevin; Kodadek, Thomas
2015-03-09
The synthesis of libraries of conformationally constrained peptide-like oligomers is an important goal in combinatorial chemistry. In this regard an attractive building block is the N-alkylated peptide, also known as a peptide tertiary amide (PTA). PTAs are conformationally constrained because of allylic 1,3 strain interactions. We report here an improved synthesis of these species on solid supports through the use of reductive amination chemistry using amino acid-terminated, bead-displayed oligomers and diverse aldehydes. The utility of this chemistry is demonstrated by the synthesis of a library of 10,000 mixed peptoid-PTA oligomers.
Method of texturing a superconductive oxide precursor
DeMoranville, Kenneth L.; Li, Qi; Antaya, Peter D.; Christopherson, Craig J.; Riley, Jr., Gilbert N.; Seuntjens, Jeffrey M.
1999-01-01
A method of forming a textured superconductor wire includes constraining an elongated superconductor precursor between two constraining elongated members placed in contact therewith on opposite sides of the superconductor precursor, and passing the superconductor precursor with the two constraining members through flat rolls to form the textured superconductor wire. The method includes selecting desired cross-sectional shape and size constraining members to control the width of the formed superconductor wire. A textured superconductor wire formed by the method of the invention has regular-shaped, curved sides and is free of flashing. A rolling assembly for single-pass rolling of the elongated precursor superconductor includes two rolls, two constraining members, and a fixture for feeding the precursor superconductor and the constraining members between the rolls. In alternate embodiments of the invention, the rolls can have machined regions which will contact only the elongated constraining members and affect the lateral deformation and movement of those members during the rolling process.
Theory of constraints for publicly funded health systems.
Sadat, Somayeh; Carter, Michael W; Golden, Brian
2013-03-01
Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.
Physics capabilities of the SNO+ experiment
NASA Astrophysics Data System (ADS)
Arushanova, E.; Back, A. R.;
2017-09-01
SNO+ will soon enter its first phase of physics data-taking. The Canadian-based detector forms part of the SNOLAB underground facility, in a Sudbury nickel mine; its location provides more than two kilometres of rock overburden. We present an overview of the SNO+ experiment and its physics capabilities. Our primary goal is the search for neutrinoless double-beta decay, where our expected sensitivity would place an upper limit of 1.9 × 10^26 y, at 90% CL, on the half-life of neutrinoless double-beta decay in 130Te. We also intend to build on the success of SNO by studying the solar neutrino spectrum. In the unloaded scintillator phase SNO+ has the ability to make precision measurements of the fluxes of low-energy pep neutrinos and neutrinos from the CNO cycle. Other physics goals include: determining the spectrum of reactor antineutrinos, to further constrain Δm^2_12; detecting neutrinos produced by a galactic supernova; and investigating certain modes of nucleon decay.
Vogelsang, David A; Bonnici, Heidi M; Bergström, Zara M; Ranganath, Charan; Simons, Jon S
2016-08-01
To remember a previous event, it is often helpful to use goal-directed control processes to constrain what comes to mind during retrieval. Behavioral studies have demonstrated that incidental learning of new "foil" words in a recognition test is superior if the participant is trying to remember studied items that were semantically encoded compared to items that were non-semantically encoded. Here, we applied subsequent memory analysis to fMRI data to understand the neural mechanisms underlying the "foil effect". Participants encoded information during deep semantic and shallow non-semantic tasks and were tested in a subsequent blocked memory task to examine how orienting retrieval towards different types of information influences the incidental encoding of new words presented as foils during the memory test phase. To assess memory for foils, participants performed a further surprise old/new recognition test involving foil words that were encountered during the previous memory test blocks as well as completely new words. Subsequent memory effects, distinguishing successful versus unsuccessful incidental encoding of foils, were observed in regions that included the left inferior frontal gyrus and posterior parietal cortex. The left inferior frontal gyrus exhibited disproportionately larger subsequent memory effects for semantic than non-semantic foils, and significant overlap in activity during semantic, but not non-semantic, initial encoding and foil encoding. The results suggest that orienting retrieval towards different types of foils involves re-implementing the neurocognitive processes that were involved during initial encoding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
A Goal Programming/Constrained Regression Review of the Bell System Breakup.
1985-05-01
characteristically employ. 2. MULTI-PRODUCT COST MODEL AND DATA DETAILS: When technical efficiency (i.e. zero waste) can be assumed... but we believe that it was probably technical (= zero waste) efficiency, by virtue of the following reasons. Scale efficiency was a...
Complete to Compete: Common College Completion Metrics
ERIC Educational Resources Information Center
Reyna, Ryan
2010-01-01
Governors face unprecedented demands across state government to deliver vital services in an environment of constrained resources. Higher education is no exception. States must increase the number of high-quality college graduates within available funding to meet workforce needs and compete globally. To meet this goal, policymakers--including…
NASA Astrophysics Data System (ADS)
Bompard, E.; Ma, Y. C.; Ragazzi, E.
2006-03-01
Competition has been introduced in electricity markets with the goal of reducing prices and improving efficiency. The basic idea behind this choice is that, in competitive markets, a greater quantity of the good is exchanged at a lower price, leading to higher market efficiency. Electricity markets are quite different from other commodity markets, mainly due to the physical constraints related to the network structure that may impact market performance. The network structure of the system on which the economic transactions are undertaken poses strict physical and operational constraints. Strategic interactions among producers that game the market with the objective of maximizing their producer surplus must be taken into account when modeling competitive electricity markets. The physical constraints specific to electricity markets provide additional opportunities for gaming by the market players. Game theory provides a tool to model such a context. This paper discusses the application of game theory to physically constrained electricity markets, with the goal of providing tools for assessing market performance and pinpointing the critical network constraints that may impact market efficiency. The basic models of game theory specifically designed to represent electricity markets are presented. The IEEE 30-bus test system of the constrained electricity market is discussed to show the network impacts on market performance in the presence of strategic bidding behavior by the producers.
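A minimal sketch of the kind of model involved (the demand curve, costs, and single capacity cap are invented, and this is far simpler than an IEEE 30-bus study): a two-producer Cournot game under linear inverse demand, where capping one producer's deliverable quantity, a stand-in for a binding network constraint, raises the market price above the unconstrained equilibrium.

```python
def best_response(q_other, a=100.0, b=1.0, c=10.0, cap=None):
    """Cournot best response under inverse demand P = a - b*(q1 + q2)
    and constant marginal cost c; 'cap' models a binding network limit."""
    q = (a - c - b * q_other) / (2.0 * b)
    if cap is not None:
        q = min(q, cap)
    return max(q, 0.0)

def equilibrium(cap1=None, iters=200):
    """Iterate best responses until they settle at the Nash equilibrium."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, cap=cap1)
        q2 = best_response(q1)
    price = 100.0 - (q1 + q2)
    return q1, q2, price

unconstrained = equilibrium()         # q1 = q2 = 30, price = 40
constrained = equilibrium(cap1=20.0)  # q1 capped at 20, q2 = 35, price = 45
print(unconstrained, constrained)
```

Capping one producer lets the rival profitably withhold less output relief than the cap removes, so total quantity falls and price rises, the basic mechanism by which network constraints degrade market efficiency.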
Aiding the search: Examining individual differences in multiply-constrained problem solving.
Ellis, Derek M; Brewer, Gene A
2018-07-01
Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, that WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions, such that high-WMC participants benefit more from preexposure to correct solutions than low-WMC participants do (although low-WMC participants benefit from preexposure as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.
Medication management strategies used by older adults with heart failure: A systems-based analysis.
Mickelson, Robin S; Holden, Richard J
2017-09-01
Older adults with heart failure use strategies to cope with the constraining barriers that impede medication management. Strategies are behavioral adaptations that allow goal achievement despite these constraining conditions. When strategies do not exist, or are ineffective or maladaptive, medication performance and health outcomes are at risk. While constraints on medication adherence are described in the literature, the strategies patients use to manage medications are less well described or understood. Guided by cognitive engineering concepts, the aim of this study was to describe and analyze the strategies used by older adults with heart failure to achieve their medication management goals. This mixed-methods study employed an empirical strategies analysis method to elicit the medication management strategies used by older adults with heart failure. Observation and interview data collected from 61 older adults with heart failure and 31 caregivers were analyzed using qualitative content analysis to derive categories, patterns, and themes within and across cases. The analysis yielded thematic sub-categories describing planned and ad hoc methods of strategic adaptation. Stable strategies proactively adjusted the medication management process, the environment, or the patients themselves. Patients applied situational strategies (planned or ad hoc) to irregular or unexpected situations. Medication non-adherence was itself a strategy, employed when life goals conflicted with medication adherence. The health system was a source of constraints without providing commensurate strategies. Patients strove to control their medication system and achieve goals using adaptive strategies. Future patient self-management research can benefit from the methods and theories used to study professional work, such as strategies analysis.
2011-01-01
Background Over the past two decades, there has been an increasing focus on quality of life outcomes in urological diseases. Patient-reported outcomes research has relied on structured assessments that constrain interpretation of the impact of disease and treatments. In this study, we present content analysis and psychometric evaluation of the Quality of Life Appraisal Profile. Our evaluation of this measure is a prelude to a prospective comparison of quality of life outcomes of reconstructive procedures after cystectomy. Methods Fifty patients with bladder cancer were interviewed prior to surgery using the Quality of Life Appraisal Profile. Patients also completed the EORTC QLQ-C30 and demographics. Analysis included content coding of personal goal statements generated by the Appraisal Profile, examination of the relationship of goal attainment to content, and association of goal-based measures with QLQ-C30 scales. Results Patients reported an average of 10 personal goals, reflecting motivational themes of achievement, problem solving, avoidance of problems, maintaining desired circumstances, letting go of roles and responsibilities, acceptance of undesirable situations, and attaining milestones. In total, 503 goal statements were coded using 40 different content categories. Progress toward goal attainment was positively correlated with relationships and activities goals, but negatively correlated with health concerns. Associations among goal measures provided evidence for construct validity. Goal content also differed according to age, gender, employment, and marital status, lending further support for construct validity. QLQ-C30 functioning and symptom scales were correlated with goal content, but not with progress toward goal attainment, suggesting that patients may calibrate progress ratings relative to their specific goals. Alternatively, progress may reflect a unique aspect of quality of life untapped by more standard scales.
Conclusions The Brief Quality of Life Appraisal Profile yielded measures of motivation, goal content, and progress that showed meaningful relationships with demographic characteristics and standard quality of life measures. This measure identifies novel concerns and issues in treating patients with bladder cancer, which is necessary for a more comprehensive evaluation of their health-related quality of life. PMID:21324146
The role of fire in ecosystem management
Jerry T. Williams
1995-01-01
USDA Forest Service management practices have significantly changed. Past practices were predicated on a strong public expectation for commodity production and protection from the forces of nature that were perceived to threaten that goal. Fire suppression, selective logging, intensive grazing, constrained prescribed burning, and a general emphasis on wood fiber...
Communicative Competence in American Trial Courtrooms.
ERIC Educational Resources Information Center
Johnson, Bruce C.
Gumperz and Hymes have stated the theoretical goal of one type of sociolinguistic investigation as the characterization of communicative competence, "what a speaker needs to know to communicate effectively in culturally significant settings." This knowledge is seen as describable by a set of rules constraining verbal behavior in reference to…
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz
2016-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
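The Pareto-optimality criterion at the heart of the subensemble search above can be sketched as a simple non-domination filter. This is a generic illustration, not the authors' evolutionary algorithm; the objective vectors stand in for the three historical performance measures, under the assumption that lower scores are better:

```python
# Hypothetical sketch of a Pareto filter: a candidate subensemble is kept only
# if no other candidate is at least as good on every objective and strictly
# better on at least one. Objective values are illustrative error measures.

def dominates(a, b):
    """True if objective vector a dominates b (minimization on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of a list of objective vectors."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]
```

An evolutionary algorithm like the one the abstract mentions applies this test repeatedly while mutating and recombining candidate subensembles, rather than enumerating all of them.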
NASA Astrophysics Data System (ADS)
Das, Rajarshi
2014-03-01
The Tokai to Kamioka (T2K) experiment is a long-baseline neutrino oscillation experiment located in Japan whose primary goal is to precisely measure multiple neutrino flavor oscillation parameters. An off-axis muon neutrino beam with an energy spectrum that peaks at 600 MeV is generated at the J-PARC facility and directed towards the 50-kiloton Super-Kamiokande (SK) water Cherenkov detector located 295 km away. The rates of electron neutrino and muon neutrino interactions are measured at SK and compared with expected model values. This yields a measurement of the neutrino oscillation parameters sin²2θ₁₃ and sin²2θ₂₃. Measurements from a near detector 280 m downstream of the neutrino beam target are used to constrain uncertainties in the beam flux prediction and neutrino interaction rates. We present a measurement of inclusive charged-current neutrino interactions on water. We used several sub-detectors in the ND280 complex, including a pi-zero detector (P0D) with alternating planes of plastic scintillator and water bag layers, a time projection chamber (TPC), and a fine-grained detector (FGD), to detect and reconstruct muons from neutrino charged-current events. Finally, we describe a "forward-fitting" technique that is used to constrain the beam flux and cross section as an input to the neutrino oscillation analysis, and also to extract a flux-averaged inclusive charged-current cross section on water.
Huang, Julie Y; Bargh, John A
2014-04-01
We propose the Selfish Goal model, which holds that a person's behavior is driven by psychological processes called goals that guide his or her behavior, at times in contradictory directions. Goals can operate both consciously and unconsciously, and when activated they can trigger downstream effects on a person's information processing and behavioral possibilities that promote only the attainment of goal end-states (and not necessarily the overall interests of the individual). Hence, goals influence a person as if the goals themselves were selfish and interested only in their own completion. We argue that there is an evolutionary basis to believe that conscious goals evolved from unconscious and selfish forms of pursuit. This theoretical framework predicts the existence of unconscious goal processes capable of guiding behavior in the absence of conscious awareness and control (the automaticity principle), the ability of the most motivating or active goal to constrain a person's information processing and behavior toward successful completion of that goal (the reconfiguration principle), structural similarities between conscious and unconscious goal pursuit (the similarity principle), and goal influences that produce apparent inconsistencies or counterintuitive behaviors in a person's behavior extended over time (the inconsistency principle). Thus, we argue that a person's behaviors are indirectly selected at the goal level but expressed (and comprehended) at the individual level.
An analysis of the impact of LHC Run I proton-lead data on nuclear parton densities.
Armesto, Néstor; Paukkunen, Hannu; Penín, José Manuel; Salgado, Carlos A; Zurita, Pía
2016-01-01
We report on an analysis of the impact of the experimental data available on hard processes in proton-lead collisions during Run I at the Large Hadron Collider on nuclear modifications of parton distribution functions. Our analysis is restricted to the EPS09 and DSSZ global fits. The measurements that we consider comprise production of massive gauge bosons, jets, charged hadrons, and pions. This is the first time that a study of nuclear PDFs includes such a number of different observables. The goal of the paper is twofold: (i) to check the description of the data by nPDFs, as well as the relevance of these nuclear effects, in a quantitative manner; and (ii) to test the constraining power of these data in eventual global fits, for which we use the Bayesian reweighting technique. We find an overall good, even too good, description of the data, indicating that more constraining power would require better control over the systematic uncertainties and/or the proper proton-proton reference from LHC Run II. Some of the observables, however, show sizeable tension with specific choices of proton and nuclear PDFs. We also comment on the corresponding improvements in the theoretical treatment.
High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations
NASA Technical Reports Server (NTRS)
Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.
2017-01-01
To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high-aspect-ratio flexible wings. We developed a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations, including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high-aspect-ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high-aspect-ratio wings. This research has resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.
Mobile learning in resource-constrained environments: a case study of medical education.
Pimmer, Christoph; Linxen, Sebastian; Gröhbiel, Urs; Jha, Anil Kumar; Burg, Günter
2013-05-01
The achievement of the millennium development goals may be facilitated by the use of information and communication technology in medical and health education. This study intended to explore the use and impact of educational technology in medical education in resource-constrained environments. A multiple case study was conducted in two Nepalese teaching hospitals. The data were analysed using activity theory as an analytical basis. There was little evidence for formal e-learning, but the findings indicate that students and residents adopted mobile technologies, such as mobile phones and small laptops, as cultural tools for surprisingly rich 'informal' learning in a very short time. These tools allowed learners to enhance (a) situated learning, by immediately connecting virtual information sources to their situated experiences; (b) cross-contextual learning by documenting situated experiences in the form of images and videos and re-using the material for later reflection and discussion and (c) engagement with educational content in social network communities. By placing the students and residents at the centre of the new learning activities, this development has begun to affect the overall educational system. Leveraging these tools is closely linked to the development of broad media literacy, including awareness of ethical and privacy issues.
NASA Astrophysics Data System (ADS)
Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.
2015-12-01
The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. 
EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection: a goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system whose computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
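The strict-priority selection rule described above can be sketched as a single greedy pass over the goal set. The single scalar resource, the goal names, and the costs are illustrative assumptions; the flight software's incremental, just-in-time re-planning is not reproduced here:

```python
# Hypothetical sketch of strict-priority goal selection over one oversubscribed
# resource: goals are considered in priority order and admitted only if the
# remaining capacity allows, so a lower-priority goal never displaces a
# higher-priority one. Goal names and costs are made up for illustration.

def select_goals(goals, capacity):
    """goals: list of (name, priority, cost); a lower priority number wins."""
    selected, remaining = [], capacity
    for name, priority, cost in sorted(goals, key=lambda g: g[1]):
        if cost <= remaining:
            selected.append(name)
            remaining -= cost
    return selected
```

Because the pass is a simple sort-and-scan, re-running it when a goal is added or updated is cheap, which is the property an embedded system with constrained computation needs.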
Overview: Exobiology in solar system exploration
NASA Technical Reports Server (NTRS)
Carle, Glenn C.; Schwartz, Deborah E.
1992-01-01
In Aug. 1988, the NASA Ames Research Center held a three-day symposium in Sunnyvale, California, to discuss the subject of exobiology in the context of exploration of the solar system. Leading authorities in exobiology presented invited papers and assisted in setting future goals. The goals they set were as follows: (1) review relevant knowledge learned from planetary exploration programs; (2) detail some of the information that is yet to be obtained; (3) describe future missions and how exobiologists, as well as other scientists, can participate; and (4) recommend specific ways exobiology questions can be addressed on future exploration missions. These goals are in agreement with those of the Solar System Exploration Committee (SSEC) of the NASA Advisory Council. Formed in 1980 to respond to the planetary exploration strategies set forth by the Space Science Board of the National Academy of Sciences' Committee on Planetary and Lunar Exploration (COMPLEX), the SSEC's main function is to review the entire planetary program. The committee formulated a long-term plan (within a constrained budget) that would ensure a vital, exciting, and scientifically valuable effort through the turn of the century. The SSEC's goals include the following: determining the origin, evolution, and present state of the solar system; understanding Earth through comparative planetology studies; and revealing the relationship between the chemical and physical evolution of the solar system and the appearance of life. The SSEC's goals are consistent with the over-arching goal of NASA's Exobiology Program, which provides the critical framework and support for basic research. The research is divided into the following four elements: (1) cosmic evolution of the biogenic compounds; (2) prebiotic evolution; (3) origin and early evolution of life; and (4) evolution of advanced life.
Seed availability constrains plant species sorting along a soil fertility gradient
Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith
2011-01-01
1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...
Beyond Test Prep: Moving the Mission Statement into the Classroom
ERIC Educational Resources Information Center
Jones, Emily
2010-01-01
A random walk through the mission statements of independent schools shows an admirable determination to educate students for an unknowable future, for creativity and problem solving, for responsible citizenship, for resiliency. Nevertheless, many of these same schools are constrained to work towards their mission-central goals in time stolen from…
LSST: Cadence Design and Simulation
NASA Astrophysics Data System (ADS)
Cook, Kem H.; Pinto, P. A.; Delgado, F.; Miller, M.; Petry, C.; Saha, A.; Gee, P. A.; Tyson, J. A.; Ivezic, Z.; Jones, L.; LSST Collaboration
2009-01-01
The LSST Project has developed an operations simulator to investigate how best to observe the sky to achieve its multiple science goals. The simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues ranging from sizing of slew motors, to design of cryogen lines to the camera. The simulator is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. The operations simulator has been used to demonstrate a 'universal' cadence which delivers the science requirements for a deep cosmology survey, a Near Earth Object Survey and good sampling in the time domain. We will present the results of simulating 10 years of LSST operations using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters. These simulations demonstrate the capability of the LSST to deliver a 25,000 square degree survey probing the time domain including 20,000 square degrees for a uniform deep, wide, fast survey, while effectively surveying for NEOs over the same area. We will also present our plans for future development of the simulator--better global minimization of slew time and eventual transition to a scheduler for the real LSST.
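The slew-minimization element of the cadence problem above can be illustrated with a toy nearest-neighbor scheduler: from the current pointing, always observe the closest unvisited field next. The flat-sky distance and the field coordinates are illustrative assumptions; the actual simulator balances many science-cadence goals beyond slew time:

```python
# Toy greedy scheduler minimizing slew distance. Pointings are (x, y) pairs on
# a flat sky; the real problem uses spherical coordinates and cadence weights.
import math

def slew_dist(a, b):
    """Flat-sky approximation of the slew distance between two pointings."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_schedule(start, fields):
    """Order fields by repeatedly slewing to the nearest remaining one."""
    order, current, remaining = [], start, list(fields)
    while remaining:
        nxt = min(remaining, key=lambda f: slew_dist(current, f))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order
```

Nearest-neighbor greed is only locally optimal, which is why the abstract mentions plans for better global minimization of slew time.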
Finding intrinsic rewards by embodied evolution and constrained reinforcement learning.
Uchibe, Eiji; Doya, Kenji
2008-12-01
Understanding the design principle of reward functions is a substantial challenge both in artificial intelligence and neuroscience. Successful acquisition of a task usually requires not only rewards for goals, but also for intermediate states to promote effective exploration. This paper proposes a method for designing 'intrinsic' rewards of autonomous agents by combining constrained policy gradient reinforcement learning and embodied evolution. To validate the method, we use Cyber Rodent robots, in which collision avoidance, recharging from battery packs, and 'mating' by software reproduction are three major 'extrinsic' rewards. We show in hardware experiments that the robots can find appropriate 'intrinsic' rewards for the vision of battery packs and other robots to promote approach behaviors.
NASA Technical Reports Server (NTRS)
Wolszczan, Alexander; Kulkarni, Shrinivas R; Anderson, Stuart B.
2003-01-01
The objective of this proposal was to continue investigations of neutron star planetary systems in an effort to describe and understand their origin, orbital dynamics, basic physical properties, and their relationship to planets around normal stars. This research represents an important element of the process of constraining the physics of planet formation around various types of stars. The research goals of this project included long-term timing measurements of the planet pulsar PSR B1257+12, to search for more planets around it and to study the dynamics of the whole system, and sensitive searches for millisecond pulsars to detect further examples of old, rapidly spinning neutron stars with planetary systems. The instrumentation used in our project included the 305-m Arecibo antenna with the Penn State Pulsar Machine (PSPM), the 100-m Green Bank Telescope with the Berkeley-Caltech Pulsar Machine (BCPM), and the 100-m Effelsberg and 64-m Parkes telescopes equipped with observatory-supplied backend hardware.
PANATIKI: A Network Access Control Implementation Based on PANA for IoT Devices
Sanchez, Pedro Moreno; Lopez, Rafa Marin; Gomez Skarmeta, Antonio F.
2013-01-01
Internet of Things (IoT) networks are the pillar of recent novel scenarios, such as smart cities or e-healthcare applications. Among other challenges, these networks cover the deployment and interaction of small devices with constrained capabilities and Internet protocol (IP)-based networking connectivity. These constrained devices usually require connection to the Internet to exchange information (e.g., management or sensing data) or access network services. However, only authenticated and authorized devices can, in general, establish this connection. The so-called authentication, authorization and accounting (AAA) services are in charge of performing these tasks on the Internet. Thus, it is necessary to deploy protocols that allow constrained devices to verify their credentials against AAA infrastructures. The Protocol for Carrying Authentication for Network Access (PANA) has been standardized by the Internet engineering task force (IETF) to carry the Extensible Authentication Protocol (EAP), which provides flexible authentication upon the presence of AAA. To the best of our knowledge, this paper is the first deep study of the feasibility of EAP/PANA for network access control in constrained devices. We provide light-weight versions and implementations of these protocols to fit them into constrained devices. These versions have been designed to reduce the impact in standard specifications. The goal of this work is two-fold: (1) to demonstrate the feasibility of EAP/PANA in IoT devices; (2) to provide the scientific community with the first light-weight interoperable implementation of EAP/PANA for constrained devices in the Contiki operating system (Contiki OS), called PANATIKI. The paper also shows a testbed, simulations and experimental results obtained from real and simulated constrained devices. PMID:24189332
'Constrained collaboration': Patient empowerment discourse as resource for countervailing power.
Vinson, Alexandra H
2016-11-01
Countervailing powers constrain the authority and autonomy of the medical profession. One countervailing power is patient consumerism, a movement with roots in health social movements. Patient empowerment discourses that emerge from health social movements suggest that active patienthood is a normative good, and that patients should inform themselves, claim their expertise, and participate in their care. Yet, little is known about how patient empowerment is understood by physicians. Drawing on ethnographic fieldwork in an American medical school, this article examines how physicians teach medical students to carry out patient encounters while adhering to American cultural expectations of a collaborative physician-patient relationship. Overt medical paternalism is characterised by professors as 'here's the orders' paternalism, and shown to be counterproductive to 'closing the deal' - achieving patient agreement to a course of treatment. To explain how physicians accomplish their therapeutic goals without violating cultural mandates of patient empowerment I develop the concept of 'constrained collaboration'. This analysis of constrained collaboration contrasts with structural-level narratives of diminishing professional authority and contributes to a theory of the micro-level reproduction of medical authority as a set of interactional practices. © 2016 Foundation for the Sociology of Health & Illness.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
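The constrained nuclear norm minimization mentioned above is typically solved by inexact augmented Lagrange multiplier (ALM) schemes whose core subproblem has a closed-form proximal step: singular-value thresholding. The sketch below illustrates only that step, not the authors' full algorithm; the rank-1-plus-noise example is hypothetical.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of tau * ||.||_*.

    Returns argmin_Z 0.5 * ||Z - X||_F^2 + tau * ||Z||_*, computed by
    soft-thresholding the singular values of X. This is the closed-form
    subproblem solved at each iteration of inexact-ALM nuclear-norm solvers.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
    return U @ np.diag(s_thr) @ Vt

# Hypothetical example: a rank-1 matrix plus small noise. Thresholding
# suppresses the noise singular values and recovers a low-rank estimate.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(50), rng.standard_normal(40)
X = np.outer(u, v) + 0.01 * rng.standard_normal((50, 40))
Z = svt(X, tau=1.0)
print(np.linalg.matrix_rank(Z))  # low rank after thresholding
```

A full solver alternates this thresholding with updates of the other blocks of variables and of the Lagrange multipliers until the constraints are satisfied to tolerance.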
Energy access and living standards: some observations on recent trends
NASA Astrophysics Data System (ADS)
Rao, Narasimha D.; Pachauri, Shonali
2017-02-01
A subset of Sustainable Development Goals pertains to improving people’s living standards at home. These include the provision of access to electricity, clean cooking energy, improved water and sanitation. We examine historical progress in energy access in relation to other living standards. We assess regional patterns in the pace of progress and relative priority accorded to these different services. Countries in sub-Saharan Africa would have to undergo unprecedented rates of improvement in energy access in order to achieve the goal of universal electrification by 2030. The world over, access to clean cooking fuels and sanitation facilities consistently lags behind improved water and electricity access by a large margin. These two deprivations are more concentrated among poor countries, and poor people in middle income countries. They are also correlated with health risks faced disproportionately by women. However, some Asian countries have been able to achieve faster progress in electrification at lower income levels compared to industrialized countries’ earlier efforts. These examples offer hope that future efforts need not be constrained by historical rates of progress.
Pleiades Experiments on the NIF: Phase II-C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benstead, James; Morton, John; Guymer, Thomas
2015-06-08
Pleiades was a radiation transport campaign fielded at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) between 2011 and 2014. The primary goals of the campaign were to develop and characterise a reproducible ~350eV x-ray drive and to constrain a number of material data properties required to successfully model the propagation of radiation through two low-density foam materials. A further goal involved the development and qualification of diagnostics for future radiation transport experiments at NIF. Pleiades was a collaborative campaign involving teams from both AWE and the Los Alamos National Laboratory (LANL).
Jordaan, Sarah M; Diaz Anadon, Laura; Mielke, Erik; Schrag, Daniel P
2013-01-01
The Renewable Fuel Standard (RFS) is among the cornerstone policies created to increase U.S. energy independence by using biofuels. Although greenhouse gas emissions have played a role in shaping the RFS, water implications are less understood. We demonstrate a spatial, life cycle approach to estimate water consumption of transportation fuel scenarios, including a comparison to current water withdrawals and drought incidence by state. The water consumption and land footprint of six scenarios are compared to the RFS, including shale oil, coal-to-liquids, shale gas-to-liquids, corn ethanol, and cellulosic ethanol from switchgrass. The corn scenario is the most water and land intense option and is weighted toward drought-prone states. Fossil options and cellulosic ethanol require significantly less water and are weighted toward less drought-prone states. Coal-to-liquids is an exception, where water consumption is partially weighted toward drought-prone states. Results suggest that there may be considerable water and land impacts associated with meeting energy security goals through using only biofuels. Ultimately, water and land requirements may constrain energy security goals without careful planning, indicating that there is a need to better balance trade-offs. Our approach provides policymakers with a method to integrate federal policies with regional planning over various temporal and spatial scales.
Why REM sleep? Clues beyond the laboratory in a more challenging world.
Horne, Jim
2013-02-01
REM sleep (REM) seems more likely to prepare for ensuing wakefulness than to provide recovery from prior wakefulness, as happens with 'deeper' nonREM. Many of REM's characteristics are 'wake-like' (unlike nonREM), including several common to feeding. These, with recent findings outside sleep, provide perspectives on REM beyond those from the laboratory. REM can interchange with a wakefulness involving motor output, indicating that REM's atonia is integral to its function. Wakefulness for 'wild' mammals largely comprises exploration; a complex opportunistic behaviour mostly for foraging, involving: curiosity, minimising risks, (emotional) coping, navigation, when (including circadian timing) to investigate new destinations; all linked to 'purposeful, goal directed movement'. REM reflects these adaptive behaviours (including epigenesis), masked in laboratories having constrained, safe, unchanging, unchallenging, featureless, exploration-free environments with ad lib food. Similarly masked may be REM's functions for today's humans living safe, routine lives, with easy food accessibility. In these respects animal and human REM studies are not sufficiently 'ecological'. Copyright © 2012 Elsevier B.V. All rights reserved.
Constraining the dark energy models with H(z) data: An approach independent of H0
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Fotios K.; Basilakos, Spyros
2018-03-01
We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including that of the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and supernova type Ia (SNIa) probes correctly reveals the expansion of the Universe as found by the team of Planck. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
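The H0-independent chi-square described above can be sketched concretely. Writing the model as H(z) = H0 E(z), the chi-square is quadratic in H0 and can be minimized over it analytically, leaving a statistic that depends only on the shape parameters. The code below is a minimal illustration with synthetic data; the redshifts, uncertainties, and fiducial values are hypothetical stand-ins, not the compilation analysed in the paper.

```python
import numpy as np

# Hypothetical H(z) "data" generated from a fiducial flat LCDM model
# (illustrative values only).
z = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.8])

def E(z, om):
    # Dimensionless expansion rate E(z) = H(z)/H0 for flat LCDM.
    return np.sqrt(om * (1 + z) ** 3 + 1 - om)

H0_true, om_true = 70.0, 0.3
rng = np.random.default_rng(1)
sigma = np.full_like(z, 5.0)
Hdat = H0_true * E(z, om_true) + rng.normal(0.0, 1.0, z.size)

def chi2_marg(om):
    """chi^2 minimized analytically over H0.

    Since chi^2(H0, om) = sum_i (H_i - H0*E_i)^2 / s_i^2 is quadratic in
    H0, its minimum over H0 has the closed form chi^2 = A - B^2 / C with
    A = sum(H_i^2/s_i^2), B = sum(H_i*E_i/s_i^2), C = sum(E_i^2/s_i^2),
    which no longer depends on the Hubble constant.
    """
    Ez = E(z, om)
    A = np.sum(Hdat ** 2 / sigma ** 2)
    B = np.sum(Hdat * Ez / sigma ** 2)
    C = np.sum(Ez ** 2 / sigma ** 2)
    return A - B ** 2 / C

oms = np.linspace(0.1, 0.6, 201)
best = oms[np.argmin([chi2_marg(o) for o in oms])]
print(best)  # recovers a value near the fiducial Omega_m = 0.3
```

The same marginalization applies unchanged to richer parametrizations (e.g. a w0-w1 dark energy equation of state), since only the form of E(z) changes.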
Compendium of Operations Research and Economic Analysis Studies
1990-10-01
Constraining ... to a maximum of 1 ... will realize a near-term reduction of approximately $25 million. Achieving ... goal reductions for shorter lead ... which have no potential to achieve consolidation cost effectiveness and pull these out of the bank early for shipment, and (4) that the depots allow
ERIC Educational Resources Information Center
Hoang, Hai; Huang, Melrose; Sulcer, Brian; Yesilyurt, Suleyman
2017-01-01
College math is a gateway course that has become a constraining gatekeeper for tens of thousands of students annually. Every year, over 500,000 students fail developmental mathematics, preventing them from achieving their college and career goals. The Carnegie Math Pathways initiative offers students an alternative. It comprises two Pathways…
ERIC Educational Resources Information Center
Tan, Tony Xing; Marfo, Kofi; Dedrick, Robert F.
2010-01-01
The central goal of this longitudinal study was to examine behavioral adjustment outcomes in a sample of preschool-age adopted Chinese girls. Research examining the effects of institutional deprivation on post-adoption behavioral outcomes for internationally adopted children has been constrained by the frequent unavailability of data on the…
NASA Technical Reports Server (NTRS)
Mushotzky, Richard (Technical Monitor); Elvis, Martin
2004-01-01
The aim of the proposal is to investigate the absorption properties of a sample of intermediate redshift quasars. The main goals of the project are: measure the redshift and the column density of the X-ray absorbers; test the correlation between absorption and redshift suggested by ROSAT and ASCA data; constrain the absorber ionization status and metallicity; constrain the absorber dust content and composition through the comparison between the amount of X-ray absorption and optical dust extinction. Unanticipated low energy cut-offs were discovered in ROSAT spectra of quasars and confirmed by ASCA, BeppoSAX and Chandra. In most cases it was not possible to constrain adequately the redshift of the absorber from the X-ray data alone. Two possibilities remain open: a) absorption at the quasar redshift; and b) intervening absorption. The evidence in favour of intrinsic absorption is all indirect. Sensitive XMM observations can discriminate between these different scenarios. If the absorption is at the quasar redshift we can study whether the quasar environment evolves with cosmic time.
Baldassarre, Gianluca; Mannella, Francesco; Fiore, Vincenzo G; Redgrave, Peter; Gurney, Kevin; Mirolli, Marco
2013-05-01
Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. 
The model, together with the background knowledge reviewed in the paper, represents a framework that can be used to guide the design and interpretation of empirical experiments on IMs, and to computationally validate and further develop theories on them. Copyright © 2012 Elsevier Ltd. All rights reserved.
Constraining Lipid Biomarker Paleoclimate Proxies in a Small Arctic Watershed
NASA Astrophysics Data System (ADS)
Dion-Kirschner, H.; McFarlin, J. M.; Axford, Y.; Osburn, M. R.
2017-12-01
Arctic amplification of climate change renders high-latitude environments unusually sensitive to changes in climatic conditions (Serreze and Barry, 2011). Lipid biomarkers, and their hydrogen and carbon isotopic compositions, can yield valuable paleoclimatic and paleoecological information. However, many variables affect the production and preservation of lipids and their constituent isotopes, including precipitation, plant growth conditions, biosynthesis mechanisms, and sediment depositional processes (Sachse et al., 2012). These variables are particularly poorly constrained for high-latitude environments, where trees are sparse or not present, and plants grow under continuous summer light and cool temperatures during a short growing season. Here we present a source-to-sink study of a single watershed from the Kangerlussuaq region of southwest Greenland. Our analytes from in and around `Little Sugarloaf Lake' (LSL) include terrestrial and aquatic plants, plankton, modern lake water, surface sediments, and a sediment core. This diverse sample set allows us to fulfill three goals: 1) We evaluate the production of lipids and isotopic signatures in the modern watershed in comparison to modern climate. Our data exhibit genus-level trends in leaf wax production and isotopic composition, and help clarify the difference between terrestrial and aquatic signals. 2) We evaluate the surface sediment of LSL to determine how lipid biomarkers from the watershed are incorporated into sediments. We constrain the relative contributions of terrestrial plants, aquatic plants, and other aquatic organisms to the sediment in this watershed. 3) We apply this modern source-to-sink calibration to the analysis of a 65 cm sediment core record. Our core is organic-rich, and relatively high deposition rates allow us to reconstruct paleoenvironmental changes with high resolution. 
Our work will help determine the veracity of these common paleoclimate proxies, specifically for research in southwest Greenland, and will enable an accurate, high-resolution watershed-level reconstruction of Holocene conditions. Serreze, M. and Barry, R. (2011). Global and Planetary Change, 77, 85-96. Sachse, D., et al. (2012). Annual Review of Earth and Planetary Sciences, 40, 221-249.
Connecting Colorado's Renewable Resources to the Markets in a Carbon-Constrained Electricity Sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2009-12-31
The benchmark goal that drives the report is to achieve a 20 percent reduction in carbon dioxide (CO{sub 2}) emissions in Colorado's electricity sector below 2005 levels by 2020. We refer to this as the '20 x 20 goal.' In discussing how to meet this goal, the report concentrates particularly on the role of utility-scale renewable energy and high-voltage transmission. An underlying recognition is that any proposed actions must not interfere with electric system reliability and should minimize financial impacts on customers and utilities. The report also describes the goals of Colorado's New Energy Economy - identified here, in summary, as the integration of energy, environment, and economic policies that leads to an increased quality of life in Colorado. We recognize that a wide array of options are under constant consideration by professionals in the electric industry, and the regulatory community. Many options are under discussion on this topic, and the costs and benefits of the options are inherently difficult to quantify. Accordingly, this report should not be viewed as a blueprint with specific recommendations for the timing, siting, and sizing of generating plants and high-voltage transmission lines. We convened the project with the goal of supplying information inputs for consideration by the state's electric utilities, legislators, regulators, and others as we work creatively to shape our electricity sector in a carbon-constrained world. The report addresses various issues that were raised in the Connecting Colorado's Renewable Resources to the Markets report, also known as the SB07-91 Report. That report was produced by the Senate Bill 2007-91 Renewable Resource Generation Development Areas Task Force and presented to the Colorado General Assembly in 2007. 
The SB07-91 Report provided the Governor, the General Assembly, and the people of Colorado with an assessment of the capability of Colorado's utility-scale renewable resources to contribute electric power in the state from 10 Colorado generation development areas (GDAs) that have the capacity for more than 96,000 megawatts (MW) of wind generation and 26,000 MW of solar generation. The SB07-91 Report recognized that only a small fraction of these large capacity opportunities are destined to be developed. As a rough comparison, 13,964 MW of installed nameplate capacity was available in Colorado in 2008. The legislature did not direct the SB07-91 task force to examine several issues that are addressed in the REDI report. These issues include topics such as transmission, regulation, wildlife, land use, permitting, electricity demand, and the roles that different combinations of supply-side resources, demand-side resources, and transmission can play to meet a CO{sub 2} emissions reduction goal. This report, which expands upon research from a wide array of sources, serves as a sequel to the SB07-91 Report. Reports and research on renewable energy and transmission abound. This report builds on the work of many, including professionals who have dedicated their careers to these topics. A bibliography of information resources is provided, along with many citations to the work of others. The REDI Project was designed to present baseline information regarding the current status of Colorado's generation and transmission infrastructure. The report discusses proposals to expand the infrastructure, and identifies opportunities to make further improvements in the state's regulatory and policy environment. The report offers a variety of options for consideration as Colorado seeks pathways to meet the 20 x 20 goal. 
The primary goal of the report is to foster broader discussion regarding how the 20 x 20 goal interacts with electric resource portfolio choices, particularly the expansion of utility-scale renewable energy and the high-voltage transmission infrastructure. The report also is intended to serve as a resource when identifying opportunities stemming from the American Recovery and Reinvestment Act of 2009.
The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6
Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; ...
2016-10-18
Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.
The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6
NASA Astrophysics Data System (ADS)
Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; Hegerl, Gabriele; Knutti, Reto; Matthes, Katja; Santer, Benjamin D.; Stone, Daithi; Tebaldi, Claudia
2016-10-01
Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.
Issues central to a useful image understanding environment
NASA Astrophysics Data System (ADS)
Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.
1992-04-01
A recent DARPA initiative has sparked interest in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.
A new concept for the exploration of Europa.
Rampelotto, Pabulo Henrique
2012-06-01
The Europa Jupiter System Mission (EJSM) is the major Outer Planet Flagship Mission in preparation by NASA. Although well designed, the current EJSM concept may present problematic issues as a Flagship Mission for a long-term exploration program that will occur over the course of decades. For this reason, the present work reviews the current EJSM concept and presents a new strategy for the exploration of Europa. In this concept, the EJSM is reorganized to comprise three independent missions focused on Europa. The missions are split according to scientific goals, which together will give a complete understanding of the potential habitability of Europa, including in situ measurements of signals of life. With this alternative strategy, a complete exploration of Europa would be possible in the next decades, even within a politically and economically constrained environment.
Modelling the 21-cm Signal from the Epoch of Reionization and Cosmic Dawn
NASA Astrophysics Data System (ADS)
Choudhury, T. Roy; Datta, Kanan; Majumdar, Suman; Ghara, Raghunath; Paranjape, Aseem; Mondal, Rajesh; Bharadwaj, Somnath; Samui, Saumyadip
2016-12-01
Studying the cosmic dawn and the epoch of reionization through the redshifted 21-cm line is among the major science goals of the SKA1. Their significance lies in the fact that they are closely related to the very first stars in the Universe. Interpreting the upcoming data would require detailed modelling of the relevant physical processes. In this article, we focus on the theoretical models of reionization that have been worked out by various groups working in India with the upcoming SKA in mind. These models include purely analytical and semi-numerical calculations as well as fully numerical radiative transfer simulations. The predictions of the 21-cm signal from these models would be useful in constraining the properties of the early galaxies using the SKA data.
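For context, the observable such models predict is the differential brightness temperature of the 21-cm line against the CMB, commonly written in its standard simplified form (neglecting peculiar-velocity gradients) as

\[
\delta T_b \approx 27\, x_{\rm HI}\,(1+\delta)\left(1-\frac{T_\gamma}{T_s}\right)\left(\frac{\Omega_b h^2}{0.023}\right)\left(\frac{0.15}{\Omega_m h^2}\,\frac{1+z}{10}\right)^{1/2}\ \mathrm{mK},
\]

where x_HI is the neutral hydrogen fraction, δ the matter overdensity, T_s the spin temperature and T_γ the CMB temperature; analytical, semi-numerical, and radiative transfer approaches differ chiefly in how they compute x_HI and T_s.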
ERIC Educational Resources Information Center
Shelton, Brett E.; Scoresby, Jon
2011-01-01
We discuss the design, creation and implementation of an instructional game for use in a high school poetry class following a commitment to an educational game design principle of "alignment". We studied groups of instructional designers and an interactive fiction computer game they built. The game was implemented in a 9th grade English classroom…
2010-04-01
constrained to the objects it manages. A local reference monitor can be maintained, updated, and replaced with minimal effect on the rest of the system...Compositional assurance is the path towards the goal of JIT Assurance. Construct an individual assurance case for each trusted component. Provide an argument that...local policies combine to enforce the overall system policy. Composability enables JIT Assurance: a component can be patched, upgraded, refreshed
NASA Technical Reports Server (NTRS)
Bregman, Jesse; Sloan, G. C.
1996-01-01
The emission from polycyclic aromatic hydrocarbons (PAHs) in the Orion Bar region is investigated using a combination of narrow-band imaging and long-slit spectroscopy. The goal was to study how the strength of the PAH bands varies with spatial position in this edge-on photo-dissociation region. The specific focus here is how these variations constrain the carrier of the 3.4 micron band.
ERIC Educational Resources Information Center
Hui, Sammy King Fai; Cheung, Hoi Yan
2015-01-01
Technical and vocational education and training (TVET) is often seen to be constraining and focused on specific skill development for specific occupations. This goal is at odds with the demands of a knowledge economy that requires more general educational outcomes for an uncertain and unpredictable labour market. For this reason, Hui (2012) has…
Competitiveness and the Process of Co-adaptation in Team Sport Performance.
Passos, Pedro; Araújo, Duarte; Davids, Keith
2016-01-01
An evolutionary psycho-biological perspective on competitiveness dynamics is presented, focusing on continuous behavioral co-adaptations to constraints that arise in performance environments. We suggest that an athlete's behavioral dynamics are constrained by circumstances of competing for the availability of resources, which once obtained offer possibilities for performance success. This defines the influence of the athlete-environment relationship on competitiveness. Constraining factors in performance include proximity to target areas in team sports and the number of other competitors in a location. By pushing the athlete beyond existing limits, competitiveness enhances opportunities for co-adaptation, innovation and creativity, which can lead individuals toward different performance solutions to achieve the same performance goal. Underpinned by an ecological dynamics framework we examine whether competitiveness is a crucial feature to succeed in team sports. Our focus is on intra-team competitiveness, concerning the capacity of individuals within a team to become perceptually attuned to affordances in a given performance context which can increase their likelihood of success. This conceptualization implies a re-consideration of the concept of competitiveness, not as an inherited trait or entity to be acquired, but rather theorizing it as a functional performer-environment relationship that needs to be explored, developed, enhanced and maintained in team games training programs.
Competitiveness and the Process of Co-adaptation in Team Sport Performance
Passos, Pedro; Araújo, Duarte; Davids, Keith
2016-01-01
An evolutionary psycho-biological perspective on competitiveness dynamics is presented, focusing on continuous behavioral co-adaptations to constraints that arise in performance environments. We suggest that an athlete’s behavioral dynamics are constrained by circumstances of competing for the availability of resources, which once obtained offer possibilities for performance success. This defines the influence of the athlete-environment relationship on competitiveness. Constraining factors in performance include proximity to target areas in team sports and the number of other competitors in a location. By pushing the athlete beyond existing limits, competitiveness enhances opportunities for co-adaptation, innovation and creativity, which can lead individuals toward different performance solutions to achieve the same performance goal. Underpinned by an ecological dynamics framework we examine whether competitiveness is a crucial feature to succeed in team sports. Our focus is on intra-team competitiveness, concerning the capacity of individuals within a team to become perceptually attuned to affordances in a given performance context which can increase their likelihood of success. This conceptualization implies a re-consideration of the concept of competitiveness, not as an inherited trait or entity to be acquired, but rather theorizing it as a functional performer-environment relationship that needs to be explored, developed, enhanced and maintained in team games training programs. PMID:27777565
Reducing the complexity of NASA's space communications infrastructure
NASA Technical Reports Server (NTRS)
Miller, Raymond E.; Liu, Hong; Song, Junehwa
1995-01-01
This report describes the range of activities performed during the annual reporting period in support of the NASA Code O Success Team - Lifecycle Effectiveness for Strategic Success (COST LESS) team. The overall goal of the COST LESS team is to redefine success in a constrained fiscal environment and reduce the cost of success for end-to-end mission operations. This goal is more encompassing than the original proposal made to NASA for reducing complexity of NASA's Space Communications Infrastructure. The COST LESS team approach for reengineering the space operations infrastructure has a focus on reversing the trend of engineering special solutions to similar problems.
Patterson, Richard; Operskalski, Joachim T.; Barbey, Aron K.
2015-01-01
Although motivation is a well-established field of study in its own right, and has been fruitfully studied in connection with attribution theory and belief formation under the heading of “motivated thinking,” its powerful and pervasive influence on specifically explanatory processes is less well explored. Where one has a strong motivation to understand some event correctly, one is thereby motivated to adhere as best one can to normative or “epistemic” criteria for correct or accurate explanation, even if one does not consciously formulate or apply such criteria. By contrast, many of our motivations to explain introduce bias into the processes involved in generating, evaluating, or giving explanations. Non-epistemic explanatory motivations, or following Kunda's usage, “directional” motivations, include self-justification, resolution of cognitive dissonance, deliberate deception, teaching, and many more. Some of these motivations lead to the relaxation or violation of epistemic norms; others enhance epistemic motivation, so that one engages in more careful and thorough generational and evaluative processes. We propose that “real life” explanatory processes are often constrained by multiple goals, epistemic and directional, where these goals may mutually reinforce one another or may conflict, and where our explanations emerge as a matter of weighing and satisfying those goals. We review emerging evidence from psychology and neuroscience to support this framework and to elucidate the central role of motivation in human thought and explanation. PMID:26528166
Energy-constrained two-way assisted private and quantum capacities of quantum channels
NASA Astrophysics Data System (ADS)
Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.
2018-06-01
With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement, along with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.
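The chain of bounds at the heart of this abstract can be sketched schematically as follows (the notation, including $\hat{H}$ for the input Hamiltonian, and the suppressed regularizations are assumptions here, following standard squashed-entanglement bounds; the paper itself gives the precise energy-constrained definitions):

```latex
% Schematic: the energy-constrained, LOCC-assisted quantum (Q) and private (P)
% capacities of a channel N are bounded by the energy-constrained squashed
% entanglement, maximized over inputs of mean energy at most E.
\[
  Q^{\leftrightarrow}(\mathcal{N},E) \;\le\; P^{\leftrightarrow}(\mathcal{N},E)
  \;\le\; E_{\mathrm{sq}}(\mathcal{N},E)
  \;=\; \sup_{\rho \,:\, \operatorname{Tr}[\hat{H}\rho] \le E}
        E_{\mathrm{sq}}(A;B)_{\omega},
\]
where $\omega_{AB} = (\mathrm{id}_A \otimes \mathcal{N})(\phi^{\rho}_{AA'})$ is the
channel output on a purification of $\rho$, and
$E_{\mathrm{sq}}(A;B) = \tfrac{1}{2}\,\inf_{E'} I(A;B\,|\,E')$ is the infimum of
the conditional mutual information over extensions of the state.
```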
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof of concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
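As a hypothetical illustration of the threshold search this abstract describes (minimizing the rate of positive predictions subject to a minimum sensitivity): because both quantities are monotone in the threshold, the problem reduces to a one-dimensional scan. All names and data below are illustrative and are not the authors' Super Learner implementation.

```python
def constrained_threshold(scores, labels, min_sensitivity):
    """Return the highest threshold whose sensitivity still meets the floor.

    Raising the threshold monotonically lowers both sensitivity and the
    positive-prediction rate, so the feasible threshold that minimizes
    positives is the largest one satisfying the sensitivity constraint.
    """
    n_pos = sum(labels)
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        if tp / n_pos >= min_sensitivity:
            return t  # first (highest) feasible threshold minimizes positives
    return None

# Toy example: risk scores for 8 individuals, 4 of them true seroconverters.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
t = constrained_threshold(scores, labels, min_sensitivity=0.75)
n_offered = sum(1 for s in scores if s >= t)
```

Here the scan stops at the highest cutoff catching at least 3 of the 4 seroconverters, so only 4 of the 8 individuals would be offered PrEP.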
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.
2014-01-01
AeroCom is an open international initiative of scientists interested in advancing the understanding of global aerosol properties and aerosol impacts on climate. A central goal is to more strongly tie and constrain modeling efforts to observational data. A major element of the exchange between data and modeling groups is the annual meeting. The meeting was held September 20 through October 2, 2014, and the organizers would like to post the presentations.
NASA Astrophysics Data System (ADS)
Del Genio, A. D.; Platnick, S. E.; Bennartz, R.; Klein, S. A.; Marchand, R.; Oreopoulos, L.; Pincus, R.; Wood, R.
2016-12-01
Low clouds are central to leading-order questions in climate and subseasonal weather predictability, and are key to the NRC panel report's goals "to understand the signals of the Earth system under a changing climate" and "for improved models and model projections." To achieve both goals requires a mix of continuity observations to document the components of the changing climate and improvements in retrievals of low cloud and boundary layer dynamical/thermodynamic properties to ensure process-oriented observations that constrain the parameterized physics of the models. We discuss four climate/weather objectives that depend sensitively on understanding the behavior of low clouds: 1. Reduce uncertainty in GCM-inferred climate sensitivity by 50% by constraining subtropical low cloud feedbacks. 2. Eliminate the GCM Southern Ocean shortwave flux bias and its effect on cloud feedback and the position of the midlatitude storm track. 3. Eliminate the double Intertropical Convergence Zone bias in GCMs and its potential effects on tropical precipitation over land and the simulation and prediction of El Niño. 4. Increase the subseasonal predictability of tropical warm pool precipitation from 20 to 30 days. We envision advances in three categories of observations that would be highly beneficial for reaching these goals: 1. More accurate observations will facilitate more thorough evaluation of clouds in GCMs. 2. Better observations of the links between cloud properties and the environmental state will be used as the foundation for parameterization improvements. 3. Sufficiently long and higher quality records of cloud properties and environmental state will constrain low cloud feedback purely observationally. To accomplish this, the greatest need is to replace A-Train instruments, which are nearing end-of-life, with enhanced versions. 
The requirements are sufficient horizontal and vertical resolution to capture boundary layer cloud and thermodynamic spatial structure; more accurate determination of cloud condensate profiles and optical properties; near-coincident observations to permit multi-instrument retrievals and association with dynamic and thermodynamic structure; global coverage; and, for long-term monitoring, measurement and orbit stability and sufficient mission duration.
Dynamics simulations for engineering macromolecular interactions
NASA Astrophysics Data System (ADS)
Robinson-Mosher, Avi; Shinar, Tamar; Silver, Pamela A.; Way, Jeffrey
2013-06-01
The predictable engineering of well-behaved transcriptional circuits is a central goal of synthetic biology. The artificial attachment of promoters to transcription factor genes usually results in noisy or chaotic behaviors, and such systems are unlikely to be useful in practical applications. Natural transcriptional regulation relies extensively on protein-protein interactions to ensure tightly controlled behavior, but such tight control has been elusive in engineered systems. To help engineer protein-protein interactions, we have developed a molecular dynamics simulation framework that simplifies features of proteins moving by constrained Brownian motion, with the goal of performing long simulations. The behavior of a simulated protein system is determined by summation of forces that include a Brownian force, a drag force, excluded volume constraints, relative position constraints, and binding constraints that relate to experimentally determined on-rates and off-rates for chosen protein elements in a system. Proteins are abstracted as spheres. Binding surfaces are defined radially within a protein. Peptide linkers are abstracted as small protein-like spheres with rigid connections. To address whether our framework could generate useful predictions, we simulated the behavior of an engineered fusion protein consisting of two 20 000 Da proteins attached by flexible glycine/serine-type linkers. The two protein elements remained closely associated, as if constrained by a random walk in three dimensions of the peptide linker, as opposed to showing a distribution of distances expected if movement were dominated by Brownian motion of the protein domains only. We also simulated the behavior of fluorescent proteins tethered by a linker of varying length, compared the predicted Förster resonance energy transfer with previous experimental observations, and obtained a good correspondence.
Finally, we simulated the binding behavior of a fusion of two ligands that could simultaneously bind to distinct cell-surface receptors, and explored the landscape of linker lengths and stiffnesses that could enhance receptor binding of one ligand when the other ligand has already bound to its receptor, thus, addressing potential mechanisms for improving targeted signal transduction proteins. These specific results have implications for the design of targeted fusion proteins and artificial transcription factors involving fusion of natural domains. More broadly, the simulation framework described here could be extended to include more detailed system features such as non-spherical protein shapes and electrostatics, without requiring detailed, computationally expensive specifications. This framework should be useful in predicting behavior of engineered protein systems including binding and dissociation reactions.
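A minimal, hypothetical sketch of the overdamped ("constrained Brownian") dynamics summed from the forces described above: two spherical proteins joined by a flexible linker, modeled here in one dimension as a harmonic tether with a drag coefficient and a random Brownian force. All parameter values are illustrative, not taken from the paper.

```python
import math
import random

def step(pos, dt, gamma, kT, k_tether, rest_len, rng):
    """Advance two tethered spheres one Euler-Maruyama step (1-D, overdamped)."""
    x1, x2 = pos
    d = x2 - x1
    dist = abs(d) or 1e-12
    # Harmonic tether force pulls the spheres toward the linker rest length.
    f = k_tether * (dist - rest_len) * (d / dist)
    # Brownian displacement scale from the fluctuation-dissipation relation.
    noise = math.sqrt(2.0 * kT / gamma * dt)
    x1_new = x1 + (f / gamma) * dt + noise * rng.gauss(0.0, 1.0)
    x2_new = x2 + (-f / gamma) * dt + noise * rng.gauss(0.0, 1.0)
    return (x1_new, x2_new)

rng = random.Random(0)
pos = (0.0, 10.0)  # initial positions (arbitrary length units)
for _ in range(1000):
    pos = step(pos, dt=1e-3, gamma=1.0, kT=1.0,
               k_tether=50.0, rest_len=5.0, rng=rng)
separation = abs(pos[1] - pos[0])
```

After relaxation the separation fluctuates near the rest length, mimicking the paper's observation that tethered domains remain closely associated rather than diffusing apart freely.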
Ding, A Adam; Wu, Hulin
2014-10-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
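For context, a minimal sketch of the baseline two-stage pseudo-least-squares idea that the proposed constrained method improves upon: stage one smooths the data and estimates derivatives, stage two plugs them into the ODE and solves for the parameter by least squares. The exponential-decay model and all values here are hypothetical, and central differences stand in for the local polynomial smoother.

```python
import math

# Simulated noiseless observations of x' = -theta * x with theta = 0.5.
theta_true = 0.5
ts = [i * 0.1 for i in range(50)]
xs = [math.exp(-theta_true * t) for t in ts]

# Stage 1: estimate derivatives at interior points (central differences as a
# stand-in for a local polynomial smoother).
dx = [(xs[i + 1] - xs[i - 1]) / (ts[i + 1] - ts[i - 1])
      for i in range(1, len(ts) - 1)]
x_mid = xs[1:-1]

# Stage 2: least-squares fit of theta in x' = -theta * x:
# minimize sum_i (dx_i + theta * x_i)^2  =>  theta = -sum(dx*x) / sum(x^2).
theta_hat = -sum(d * x for d, x in zip(dx, x_mid)) / sum(x * x for x in x_mid)
```

With noisy data the stage-one smoothing error propagates into theta_hat, which is the weakness the equation-constrained local polynomial regression is designed to reduce.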
Saturn PRobe Interior and aTmosphere Explorer (SPRITE)
NASA Technical Reports Server (NTRS)
Simon, Amy; Banfield, D.; Atkinson, D.; Atreya, S.; Brinckerhoff, W.; Colaprete, A.; Coustenis, A.; Fletcher, L.; Guillot, T.; Hofstadter, M.;
2016-01-01
The Vision and Voyages Planetary Decadal Survey identified a Saturn Probe mission as one of the high priority New Frontiers mission targets[1]. Many aspects of the Saturn system will not have been fully investigated at the end of the Cassini mission, because of limitations in its implementation and science instrumentation. Fundamental measurements of the interior structure and noble gas abundances of Saturn are needed to better constrain models of Solar System formation, as well as to provide an improved context for exoplanet systems. The SPRITE mission will fulfill the scientific goals of the Decadal Survey Saturn probe mission. It will also provide ground truth for quantities constrained by Cassini and conduct new investigations that improve our understanding of Saturn's interior structure and composition, and by proxy, those of extrasolar giant planets.
NASA Astrophysics Data System (ADS)
Nerney, E. G.; Bagenal, F.; Yoshioka, K.; Schmidt, C.
2017-12-01
Io emits volcanic gases into space at a rate of about a ton per second. The gases become ionized and trapped in Jupiter's strong magnetic field, forming a torus of plasma that emits 2 terawatts of UV emissions. In recent work re-analyzing UV emissions observed by Voyager, Galileo, and Cassini, we found plasma conditions consistent with a physical chemistry model with a neutral source of dissociated sulfur dioxide from Io (Nerney et al., 2017). In further analysis of UV observations from JAXA's Hisaki mission (using our spectral emission model), we constrain the torus composition with ground-based observations. The physical chemistry model (adapted from Delamere et al., 2005) is then used to match derived plasma conditions. We correlate the oxygen-to-sulfur ratio of the neutral source with volcanic eruptions to understand the change in magnetospheric plasma conditions. Our goal is to better understand and constrain both the temporal and spatial variability of the flow of mass and energy from Io's volcanic atmosphere to Jupiter's dynamic magnetosphere.
NASA Astrophysics Data System (ADS)
Frith, J.; Barker, E.; Cowardin, H.; Buckalew, B.; Anz-Meador, P.; Lederer, S.
The National Aeronautics and Space Administration (NASA) Orbital Debris Program Office (ODPO) recently commissioned the Meter Class Autonomous Telescope (MCAT) on Ascension Island with the primary goal of obtaining population statistics of the geosynchronous (GEO) orbital debris environment. To help facilitate this, studies have been conducted using MCAT's known and projected capabilities to estimate the accuracy and timeliness with which it can survey the GEO environment, including collected weather data and the proposed observational data collection cadence. To optimize observing cadences and the probability of detection, ongoing work runs a simulated GEO debris population, sampled at various cadences, through the Constrained Admissible Region Multi Hypotheses Filter (CAR-MHF). The orbits computed from the results are then compared to the simulated data to assess MCAT's ability to accurately determine the orbits of debris at various sample rates. The goal of this work is to discriminate GEO and near-GEO objects from GEO transfer orbit objects, which can appear as GEO objects in the environmental models due to short-arc observations and an assumed circular orbit. The specific methods and results are presented here.
Spitzer IRS Observations of Low-Mass Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Thornton, Carol E.; Barth, A. J.; Greene, J. E.; Ho, L. C.
2009-05-01
The Sloan Digital Sky Survey has made it possible to identify the first samples of active galaxies with estimated black hole masses below 10^6 solar masses. We have obtained Spitzer IRS low-resolution spectra, covering 5-30 microns, of a sample of 41 Seyfert galaxies with low-mass black holes. Our sample includes SDSS-selected objects from the low-mass Seyfert 1 sample of Greene & Ho (2004) and the low-mass Seyfert 2 sample of Barth et al. (2008), as well as NGC 4395 and POX 52. The goals of this work are to examine the dust emission properties of these objects and investigate the relationship between Type 1 and Type 2 AGNs at low luminosities and low masses, to search for evidence of star formation, and to use emission-line diagnostics to constrain physical conditions within the narrow-line regions. We will present preliminary results from this project, including measurements of continuum shapes and dust temperatures, narrow-line region diagnostics, and PAH features, derived using the IDL code PAHFIT (Smith et al. 2007).
Evolving the Technical Infrastructure of the Planetary Data System for the 21st Century
NASA Technical Reports Server (NTRS)
Beebe, Reta F.; Crichton, D.; Hughes, S.; Grayzeck, E.
2010-01-01
The Planetary Data System (PDS) was established in 1989 as a distributed system to assure scientific oversight. Initially the PDS followed guidelines recommended by the National Academies Committee on Data Management and Computation (CODMAC, 1982) and placed emphasis on archiving validated datasets. But over time, user demands, supported by increased computing capabilities and communication methods, have placed increasing pressure on the PDS. The PDS must add services to better enable scientific analysis within distributed environments and to ensure that those services integrate with existing systems and data. To face these challenges the Planetary Data System (PDS) must modernize its architecture and technical implementation. The PDS 2010 project addresses these challenges. As part of this project, the PDS has three fundamental project goals: (1) providing more efficient delivery of data by data providers to the PDS; (2) enabling a stable, long-term usable planetary science data archive; (3) enabling services for the data consumer to find, access and use the data they require in contemporary data formats. In order to achieve these goals, the PDS 2010 project is upgrading both the technical infrastructure and the data standards to support increased efficiency in data delivery as well as usability of the PDS. Efforts are underway to interface with missions as early as possible and to streamline the preparation and delivery of data to the PDS. Likewise, the PDS is working to define and plan for data services that will help researchers to perform analysis in cost-constrained environments. This presentation will cover the PDS 2010 project including the goals, data standards and technical implementation plans that are underway within the Planetary Data System. It will discuss the plans for moving from the current system, version PDS 3, to version PDS 4.
Health promotion and the First Amendment: government control of the informational environment.
Gostin, L O; Javitt, G H
2001-01-01
Government efforts to protect public health often include controlling health information. The government may proscribe messages conveyed by commercial entities (e.g., false or misleading), recommend messages from commercial entities (e.g., warnings and safety instructions), and convey health messages (e.g., health communication campaigns). Through well-developed, albeit evolving, case law, government control of private speech has been constrained to avoid impinging on such values as free expression, truthfulness, and autonomous decision making. No simple legal framework has been developed for the government's own health messages to mediate between the legitimate goals of health protection and these other values. Nevertheless, government recommendations on matters of health raise difficult social and ethical questions and involve important societal trade-offs. Accordingly, this article proposes legal and ethical principles relating to government control of the health information environment.
Aircraft Optimization for Minimum Environmental Impact
NASA Technical Reports Server (NTRS)
Antoine, Nicolas; Kroo, Ilan M.
2001-01-01
The objective of this research is to investigate the tradeoff between operating cost and environmental acceptability of commercial aircraft. This involves optimizing the aircraft design and mission to minimize operating cost while constraining exterior noise and emissions. Growth in air traffic and in communities neighboring airports has resulted in increased pressure to severely penalize airlines that do not meet strict local noise and emissions requirements. As a result, environmental concerns have become potent driving forces in commercial aviation. Traditionally, aircraft have first been designed to meet performance and cost goals, and then adjusted to satisfy the environmental requirements at given airports. The focus of the present study is to determine the feasibility of including noise and emissions constraints in the early design of the aircraft and mission. This paper introduces the design tool and results from a case study involving a 250-passenger airliner.
Debris and shrapnel assessments for National Ignition Facility targets and diagnostics
NASA Astrophysics Data System (ADS)
Masters, N. D.; Fisher, A.; Kalantar, D.; Stölken, J.; Smith, C.; Vignes, R.; Burns, S.; Doeppner, T.; Kritcher, A.; Park, H.-S.
2016-05-01
High-energy laser experiments at the National Ignition Facility (NIF) can create debris and shrapnel capable of damaging laser optics and diagnostic instruments. The size, composition and location of target components and sacrificial shielding (e.g., disposable debris shields, or diagnostic filters) and the protection they provide is constrained by many factors, including: chamber and diagnostic geometries, experimental goals and material considerations. An assessment of the generation, nature and velocity of shrapnel and debris and their potential threats is necessary prior to fielding targets or diagnostics. These assessments may influence target and shielding design, filter configurations and diagnostic selection. This paper will outline the approach used to manage the debris and shrapnel risk associated with NIF targets and diagnostics and present some aspects of two such cases: the Material Strength Rayleigh-Taylor campaign and the Mono Angle Crystal Spectrometer (MACS).
Use of Lantibiotic Synthetases for the Preparation of Bioactive Constrained Peptides
Levengood, Matthew R.
2008-01-01
Stabilization of biologically active peptides is a major goal in peptide-based drug design. Cyclization is an often-used strategy to enhance resistance of peptides towards protease degradation and simultaneously improve their affinity for targets by restricting their conformational flexibility. Amongst the various cyclization strategies, the use of thioether crosslinks has been successful for various peptides including enkephalin. The synthesis of these thioethers can be arduous, especially for longer peptides. Described herein is an enzymatic strategy taking advantage of the lantibiotic synthetase LctM that dehydrates Ser and Thr residues to the corresponding dehydroalanine and dehydrobutyrine residues and catalyzes the Michael-type addition of Cys residues to form thioether crosslinks. The use of LctM to prepare thioether containing analogs of enkephalin, contryphan, and inhibitors of human tripeptidyl peptidase II and spider venom epimerase is demonstrated. PMID:18294843
NASA Technical Reports Server (NTRS)
Mutambara, Arthur G. O.; Litt, Jonathan
1998-01-01
This report addresses the problem of path planning and control of robotic manipulators which have joint-position limits and joint-rate limits. The manipulators move autonomously and carry out variable tasks in a dynamic, unstructured and cluttered environment. The issue considered is whether the robotic manipulator can achieve all its tasks, and if it cannot, the objective is to identify the closest achievable goal. This problem is formalized and systematically solved for generic manipulators by using inverse kinematics and forward kinematics. Inverse kinematics are employed to define the subspace, workspace and constrained workspace, which are then used to identify when a task is not achievable. The closest achievable goal is obtained by determining weights for an optimal control redistribution scheme. These weights are quantified by using forward kinematics. Conditions leading to joint-rate limits are identified; in particular, it is established that all generic manipulators have singularities at the boundary of their workspace, while some have loci of singularities inside their workspace. Once the manipulator singularity is identified, the command redistribution scheme is used to compute the closest achievable Cartesian velocities. Two examples are used to illustrate the use of the algorithm: a three-link planar manipulator and the Unimation Puma 560. Implementation of the derived algorithm is effected by using a supervisory expert system to check whether the desired goal lies in the constrained workspace and, if not, to invoke the redistribution scheme, which determines the constraint relaxation between end-effector position and orientation and then computes optimal gains.
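A highly simplified sketch of the workspace test and "closest achievable goal" idea for a planar two-link arm (the link lengths are illustrative and this is not the report's generic formulation): a Cartesian goal is reachable only if it lies in the annulus |l1 - l2| <= r <= l1 + l2, and an unreachable goal can be projected radially onto the workspace boundary.

```python
import math

def closest_achievable(goal, l1, l2):
    """Return the goal if reachable by a planar 2-link arm, else its radial
    projection onto the boundary of the reachable annulus."""
    x, y = goal
    r = math.hypot(x, y)
    r_min, r_max = abs(l1 - l2), l1 + l2
    if r_min <= r <= r_max:
        return goal                      # goal lies inside the workspace
    r_clamped = min(max(r, r_min), r_max)
    scale = r_clamped / r if r > 0 else 0.0
    return (x * scale, y * scale)        # projected onto the boundary

# A goal at radius 5 is beyond the reach of a (1.0, 0.5) arm, whose maximum
# reach is 1.5, so it is pulled back along the same direction.
g = closest_achievable((3.0, 4.0), l1=1.0, l2=0.5)
```

The report's scheme additionally weighs position against orientation when relaxing the goal; this sketch only captures the position part.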
Depletion mapping and constrained optimization to support managing groundwater extraction
Fienen, Michael N.; Bradbury, Kenneth R.; Kniffin, Maribeth; Barlow, Paul M.
2018-01-01
Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion potential mapping—showing the model-calculated potential impacts that wells have on stream baseflow—can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. Variables such as the maximum amount of reduction allowed in each well and various groupings of wells using, for example, K-means clustering considering spatial proximity and depletion potential are considered. These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States—home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices.
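A hypothetical sketch of one ingredient mentioned above, grouping wells by spatial proximity and depletion potential with K-means. The well coordinates and depletion values are invented; a real application would use model-derived depletion potentials and would typically standardize the features before clustering.

```python
def kmeans(points, k, iters=20):
    """Tiny Lloyd's algorithm; initial centers are the first k points."""
    centers = [points[i] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each well to the nearest center in feature space.
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            groups[j].append(p)
        # Recompute each center as the mean of its group.
        centers = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g
                   else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Each well: (x km, y km, depletion potential in [0, 1]).
wells = [(0.1, 0.2, 0.9), (0.3, 0.1, 0.8), (5.0, 5.2, 0.2),
         (5.1, 4.9, 0.3), (0.2, 0.4, 0.85), (5.3, 5.0, 0.25)]
centers, groups = kmeans(wells, k=2)
sizes = sorted(len(g) for g in groups)
```

Here the high-depletion wells near the stream and the distant low-depletion wells separate into two groups, which could then receive different pumping-reduction rules.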
ERIC Educational Resources Information Center
Gagnon, Amélie; Legault, Elise
2015-01-01
Ensuring that every child gets a teacher is a prerequisite to reaching the Education for All goals. Today, 58 million children are still not in school, and while a variety of factors constrain efforts to provide quality primary education for all children, ensuring that classrooms have enough teachers is at the top of the list. Since 2006, the…
ERIC Educational Resources Information Center
Hutchison, Amy C.; Woodward, Lindsay
2014-01-01
The Common Core State Standards produce a need to understand how digital tools can support literacy instruction. The purpose of this case study was to explore how a language arts teacher's integration of computers and iPads empowered and constrained her and the resulting classroom instruction. Constraining factors included (a) inadequate…
NASA Technical Reports Server (NTRS)
Spangelo, Sara
2015-01-01
The goal of this paper is to explore the mission opportunities that are uniquely enabled by U-class Solar Electric Propulsion (SEP) technologies. Small SEP thrusters offer significant advantages relative to existing technologies and will revolutionize the class of mission architectures that small spacecraft can accomplish by enabling trajectory maneuvers with significant change-in-velocity requirements and reaction wheel-free attitude control. This paper aims to develop and apply a common system-level modeling framework to evaluate these thrusters for relevant upcoming mission scenarios, taking into account the mass, power, volume, and operational constraints of small highly-constrained missions. We will identify the optimal technology for broad classes of mission applications for different U-class spacecraft sizes and provide insights into what constrains the system performance to identify technology areas where improvements are needed.
Constraining the evolution of the Hubble Parameter using cosmic chronometers
NASA Astrophysics Data System (ADS)
Dickinson, Hugh
2017-08-01
Substantial investment is being made in space- and ground-based missions with the goal of revealing the nature of the observed cosmic acceleration. This is one of the most important unsolved problems in cosmology today. We propose here to constrain the evolution of the Hubble parameter [H(z)] between 1.3 < z < 2, using the cosmic chronometer method, based on differential age measurements for passively evolving galaxies. Existing WFC3-IR G102 and G141 grism data obtained by the WISP, 3D-HST+AGHAST, FIGS, and CLEAR surveys will yield a sample of 140 suitable standard clocks, expanding existing samples by a factor of five. These additional data will enable us to improve existing constraints on the evolution of H at high redshift, and in so doing to better understand the fundamental nature of dark energy.
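The cosmic chronometer method rests on H(z) = -(1/(1+z)) dz/dt, so a differential age between passive galaxies at two nearby redshifts yields H directly. A minimal sketch, with illustrative ages rather than survey data:

```python
# Cosmic-chronometer estimate of H(z): H(z) = -(1/(1+z)) * dz/dt,
# approximated by a finite difference of stellar-population ages at
# two neighboring redshifts. The two age values are invented examples.
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

def hubble_from_chronometers(z1, age1_gyr, z2, age2_gyr):
    """Finite-difference H at the midpoint redshift, in km/s/Mpc."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / ((age2_gyr - age1_gyr) * SEC_PER_GYR)  # 1/s (negative)
    h_per_s = -dz_dt / (1.0 + z_mid)   # H in 1/s
    return h_per_s * KM_PER_MPC        # convert to km/s/Mpc

# Older stellar population at the lower redshift, as the method requires:
H = hubble_from_chronometers(1.3, 4.62, 1.5, 4.06)  # ~145 km/s/Mpc
```

Because only the age *difference* enters, systematic offsets in absolute age calibration largely cancel, which is the method's main appeal.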
Dynamical aspects of behavior generation under constraints
Harter, Derek; Achunala, Srinivas
2007-01-01
Dynamic adaptation is a key feature of brains helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally, we speculate on mechanisms by which chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514
Model selection as a science driver for dark energy surveys
NASA Astrophysics Data System (ADS)
Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin
2006-07-01
A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
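Forecasted evidences feed a Bayes factor B01 = Z0/Z1, conventionally read off a Jeffreys-type scale. A sketch with hypothetical log-evidences; the |ln B| thresholds of 1, 2.5, and 5 follow a common convention, and the numbers are invented:

```python
# Interpreting a log Bayes factor ln(B01) = ln(Z0) - ln(Z1) between
# model 0 (e.g., LambdaCDM, w = -1) and model 1 (evolving dark energy).
def interpret_ln_bayes(ln_b01):
    a = abs(ln_b01)
    if a < 1.0:
        strength = "inconclusive"
    elif a < 2.5:
        strength = "significant"
    elif a < 5.0:
        strength = "strong"
    else:
        strength = "decisive"
    favored = "model 0" if ln_b01 > 0 else "model 1"
    return favored, strength

# Hypothetical log-evidences from two forecast runs:
ln_z0, ln_z1 = -10.2, -13.9
ln_b01 = ln_z0 - ln_z1                  # = 3.7
print(interpret_ln_bayes(ln_b01))       # -> ('model 0', 'strong')
```

The point of forecasting B01 rather than parameter errors is that it answers the survey's actual question: would the data justify adding dark-energy evolution parameters at all?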
Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D
2013-09-17
Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.
Characterizing Mega-Earthquake Related Tsunami on Subduction Zones without Large Historical Events
NASA Astrophysics Data System (ADS)
Williams, C. R.; Lee, R.; Astill, S.; Farahani, R.; Wilson, P. S.; Mohammed, F.
2014-12-01
Due to recent large tsunami events (e.g., Chile 2010 and Japan 2011), the insurance industry is very aware of the importance of managing its exposure to tsunami risk. There are currently few tools available to help establish policies for managing and pricing tsunami risk globally. As a starting point and to help address this issue, Risk Management Solutions Inc. (RMS) is developing a global suite of tsunami inundation footprints. This dataset will include both representations of historical events and a series of M9 scenarios on subduction zones that have not historically generated mega-earthquakes. The latter set is included to address concerns about the completeness of the historical record for mega-earthquakes. This concern stems from the fact that the Tohoku, Japan earthquake was considerably larger than had been observed in the historical record. Characterizing the source and rupture pattern for subduction zones without historical events is a poorly constrained process. In many cases, the subduction zones can be segmented based on changes in the characteristics of the subducting slab or major ridge systems. For this project, the unit sources from the NOAA propagation database are utilized to leverage the basin-wide modeling included in this dataset. The length of the rupture is characterized based on subduction zone segmentation, and the slip per unit source can be determined based on the event magnitude (i.e., M9) and moment balancing. As these events have not occurred historically, there is little to constrain the slip distribution. Sensitivity tests on the potential rupture pattern have been undertaken comparing uniform slip to higher shallow slip and tapered slip models. Subduction zones examined include the Makran Trench, the Lesser Antilles and the Hikurangi Trench. The ultimate goal is to create a series of tsunami footprints to help insurers understand their exposures at risk to tsunami inundation around the world.
Constrained Laboratory vs. Unconstrained Steering-Induced Rollover Crash Tests.
Kerrigan, Jason R; Toczyski, Jacek; Roberts, Carolyn; Zhang, Qi; Clauser, Mark
2015-01-01
The goal of this study was to evaluate how well an in-laboratory rollover crash test methodology that constrains vehicle motion can reproduce the dynamics of unconstrained full-scale steering-induced rollover crash tests in sand. Data from previously-published unconstrained steering-induced rollover crash tests using a full-size pickup and mid-sized sedan were analyzed to determine vehicle-to-ground impact conditions and kinematic response of the vehicles throughout the tests. Then, a pair of replicate vehicles were prepared to match the inertial properties of the steering-induced test vehicles and configured to record dynamic roof structure deformations and kinematic response. Both vehicles experienced greater increases in roll-axis angular velocities in the unconstrained tests than in the constrained tests; however, the increases that occurred during the trailing side roof interaction were nearly identical between tests for both vehicles. Both vehicles experienced linear accelerations in the constrained tests that were similar to those in the unconstrained tests, but the pickup, in particular, had accelerations that were matched in magnitude, timing, and duration very closely between the two test types. Deformations in the truck test were higher in the constrained test than in the unconstrained test, while deformations in the sedan were greater in the unconstrained test than in the constrained test as a result of constraints of the test fixture and differences in impact velocity for the trailing side. The results of the current study suggest that in-laboratory rollover tests can be used to simulate the injury-causing portions of unconstrained rollover crashes. To date, such a demonstration has not yet been published in the open literature. This study did, however, show that road surface can affect vehicle response in a way that may not be able to be mimicked in the laboratory.
Lastly, this study showed that configuring the in-laboratory tests to match the leading-side touchdown conditions could result in differences in the trailing side impact conditions.
2007-03-01
potential of moving closer to the goal of a fully service-oriented GIG by allowing even computing- and bandwidth-constrained elements to participate...the functionality provided by core network assets with relatively unlimited bandwidth and computing resources. Finally, the nature of information is...the Department of Defense is a requirement for ubiquitous computer connectivity. An espoused vehicle for delivering that ubiquity is the Global
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper mainly studies the optimization of feeder connection times between rail transit and conventional bus service at a shopping center. To connect with rail transit effectively and optimize the coordination between the two modes, the departure intervals are optimized, passenger transfer times are shortened, and the service level of public transit is improved. Taking the minimization of passengers' total waiting time and of the number of bus departures as the objective, an optimization model for feeder bus departure times is established. The model includes constraints such as transfer time, load factor, and the stop spacing of the connecting public transportation network. The problem is solved using a genetic algorithm.
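A genetic algorithm of the kind applied here can be illustrated on a deliberately simplified version of the problem: choose a single feeder-bus headway h that trades passenger waiting time (which grows with h) against operating cost (which grows with service frequency, i.e. with 1/h). The one-variable objective, weights, and bounds are invented for the sketch; the paper's model is richer, with transfer-time and load-factor constraints.

```python
import random

# Toy genetic algorithm for one feeder-bus headway h (minutes).
# Objective weights and headway bounds are illustrative assumptions.
H_MIN, H_MAX = 4.0, 15.0
WAIT_WEIGHT = 20.0   # cost per minute of average passenger wait (~h/2)
OP_COST = 1200.0     # operating cost scaled by service frequency (~1/h)

def cost(h):
    return WAIT_WEIGHT * h + OP_COST / h   # minimized at h = sqrt(60) ~ 7.75

def evolve(pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(H_MIN, H_MAX) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)              # blend crossover
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.3)  # gaussian mutation
            children.append(min(max(child, H_MIN), H_MAX))  # enforce bounds
        pop = survivors + children
    return min(pop, key=cost)

best_headway = evolve()
```

The same machinery (encode a timetable, score it against waiting time and constraints, select and recombine) carries over to the full multi-variable departure-time model.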
Non-solenoidal startup and low-β operations in Pegasus
NASA Astrophysics Data System (ADS)
Schlossberg, D. J.; Battaglia, D. J.; Bongard, M. W.; Fonck, R. J.; Redd, A. J.
2009-11-01
Non-solenoidal startup using point-source DC helicity injectors (plasma guns) has been achieved in the Pegasus Toroidal Experiment for plasmas with Ip in excess of 100 kA using Iinj < 4 kA. The maximum achieved Ip tentatively scales as √(ITF·Iinj/w), where w is the radial thickness of the gun-driven edge. The Ip limits appear to conform to a simple stationary model involving helicity conservation and Taylor relaxation. However, observed MHD activity reveals the additional dynamics of the relaxation process, evidenced by intermittent bursts of n=1 activity correlated with rapid redistribution of the current channel. Recent upgrades to the gun system provide higher helicity injection rates, smaller w, a more constrained gun current path, and more precise diagnostics. Experimental goals include extending parametric scaling studies, determining the conditions where parallel conduction losses dominate the helicity dissipation, and building the physics understanding of helicity injection to confidently design gun systems for larger, future tokamaks.
Searching for X-ray Pulsations from Neutron Stars Using NICER
NASA Astrophysics Data System (ADS)
Ray, Paul S.; Arzoumanian, Zaven; Bogdanov, Slavko; Bult, Peter; Chakrabarty, Deepto; Guillot, Sebastien; Kust Harding, Alice; Ho, Wynn C. G.; Lamb, Frederick K.; Mahmoodifar, Simin; Miller, M. Coleman; Strohmayer, Tod E.; Wilson-Hodge, Colleen A.; Wolff, Michael Thomas
2017-08-01
The Neutron Star Interior Composition Explorer (NICER) presents an exciting new capability for discovering new modulation properties of X-ray emitting neutron stars, including large area, low background, extremely precise absolute time stamps, superb low-energy response and flexible scheduling. The Pulsation Searches and Multiwavelength Coordination working group has designed a 2.5 Ms observing program to search for pulsations and characterize the modulation properties of about 30 known or suspected neutron star sources across a number of source categories. A key early goal will be to search for pulsations from millisecond pulsars that might exhibit thermal pulsations from the surface suitable for pulse profile modeling to constrain the neutron star equation of state. In addition, we will search for pulsations from transitional millisecond pulsars, isolated neutron stars, LMXBs, accretion-powered millisecond pulsars, central compact objects and other sources. We will present our science plan and initial results from the first months of the NICER mission.
Searching for X-ray Pulsations from Neutron Stars Using NICER
NASA Astrophysics Data System (ADS)
Ray, Paul S.; Arzoumanian, Zaven; Gendreau, Keith C.; Bogdanov, Slavko; Bult, Peter; Chakrabarty, Deepto; Chakrabarty, Deepto; Guillot, Sebastien; Harding, Alice; Ho, Wynn C. G.; Lamb, Frederick; Mahmoodifar, Simin; Miller, Cole; Strohmayer, Tod; Wilson-Hodge, Colleen; Wolff, Michael T.; NICER Science Team Working Group on Pulsation Searches and Multiwavelength Coordination
2018-01-01
The Neutron Star Interior Composition Explorer (NICER) presents an exciting new capability for discovering new modulation properties of X-ray emitting neutron stars, including large area, low background, extremely precise absolute time stamps, superb low-energy response and flexible scheduling. The Pulsation Searches and Multiwavelength Coordination working group has designed a 2.5 Ms observing program to search for pulsations and characterize the modulation properties of about 30 known or suspected neutron star sources across a number of source categories. A key early goal will be to search for pulsations from millisecond pulsars that might exhibit thermal pulsations from the surface suitable for pulse profile modeling to constrain the neutron star equation of state. In addition, we will search for pulsations from transitional millisecond pulsars, isolated neutron stars, LMXBs, accretion-powered millisecond pulsars, central compact objects and other sources. We present our science plan and initial results from the first months of the NICER mission.
Automated recognition and extraction of tabular fields for the indexing of census records
NASA Astrophysics Data System (ADS)
Clawson, Robert; Bauer, Kevin; Chidester, Glen; Pohontsch, Milan; Kennard, Douglas; Ryu, Jongha; Barrett, William
2013-01-01
We describe a system for indexing of census records in tabular documents with the goal of recognizing the content of each cell, including both headers and handwritten entries. Each document is automatically rectified, registered and scaled to a known template following which lines and fields are detected and delimited as cells in a tabular form. Whole-word or whole-phrase recognition of noisy machine-printed text is performed using a glyph library, providing greatly increased efficiency and accuracy (approaching 100%), while avoiding the problems inherent in traditional OCR approaches. Constrained handwriting recognition results for a single author reach as high as 98% and 94.5% for the Gender field and Birthplace respectively. Multi-author accuracy (currently 82%) can be improved through an increased training set. Active integration of user feedback in the system will accelerate the indexing of records while providing a tightly coupled learning mechanism for system improvement.
Spitzer IRS Observations of Low-Mass Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Thornton, Carol E.; Barth, Aaron J.; Ho, Luis C.; Greene, Jenny E.
2010-05-01
The Sloan Digital Sky Survey has made it possible to identify the first samples of active galaxies with estimated black hole masses below ~10^6 M⊙. We have obtained Spitzer IRS low-resolution spectra, covering 5-38 μm, of a sample of 41 Seyfert galaxies with low-mass black holes. Our sample includes SDSS-selected objects from the low-mass Seyfert 1 sample of Greene & Ho (2004) and the low-mass Seyfert 2 sample of Barth et al. (2008), as well as NGC 4395 and POX 52. The goals of this work are to examine the dust emission properties of these objects and investigate the relationship between type 1 and type 2 AGNs at low luminosities and low masses, to search for evidence of star formation, and to use emission-line diagnostics to constrain physical conditions within the narrow-line regions. Here we present preliminary results from this project.
Electrode assemblies, plasma generating apparatuses, and methods for generating plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Peter C.; Grandy, Jon D.; Detering, Brent A.
Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.
Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data
Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam
2016-01-01
SUMMARY We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
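The constrained nonnegative matrix factorization at the core of this framework can be illustrated with plain multiplicative updates on a toy "movie": fluorescence Y (pixels × time) is approximated by A·C with A, C ≥ 0, where A holds spatial footprints and C holds calcium traces. This sketch omits the paper's deconvolution, sparsity, and background terms; the data and rank are invented.

```python
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def nmf(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates: V ~ A @ C with A, C >= 0."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    A = [[rng.random() for _ in range(rank)] for _ in range(n)]   # footprints
    C = [[rng.random() for _ in range(m)] for _ in range(rank)]   # traces
    eps = 1e-9
    for _ in range(iters):
        # C <- C * (A^T V) / (A^T A C): nonnegativity preserved by construction.
        At = transpose(A)
        num, den = matmul(At, V), matmul(At, matmul(A, C))
        C = [[C[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        # A <- A * (V C^T) / (A C C^T)
        Ct = transpose(C)
        num, den = matmul(V, Ct), matmul(matmul(A, C), Ct)
        A = [[A[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return A, C

# Two non-overlapping "neurons": pixels 0-1 fire at times 0-1, pixels 2-3 at 2-3.
V = [[2, 2, 0, 0], [2, 2, 0, 0], [0, 0, 3, 3], [0, 0, 3, 3]]
A, C = nmf(V, rank=2)
```

The multiplicative form is what enforces the nonnegativity constraint without any explicit projection step; the full method layers temporal deconvolution on top of the recovered traces.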
NASA Astrophysics Data System (ADS)
Margutti, Raffaella
2015-09-01
Mass loss in massive stars is one of the least understood yet fundamental aspects of stellar evolution. HOW and WHEN do massive stars lose their H-envelopes? This central question motivates this proposal. We request a modest investment of Chandra time over 3 years to map the unique situation of the interaction of the H-stripped SN 2014C with a H-rich shell ejected by its progenitor star, as part of our extensive radio-to-gamma-ray follow-up. Our goal is to constrain the density profile and proximity of the ejected material, and hence the mass-loss history of the progenitor star. Unlike all other H-stripped SNe, the radio and X-ray emission of SN 2014C is still increasing at 400 days, giving us the unprecedented opportunity to constrain the epoch of ejection of H-rich material in fine detail.
NASA Astrophysics Data System (ADS)
Henry, Richard B. C.; Balick, Bruce; Dufour, Reginald J.; Kwitter, Karen B.; Shaw, Richard A.; Corradi, Romano
2015-01-01
We present detailed photoionization models of eight Galactic planetary nebulae (IC2165, IC3568, NGC2440, NGC3242, NGC5315, NGC5882, NGC7662, & PB6) based on recently obtained HST STIS spectra. Our interim goal is to infer Teff, luminosity, and current and progenitor masses for each central star, while the ultimate goal is to constrain published stellar evolution models which predict nebular CNO abundances. The models were produced by using the code CLOUDY to match closely the measured line strengths derived from high-quality HST STIS spectra (see poster by Dufour et al., this session) extending in wavelength from 1150-10270 Angstroms. The models assumed a blackbody SED. Variable input parameters included Teff, a radially constant nebular density, a filling factor, and elemental abundances. For the eight PNs we found a birth mass range of 1.5-2.9 Msun, a range in log(L/Lsun) of 3.10-3.88, and a Teff range of 51,000-150,000 K. Finally, we compare CNO abundances of the eight successful models with PN abundances of these same elements that are predicted by published stellar evolution models. We gratefully acknowledge generous support from NASA through grants related to the Cycle 19 program GO12600.
Observing Solar Radio Bursts from the Lunar Surface
NASA Technical Reports Server (NTRS)
MacDowall, R. J.; Lazio, T. J.; Bale, S. D.; Burns, J.; Gopalswamy, N.; Jones, D. L.; Kaiser, M. L.; Kasper, J.; Weiler, K. W.
2010-01-01
Locating low frequency radio observatories on the lunar surface has a number of advantages. Here, we describe the Radio Observatory for Lunar Sortie Science (ROLSS), a concept for a low frequency, radio imaging interferometric array designed to study particle acceleration in the corona and inner heliosphere. ROLSS would be deployed during an early lunar sortie or by a robotic rover as part of an unmanned landing. The prime science mission is to image type II and type III solar radio bursts with the aim of determining the sites at and mechanisms by which the radiating particles are accelerated. Secondary science goals include constraining the density of the lunar ionosphere by searching for a low radio frequency cutoff of the solar radio emissions and constraining the low energy electron population in astrophysical sources. Furthermore, ROLSS serves a pathfinder function for larger lunar radio arrays. Key design requirements on ROLSS include the operational frequency and angular resolution. The electron densities in the solar corona and inner heliosphere are such that the relevant emission occurs below 10 MHz, essentially unobservable from Earth's surface due to the terrestrial ionospheric cutoff. Resolving the potential sites of particle acceleration requires an instrument with an angular resolution of at least 2 deg, equivalent to a linear array size of approximately 500 meters. Operations would consist of data acquisition during the lunar day, with regular data downlinks. The major components of the ROLSS array are 3 antenna arms arranged in a Y shape, with a central electronics package (CEP). Each antenna arm is a linear strip of polyimide film (e.g., Kapton (TM)) on which 16 single polarization dipole antennas are located by depositing a conductor (e.g., silver). The arms also contain transmission lines for carrying the radio signals from the science antennas to the CEP.
2016-09-01
to both genetic algorithms and evolution strategies to achieve these goals. The results of this research offer a promising new set of modified ...abs_all.jsp?arnumber=203904 [163] Z. Michalewicz, C. Z. Janikow, and J. B. Krawczyk, “A modified genetic algorithm for optimal control problems...Available: http://arc.aiaa.org/doi/abs/10.2514/2.7053 375 [166] N. Yokoyama and S. Suzuki, “Modified genetic algorithm for constrained trajectory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Greg F.; Cooley, Scott K.; Vienna, John D.
This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture (PQM) model expressed in the relative proportions of the 14 other components. The PQM model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
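The preemptive-constraining idea can be sketched with a scalar Kalman filter whose predicted and corrected estimates are clipped to known physical bounds before being propagated. The state, bounds, and noise values below are illustrative assumptions (the actual system is a multivariate EKF, and the covariance-constraining step is omitted here).

```python
# Scalar Kalman filter step with preemptive state constraining: the
# predicted estimate is clipped to [lo, hi] before the measurement
# correction, and the corrected estimate is clipped again afterwards.
# The trivial plant model x_k = x_{k-1} + u and all numbers are invented.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def constrained_kf_step(x, p, u, z, q=0.01, r=0.04, lo=0.0, hi=1.0):
    # Predict, then preemptively constrain the predicted state.
    x_pred = clip(x + u, lo, hi)
    p_pred = p + q
    # Measurement correction with the usual Kalman gain, then constrain.
    k = p_pred / (p_pred + r)
    x_new = clip(x_pred + k * (z - x_pred), lo, hi)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.5, 1.0
for z in [0.9, 1.4, 1.2, 0.95]:   # noisy measurements, some out of bounds
    x, p = constrained_kf_step(x, p, u=0.05, z=z)
```

Constraining before the correction step is what keeps physically impossible intermediate estimates (and the resulting filter divergence) from ever entering the update, which is the role the preemptive-constraining processor plays in the full system.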
NASA Astrophysics Data System (ADS)
Rummel, J. D.; Conley, C. A.
2013-12-01
The 2013-2022 NRC Decadal Survey named its #1 Flagship priority as a large, capable Mars rover that would be the first of a three-mission, multi-decadal effort to return samples from Mars. More recently, NASA's Mars Program has stated that a Mars rover mission known as 'Mars 2020' would be flown to Mars (in 2020) to accomplish a subset of the goals specified by the NRC, and the recent report of the Mars 2020 Science Definition Team (SDT) has recommended that the mission accomplish broad and rigorous in situ science, including seeking biosignatures and acquiring a diverse set of samples intended to address a range of Mars science questions and storing them in a cache for potential return to Earth at a later time, while also meeting engineering goals to constrain costs and support future human exploration. In some ways Mars 2020 will share planetary protection requirements with the Mars Science Laboratory mission that landed in 2012, which included landing site constraints based on the presence of a perennial heat source (the MMRTG) aboard the lander/rover. In a very significant way, however, Mars 2020 differs through the presence of a sample cache and the potential to be the first mission in the chain that will return a sample from Mars to Earth. Thus Mars 2020 will face more stringent requirements aimed at keeping the mission from returning Earth contamination with the samples from Mars. Mars 2020 will be looking for biosignatures of ancient life, on Mars, but will also need to be concerned with the potential to detect extant biosignatures or life itself within the sample that is eventually returned.
If returned samples are able to unlock wide-ranging questions about the geology, surface processes, and habitability of Mars that cannot be answered by study of meteorites or current mission data, then either the returned samples must be free enough of Earth organisms to be releasable from a quarantine facility or the planned work of sample scientists, including high- and low-T geochemistry, igneous and sedimentary petrology, mineral spectroscopy, and astrobiology, will have to be accomplished within a containment facility. The returned samples also need to be clean of Earth organisms to avoid the potential that Earth contamination will mask the potential for martian life to be detected, allowing only non-conclusive or false-negative results. The requirements placed on the Mars 2020 mission to address contamination control in a life-detection framework will be one of the many challenges faced in this potential first step in Mars sample return.
Dynamics simulations for engineering macromolecular interactions
Robinson-Mosher, Avi; Shinar, Tamar; Silver, Pamela A.; Way, Jeffrey
2013-01-01
The predictable engineering of well-behaved transcriptional circuits is a central goal of synthetic biology. The artificial attachment of promoters to transcription factor genes usually results in noisy or chaotic behaviors, and such systems are unlikely to be useful in practical applications. Natural transcriptional regulation relies extensively on protein-protein interactions to ensure tightly controlled behavior, but such tight control has been elusive in engineered systems. To help engineer protein-protein interactions, we have developed a molecular dynamics simulation framework that simplifies features of proteins moving by constrained Brownian motion, with the goal of performing long simulations. The behavior of a simulated protein system is determined by summation of forces that include a Brownian force, a drag force, excluded volume constraints, relative position constraints, and binding constraints that relate to experimentally determined on-rates and off-rates for chosen protein elements in a system. Proteins are abstracted as spheres. Binding surfaces are defined radially within a protein. Peptide linkers are abstracted as small protein-like spheres with rigid connections. To address whether our framework could generate useful predictions, we simulated the behavior of an engineered fusion protein consisting of two 20 000 Da proteins attached by flexible glycine/serine-type linkers. The two protein elements remained closely associated, as if constrained by a random walk in three dimensions of the peptide linker, as opposed to showing a distribution of distances expected if movement were dominated by Brownian motion of the protein domains only. We also simulated the behavior of fluorescent proteins tethered by a linker of varying length, compared the predicted Förster resonance energy transfer with previous experimental observations, and obtained a good correspondence.
Finally, we simulated the binding behavior of a fusion of two ligands that could simultaneously bind to distinct cell-surface receptors, and explored the landscape of linker lengths and stiffnesses that could enhance receptor binding of one ligand when the other ligand has already bound to its receptor, thus, addressing potential mechanisms for improving targeted signal transduction proteins. These specific results have implications for the design of targeted fusion proteins and artificial transcription factors involving fusion of natural domains. More broadly, the simulation framework described here could be extended to include more detailed system features such as non-spherical protein shapes and electrostatics, without requiring detailed, computationally expensive specifications. This framework should be useful in predicting behavior of engineered protein systems including binding and dissociation reactions. PMID:23822508
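The core idea of the framework described above, overdamped Brownian motion of spherical domains plus a summation of forces, can be illustrated with a minimal sketch. This is not the authors' code: the spring constant, linker rest length, domain radius, and time step below are all invented for illustration, and the flexible linker is reduced to a single harmonic tether between two spheres.

```python
import numpy as np

kT = 4.1e-21        # thermal energy at ~300 K, J
radius = 2.0e-9     # sphere radius, m (roughly a 20 kDa domain; assumed)
eta = 1.0e-3        # water viscosity, Pa*s
gamma = 6 * np.pi * eta * radius      # Stokes drag coefficient
D = kT / gamma                        # diffusion coefficient
k_link = 1.0e-3     # linker spring constant, N/m (assumed)
r0 = 4.0e-9         # linker rest length, m (assumed)
dt = 1.0e-10        # time step, s

def step(x1, x2, rng):
    """Advance both spheres one overdamped Brownian-dynamics step:
    deterministic drift from the spring force plus a random kick."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    f = k_link * (dist - r0) * d / dist   # spring force on sphere 1
    noise = np.sqrt(2 * D * dt)
    x1 = x1 + (f / gamma) * dt + noise * rng.standard_normal(3)
    x2 = x2 - (f / gamma) * dt + noise * rng.standard_normal(3)
    return x1, x2

rng = np.random.default_rng(0)
x1 = np.zeros(3)
x2 = np.array([r0, 0.0, 0.0])
for _ in range(20000):
    x1, x2 = step(x1, x2, rng)
sep = np.linalg.norm(x2 - x1)
print(f"final separation: {sep * 1e9:.2f} nm")
```

Even in this stripped-down form, the tethered pair stays within a few nanometers of the rest length rather than diffusing apart, the qualitative behavior the abstract reports for the fusion protein.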
Infertility in resource-constrained settings: moving towards amelioration.
Hammarberg, Karin; Kirkman, Maggie
2013-02-01
It is often presumed that infertility is not a problem in densely populated, resource-poor areas where fertility rates are high. This presumption is challenged by consistent evidence that the consequences of childlessness are very severe in low-income countries, particularly for women. In these settings, childless women are frequently stigmatized, isolated, ostracized, disinherited and neglected by the family and local community. This may result in physical and psychological abuse, polygamy and even suicide. Because many families in low-income countries depend on children for economic survival, childlessness and having fewer children than the number identified as appropriate are social and public health matters, not only medical problems. Attitudes among people in high-income countries towards provision of infertility care in low-income countries have mostly been either dismissive or indifferent, as it is argued that scarce healthcare resources and family planning activities should be directed towards reducing fertility and restricting population growth. However, recognition of the plight of infertile couples in low-income settings is growing. One of the United Nations' Millennium Development Goals was universal access to reproductive health care by 2015, and WHO has recommended that infertility be considered a global health problem and has stated the need for adaptation of assisted reproduction technology in low-resource countries. This paper challenges the construct that infertility is not a serious problem in resource-constrained settings and argues that there is a need for infertility care, including affordable assisted reproduction treatment, in these settings. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Reliability Constrained Priority Load Shedding for Aerospace Power System Automation
NASA Technical Reports Server (NTRS)
Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)
2000-01-01
The need for improved load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be incorporated, including the congestion margin determined by weighted probability contingency, a component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is based on the priority, value and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is embedded in the optimization method. It assists in selecting which feeder load to shed, along with the location, value and priority of the load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested on a benchmark NASA system consisting of generators, loads and a network.
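Everett's generalized Lagrange multiplier method, which the abstract extends, handles exactly this kind of discrete (all-or-nothing) decision: for a multiplier λ, each load is kept independently iff value − λ·power > 0, and sweeping λ over the value/power ratios traces out candidate schedules. The sketch below uses invented feeder loads and a made-up capacity limit, not the paper's NASA benchmark, and omits the congestion-margin and reliability constraints.

```python
# Hypothetical feeder loads: (name, power_kW, priority_value)
loads = [("life_support", 3.0, 100.0),
         ("comms",        2.0,  40.0),
         ("experiment_A", 4.0,  12.0),
         ("experiment_B", 2.5,  10.0),
         ("lighting",     1.5,   6.0)]

def everett_schedule(loads, capacity_kw):
    """Sweep the multiplier over the value/power breakpoints; at each
    lambda a load is kept independently iff value - lambda*power > 0
    (Everett's condition), and the best feasible schedule is returned."""
    best = None
    for lam in [v / p for _, p, v in loads] + [0.0]:
        kept = [(n, p, v) for n, p, v in loads if v - lam * p > 0]
        total_p = sum(p for _, p, _ in kept)
        total_v = sum(v for _, _, v in kept)
        if total_p <= capacity_kw and (best is None or total_v > best[0]):
            best = (total_v, [n for n, _, _ in kept])
    return best

value, kept = everett_schedule(loads, capacity_kw=9.0)
print(value, kept)
```

Note the standard Everett caveat: the strict inequality drops loads sitting exactly at a breakpoint, so ties need a rule-based tiebreaker in practice, which is consistent with the paper's pairing of the optimizer with a rule-based scheme.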
Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.
2013-01-01
NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler to mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation involves several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking, assembly, and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
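The structure of such a discrete event simulation can be sketched in a few lines. This toy model is not the paper's simulation: the launch count, nominal spacing, exponential delay, per-launch success probability, and campaign deadline below are all illustrative assumptions, and a real model would separate the many risk factors the abstract lists.

```python
import random

def campaign_success(n_launches, window_days, p_launch_ok=0.95,
                     mean_delay=20.0, spacing_days=30.0, rng=None):
    """One Monte Carlo trial of a launch-and-assembly campaign.

    Launches are nominally spaced; each slips by an exponential schedule
    delay and may fail outright, which loses the mission in this toy model
    (no spare launches)."""
    rng = rng or random
    t = 0.0
    for i in range(n_launches):
        t = max(t, i * spacing_days) + rng.expovariate(1.0 / mean_delay)
        if rng.random() > p_launch_ok:
            return False
    return t <= window_days  # all elements up before the TMI window closes

rng = random.Random(42)
trials = 5000
p = sum(campaign_success(4, 200.0, rng=rng) for _ in range(trials)) / trials
print(f"estimated campaign reliability: {p:.3f}")
```

Adding a spare-launch branch to `campaign_success` (retry on failure, at the cost of extra schedule time) is the natural next step and reproduces the paper's qualitative finding that spares improve mission success.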
Water Use Optimization Toolset Project: Development and Demonstration Phase Draft Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasper, John R.; Veselka, Thomas D.; Mahalik, Matthew R.
2014-05-19
This report summarizes the results of the development and demonstration phase of the Water Use Optimization Toolset (WUOT) project. It identifies the objective and goals that guided the project, as well as demonstrating potential benefits that could be obtained by applying the WUOT in different geo-hydrologic systems across the United States. A major challenge facing conventional hydropower plants is to operate more efficiently while dealing with an increasingly uncertain water-constrained environment and complex electricity markets. The goal of this 3-year WUOT project, which is funded by the U.S. Department of Energy (DOE), is to improve water management, resulting in more energy, revenues, and grid services from available water, and to enhance environmental benefits from improved hydropower operations and planning while maintaining institutional water delivery requirements. The long-term goal is for the WUOT to be used by environmental analysts and deployed by hydropower schedulers and operators to assist in market, dispatch, and operational decisions.
The EB Factory: Fundamental Stellar Astrophysics with Eclipsing Binary Stars Discovered by Kepler
NASA Astrophysics Data System (ADS)
Stassun, Keivan
Eclipsing binaries (EBs) are key laboratories for determining the fundamental properties of stars. EBs are therefore foundational objects for constraining stellar evolution models, which in turn are central to determinations of stellar mass functions, of exoplanet properties, and many other areas. The primary goal of this proposal is to mine the Kepler mission light curves for: (1) EBs that include a subgiant star, from which precise ages can be derived and which can thus serve as critically needed age benchmarks; and within these, (2) long-period EBs that include low-mass M stars or brown dwarfs, which are increasingly becoming the focus of exoplanet searches, but for which there are the fewest available fundamental mass-radius-age benchmarks. A secondary goal of this proposal is to develop an end-to-end computational pipeline -- the Kepler EB Factory -- that allows automatic processing of Kepler light curves for EBs, from period finding, to object classification, to determination of EB physical properties for the most scientifically interesting EBs, and finally to accurate modeling of these EBs for detailed tests and benchmarking of theoretical stellar evolution models. We will integrate the most successful algorithms into a single, cohesive workflow environment, and apply this 'Kepler EB Factory' to the full public Kepler dataset to find and characterize new "benchmark grade" EBs, and will disseminate both the enhanced data products from this pipeline and the pipeline itself to the broader NASA science community. The proposed work responds directly to two of the defined Research Areas of the NASA Astrophysics Data Analysis Program (ADAP), specifically Research Area #2 (Stellar Astrophysics) and Research Area #9 (Astrophysical Databases). To be clear, our primary goal is the fundamental stellar astrophysics that will be enabled by the discovery and analysis of relatively rare, benchmark-grade EBs in the Kepler dataset.
At the same time, enabling this goal will require bringing a suite of extant and new custom algorithms to bear on the Kepler data, and thus our development of the Kepler EB Factory represents a value-added product that will allow the widest scientific impact of the information locked within the vast reservoir of the Kepler light curves.
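The period-finding stage of a pipeline like the one proposed can be illustrated with a toy phase-dispersion-minimization search: fold the light curve at each trial period and keep the period that minimizes the scatter within phase bins. This is a generic stand-in, not the proposal's actual algorithm, and the synthetic eclipse signal below is invented for the demonstration.

```python
import numpy as np

def pdm_period(t, flux, trial_periods, n_bins=10):
    """Return the trial period that minimizes the scatter of the folded curve."""
    best_p, best_s = None, np.inf
    for p in trial_periods:
        phase = (t / p) % 1.0
        bins = np.floor(phase * n_bins).astype(int)
        s = 0.0
        for b in range(n_bins):
            vals = flux[bins == b]
            if vals.size > 1:
                s += np.var(vals) * vals.size   # within-bin scatter
        if s < best_s:
            best_p, best_s = p, s
    return best_p

# Synthetic eclipsing-binary-like light curve with a 2.5-day period
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 90.0, 2000)            # 90 days of random sampling
phase = (t / 2.5) % 1.0
flux = 1.0 - 0.2 * (np.abs(phase - 0.45) < 0.05) + rng.normal(0.0, 0.005, t.size)

best = pdm_period(t, flux, np.arange(1.0, 4.0, 0.002))
print(best)
```

A production pipeline would of course use faster, purpose-built algorithms (e.g. box-fitting searches) and then hand candidates to the classification and modeling stages described above.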
Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J
2011-04-25
We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface are designed to enable the rapid and convenient generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie.
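The enumerate-then-filter pattern described above can be sketched in a few lines of Python. This is a toy in the spirit of CycloPs, not its actual code: the residue SMILES fragments cover only a tiny alphabet, peptides are left linear rather than cyclized, and the "synthesizability" rule (skipping a Gly-Gly motif) is a purely hypothetical stand-in for the empirical filters the paper describes.

```python
from itertools import product

# Backbone SMILES fragments for a few residues (N-to-C reading)
RESIDUES = {
    "G": "NCC(=O)",            # glycine
    "A": "N[C@@H](C)C(=O)",    # alanine
    "S": "N[C@@H](CO)C(=O)",   # serine
}

def peptide_smiles(seq):
    """Concatenate residue fragments and cap the C-terminus as a free acid."""
    return "".join(RESIDUES[aa] for aa in seq) + "O"

def library(alphabet, length):
    """Enumerate all sequences, applying a toy synthesizability filter."""
    for seq in product(alphabet, repeat=length):
        s = "".join(seq)
        if "GG" not in s:   # hypothetical hard-to-make motif (assumption)
            yield s, peptide_smiles(s)

lib = dict(library("GAS", 2))
print(len(lib), lib["GA"])
```

The real software layers on nonnatural residues, cyclization chemistries, 3D SDF generation, and property-based filters, but the enumerate/assemble/filter skeleton is the same.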
NASA Astrophysics Data System (ADS)
Grosset, L.; Rouan, D.; Gratadour, D.; Pelat, D.; Orkisz, J.; Marin, F.; Goosmann, R.
2018-04-01
Aims: In this paper we aim to constrain the properties of dust structures in the central first parsecs of active galactic nuclei (AGN). Our goal is to study the required optical depth and composition of different dusty and ionised structures. Methods: We developed a radiative transfer code called Monte Carlo for Active Galactic Nuclei (MontAGN), which is optimised for polarimetric observations in the infrared. With both this code and STOKES, designed to be relevant from the hard X-ray band to near-infrared wavelengths, we investigate the polarisation emerging from a characteristic model of the AGN environment. For this purpose, we compare predictions of our models with previous infrared observations of NGC 1068, and try to reproduce several key polarisation patterns revealed by polarisation mapping. Results: We constrain the required dust structures and their densities. More precisely, we find that the electron density inside the ionisation cone is about 2.0 × 10^9 m^-3. With structures constituted of spherical grains of constant density, we also highlight that the torus should be thicker than 20 in terms of K-band optical depth to block direct light from the centre. It should also have a stratification in density: a less dense outer rim with an optical depth at 2.2 μm typically between 0.8 and 4, in order to observe the double-scattering effect previously proposed. Conclusions: We place constraints on the dust structures in the inner parsecs of an AGN model supposed to describe NGC 1068. When compared to observations, this leads to an optical depth of at least 20 in the Ks band for the torus of NGC 1068, corresponding to τV ≈ 170, which is within the range of current estimates based on observations. In the future, we will improve our study by including non-uniform dust structures and aligned elongated grains to constrain other possible interpretations of the observations.
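The claim that a torus of K-band optical depth ≳ 20 blocks direct light follows from the basic Monte Carlo transfer step: sample a photon's free path as −ln(U) in optical-depth units and check whether it exceeds the slab thickness. The sketch below is far simpler than a full polarimetric code like MontAGN or STOKES (no scattering, no polarisation, no geometry), just the direct-transmission estimate.

```python
import math
import random

def direct_fraction(tau, n_photons=100000, rng=None):
    """Monte Carlo estimate of the fraction of photons crossing a slab of
    optical depth tau without interacting; analytically this is exp(-tau)."""
    rng = rng or random
    escaped = sum(1 for _ in range(n_photons)
                  if -math.log(1.0 - rng.random()) > tau)
    return escaped / n_photons

rng = random.Random(7)
f_thin = direct_fraction(2.0, rng=rng)    # moderately thick: some direct light
f_thick = direct_fraction(20.0, rng=rng)  # torus-like: direct light suppressed
print(f_thin, f_thick)
```

At τ = 20 the analytic transmission exp(−20) ≈ 2 × 10⁻⁹, so even 10⁵ simulated photons typically yield zero direct escapes: any light seen from the centre must arrive by scattering, which is why the polarisation patterns are diagnostic.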
Constraining the processes modifying the surfaces of the classical Uranian satellites
NASA Astrophysics Data System (ADS)
Cartwright, Richard J.; Emery, Joshua P.
2016-10-01
Near-infrared (NIR) observations of the classical Uranian moons have detected relatively weak H2O ice bands, mixed with a spectrally red, low albedo constituent on the surfaces of their southern hemispheres (sub-observer lat. ~10 - 75°S). The H2O bands and the degree of spectral reddening are greatest on the leading hemispheres of these moons. CO2 ice bands have been detected in spectra collected over their trailing hemispheres, with stronger CO2 bands on the moons closest to Uranus. Our preferred hypotheses to explain the distribution of CO2, H2O, and dark material are: bombardment by magnetospherically-embedded charged particles, primarily on the trailing hemispheres of these moons, and bombardment by micrometeorites, primarily on their leading hemispheres. To test these complementary hypotheses, we are constraining the distribution and spectral characteristics of surface constituents on the currently observable northern hemispheres (sub-observer lat. ~20 - 35°N) to compare with existing southern hemisphere data. Analysis of northern hemisphere data shows that CO2 is present on their trailing hemispheres, and H2O bands and the degree of spectral reddening are strongest on their leading hemispheres, in agreement with the southern hemisphere data. This longitudinal distribution of constituents supports our preferred hypotheses. However, tantalizing mysteries regarding the distribution of constituents remain. There has been no detection of CO2 on Miranda, and H2O bands are stronger on its trailing hemisphere. NIR slope measurements indicate that the northern hemisphere of Titania is redder than Oberon, unlike the spectral colors of their southern hemispheres. There are latitudinal variations in H2O band strengths on these moons, with stronger H2O bands at northern latitudes compared to southern latitudes on Umbriel and Titania.
Several Miranda and Ariel spectra potentially include weak and unconfirmed NH3-hydrate bands, which could be tracers of cryovolcanic emplacement. We will present work related to our goals of constraining the processes modifying the surfaces of the classical Uranian moons.
A framework for evaluating and designing citizen science programs for natural resources monitoring.
Chase, Sarah K; Levine, Arielle
2016-06-01
We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring. © 2016 Society for Conservation Biology.
Joiner, Keith A; Libecap, Ann; Cress, Anne E; Wormsley, Steve; St Germain, Patricia; Berg, Robert; Malan, Philip
2008-09-01
The authors describe initiatives at the University of Arizona College of Medicine to markedly expand faculty, build research along programmatic lines, and promote a new, highly integrated medical school curriculum. Accomplishing these goals in this era of declining resources is challenging. The authors describe their approaches and outcomes to date, derived from a solid theoretical framework in the management literature, to (1) support research faculty recruitment, emphasizing return on investment, by using net present value to guide formulation of recruitment packages, (2) stimulate efficiency and growth through incentive plans, by using utility theory to optimize incentive plan design, (3) distribute resources to support programmatic growth, by allocating research space and recruitment dollars to maximize joint hires between units with shared interests, and (4) distribute resources from central administration to encourage medical student teaching, by aligning state dollars to support a new integrated organ-system based-curriculum. Detailed measurement is followed by application of management principles, including mathematical modeling, to make projections based on the data collected. Although each of the initiatives was developed separately, they are linked functionally and financially, and they are predicated on explicitly identifying opportunity costs for all major decisions, to achieve efficiencies while supporting growth. The overall intent is to align institutional goals in education, research, and clinical care with incentives for unit heads and individual faculty to achieve those goals, and to create a clear line of sight between expectations and rewards. Implementation is occurring in a hypothesis-driven fashion, permitting testing and refinement of the strategies.
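The return-on-investment arithmetic behind the first initiative, using net present value to shape recruitment packages, is simple to state concretely. The cash flows and discount rate below are invented for illustration; they are not the college's actual figures.

```python
# A minimal net-present-value sketch for a recruitment package.
def npv(rate, cashflows):
    """Discount a series of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: startup package outlay; years 1-5: net research revenue attributed
# to the recruit (hypothetical numbers, in thousands of dollars).
package = [-900, 150, 250, 300, 300, 300]
value = npv(0.05, package)
print(f"NPV at 5% discount: ${value:.0f}k")
```

A positive NPV under the chosen discount rate is the decision signal: here the hypothetical package returns its cost with a surplus, so it would clear the kind of return-on-investment hurdle the authors describe.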
To Pluto by way of a postage stamp
NASA Technical Reports Server (NTRS)
Staehle, Robert L.; Terrile, Richard J.; Weinstein, Stacy S.
1994-01-01
In this time of constrained budgets, the primary question facing planetary explorers is not 'Can we do it?' but 'Can we do it cheaply?' Taunted by words on a postage stamp, a group of mission designers at the Jet Propulsion Laboratory is struggling to find a cheap way to go to Pluto. Three primary goals were set by the science community: (1) imaging of Pluto and Charon, (2) mapping their surface composition, and (3) characterizing Pluto's atmosphere. The spacecraft will be designed around these primary goals. With the help of the Advanced Technology Insertion (ATI) process, $5 million was allotted over two years to shop for lightweight components and subsystems using new technology never tried on a planetary mission. The process for this search and development is described.
Hybrid Motion Planning with Multiple Destinations
NASA Technical Reports Server (NTRS)
Clouse, Jeffery
1998-01-01
In our initial proposal, we laid plans for developing a hybrid motion planning system that combines the concepts of visibility-based motion planning, artificial potential field based motion planning, evolutionary constrained optimization, and reinforcement learning. Our goal was, and still is, to produce a hybrid motion planning system that outperforms the best traditional motion planning systems on problems with dynamic environments. The proposed hybrid system will be in two parts: the first is a global motion planning system and the second is a local motion planning system. The global system will take global information about the environment, such as the placement of the obstacles and goals, and produce feasible paths through those obstacles. We envision a system that combines the evolutionary-based optimization and visibility-based motion planning to achieve this end.
The Intriguing Case of the (Almost) Dark Galaxy AGC 229385
NASA Astrophysics Data System (ADS)
Salzer, John
2015-10-01
The ALFALFA blind HI survey has catalogued tens of thousands of HI sources over 7000 square degrees of high Galactic latitude sky. While the vast majority of the sources in ALFALFA have optical counterparts in existing wide-field surveys like SDSS, a class of objects has been identified that have no obvious optical counterparts in existing catalogs. Dubbed "almost dark" galaxies, these objects represent an extreme in the continuum of galaxy properties, with the highest HI mass-to-optical light ratios ever measured. We propose to use HST to observe AGC 229385, an almost dark object found in deep WIYN imaging to have an ultra-low surface brightness stellar component with extremely blue colors. AGC 229385 falls well off all galaxy scaling relationships, including the Baryonic Tully-Fisher relation. Ground-based optical and HI data have been able to identify this object as extreme, but are insufficient to constrain the properties of its stellar component or its distance - for this, we need HST. Our science goals are twofold: to better constrain the distance to AGC 229385, and to investigate the stellar population(s) in this mysterious object. The requested observations will not only provide crucial insight into the properties and evolution of this specific system but will also help us understand this important class of ultra-low surface brightness, gas-rich galaxies. The proposed observations are designed to be exploratory, yet they promise to pay rich dividends for a modest investment in observing time.
FUSE Observations of the Dwarf Seyfert Nucleus of NGC 4395
NASA Astrophysics Data System (ADS)
Kraemer, Steven B.
The Sd IV dwarf galaxy NGC 4395 is the nearest (d approx. 2.6 Mpc) and least luminous (L_bol < 10^41 ergs s^-1) example of a Seyfert 1 galaxy. This unique object possesses all of the classic Seyfert 1 properties in miniature, including broad and narrow emission lines, a non-stellar continuum, and highly variable X-ray emission, presumably powered by a small (10^5 M_sun) black hole. Furthermore, there is evidence for blue-shifted, intrinsic absorption lines in the UV (C IV lambda lambda 1548.2, 1550.8), while X-ray spectra show the presence of bound-free edges from O VII and O VIII and evidence for even more highly ionized gas. The UV absorption could arise within the X-ray absorbers or, alternatively, within the emission-line gas, which we have determined to have a high covering factor. The unique capabilities of FUSE provide the means with which to constrain the ionization state, column density, and covering factor of the absorbers and, hence, distinguish between these two possibilities. By extending our investigation of intrinsic absorption to the low luminosity extreme of the Seyfert population, we will obtain crucial insight into the effects of luminosity, global covering factor, and central black hole mass on the intrinsic absorbers. A second goal of this project is to constrain the spectral energy distribution of the non-stellar continuum radiation, which may be unique in this object as a consequence of its small black hole mass.
Mapping the Solar Wind from its Source Region into the Outer Corona
NASA Technical Reports Server (NTRS)
Esser, Ruth
1998-01-01
Knowledge of the radial variation of the plasma conditions in the coronal source region of the solar wind is essential to exploring coronal heating and solar wind acceleration mechanisms. The goal of the present proposal is to determine as many plasma parameters in that region as possible by coordinating different observational techniques, such as Interplanetary Scintillation Observations, spectral line intensity observations, polarization brightness measurements and X-ray observations. The inferred plasma parameters are then used to constrain solar wind models.
Hierachical Object Recognition Using Libraries of Parameterized Model Sub-Parts.
1987-06-01
Keywords: Sketch, Structure Hierarchy, Constrained Search. This thesis describes the ... these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to ... with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape ...
BEAMS Lab: Novel approaches to finding a balance between throughput and sensitivity
NASA Astrophysics Data System (ADS)
Liberman, Rosa G.; Skipper, Paul L.; Prakash, Chandra; Shaffer, Christopher L.; Flarakos, Jimmy; Tannenbaum, Steven R.
2007-06-01
Development of 14C AMS has long pursued the twin goals of maximizing both sensitivity and precision in the interest, among others, of optimizing radiocarbon dating. Application of AMS to biomedical research is less constrained with respect to sensitivity requirements, but more demanding of high throughput. This work presents some technical and conceptual developments in sample processing and analytical instrumentation designed to streamline the process of extracting quantitative data from the various types of samples encountered in analytical biochemistry.
NASA Technical Reports Server (NTRS)
Glaze, Lori S.; Baloga, S. M.; Garvin, James B.; Quick, Lynnae C.
2014-01-01
Investigation of lava flow deposits is a key component of Investigation II.A.1 in the VEXAG Goals, Objectives and Investigations. Because much of the Venus surface is covered in lava flows, characterization of lava flow emplacement conditions (eruption rate and eruption duration) is critical for understanding the mechanisms through which magma is stored and released onto the surface as well as for placing constraints on rates of volcanic resurfacing throughout the geologic record preserved at the surface.
Identity-Based Motivation: Constraints and Opportunities in Consumer Research.
Shavitt, Sharon; Torelli, Carlos J; Wong, Jimmy
2009-07-01
This commentary underscores the integrative nature of the identity-based motivation model (Oyserman, 2009). We situate the model within existing literatures in psychology and consumer behavior, and illustrate its novel elements with research examples. Special attention is devoted to (1) how product- and brand-based affordances constrain identity-based motivation processes and (2) the mindsets and action tendencies that can be triggered by specific cultural identities in pursuit of consumer goals. Future opportunities are suggested for researching the antecedents of product meanings and relevant identities.
Calver, Janine; Holman, C D'Arcy; Lewin, Gill
2004-01-01
political and policy settlement further institutionalised surveillance as the basis of the MCHS. The restructured Service has remained constrained by the dominance of health surveillance as the primary program goal even after more varied contracting arrangements replaced CCT. Although recent initiatives indicate signs of change, narrow surveillance-based guidelines for Victorian MCH Services are not consistent, we argue, with recent early-years-of-life policy, which calls for approaches derived from socio-ecological models of health.
Petrology and Geochemistry of New Paired Martian Meteorites LAR 12095 and LAR 12240
NASA Technical Reports Server (NTRS)
Funk, R. C.; Brandon, A. D.; Peslier, A.
2015-01-01
The meteorites LAR 12095 and LAR 12240 are believed to be paired Martian meteorites and were discovered during the Antarctic Search for Meteorites (ANSMET) 2012-2013 Season at Larkman Nunatak. The purpose of this study is to characterize these olivine-phyric shergottites by analyzing all mineral phases for major, minor and trace elements and examining their textural relationships. The goal is to constrain their crystallization history and place these shergottites among other Martian meteorites in order to better understand Martian geological history.
2009-10-29
Presidential Initiative to End Hunger in Africa (IEHA)—which represented the U.S. strategy to help fulfill the MDG goal of halving hunger by 2015...was constrained in funding and limited in scope. In 2005, USAID, the primary agency that implemented IEHA, committed to providing an estimated $200...Development Assistance (DA) and other accounts. IEHA was intended to build an African-led partnership to cut hunger and poverty by investing in efforts
NASA Astrophysics Data System (ADS)
Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald
2017-10-01
Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.
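The core estimation step in such an analysis, adjusting the in-phase and out-of-phase amplitude of a nutation term against celestial pole offset residuals, reduces to linear least squares. The sketch below uses synthetic weekly "CPO residuals" with invented amplitudes and noise; the paper's global VLBI adjustment is of course far more elaborate (many terms, full covariance, session weighting).

```python
import numpy as np

P = 6798.38                        # 18.6-yr principal nutation period, days
t = np.arange(0.0, 12000.0, 7.0)   # ~33 yr of weekly sessions (toy cadence)
rng = np.random.default_rng(3)
true_cos, true_sin = 80.0, -45.0   # "unknown" corrections, microarcsec (assumed)
dpsi = (true_cos * np.cos(2 * np.pi * t / P)
        + true_sin * np.sin(2 * np.pi * t / P)
        + rng.normal(0.0, 150.0, t.size))   # noisy synthetic CPO residuals

# Design matrix: in-phase and out-of-phase columns for the single term
A = np.column_stack([np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)])
est, *_ = np.linalg.lstsq(A, dpsi, rcond=None)
print(est)
```

Even with 150 μas per-session noise the two amplitudes are recovered to a few μas, which illustrates why the longest-period terms, well sampled by decades of VLBI, show the largest and best-determined corrections.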
Background Model for the Majorana Demonstrator
NASA Astrophysics Data System (ADS)
Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.
The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of significantly reducing radioactive impurities in construction materials and applying analytical methods for background rejection, for example, powerful pulse-shape analysis techniques that exploit the p-type point-contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components, whose purity levels are constrained by radioassay measurements.
Adaptive management of ecosystem services across different land use regimes.
Ruhl, J B
2016-12-01
Using adaptive management to manage desired flows of ecosystem services may seem on the surface to be a good fit, but many social, economic, environmental, legal, and political factors influence how good a fit it actually is. One strongly influential factor is the land use regime within which the profile of ecosystem services is being managed. Shaped largely by legal mandates, market forces, and social and cultural practices, different land use regimes present different opportunities for and constraints on goals for ecosystem services and pose different decision making environments. Even where all other conditions appear amenable to using adaptive management, therefore, it is essential to consider the constraining (or liberating) effects of different land use regimes when deciding whether to adopt adaptive management to achieve those goals and, if so, how to implement it. Copyright © 2016 Elsevier Ltd. All rights reserved.
Speed Profiles for Deceleration Guidance During Rollout and Turnoff (ROTO)
NASA Technical Reports Server (NTRS)
Barker, L. Keith; Hankins, Walter W., III; Hueschen, Richard M.
1999-01-01
Two NASA goals are to enhance airport safety and to improve capacity in all weather conditions. This paper contributes to these goals by examining speed guidance profiles to aid a pilot in decelerating along the runway to an exit. A speed profile essentially tells the pilot what the airplane's speed should be as a function of where the airplane is on the runway. While it is important to get off the runway as soon as possible (when striving to minimize runway occupancy time), the deceleration along a speed profile should be constrained by passenger comfort. Several speed profiles are examined with respect to their maximum decelerations and times to reach exit speed. One profile varies speed linearly with distance; another has constant deceleration; and two related nonlinear profiles delay maximum deceleration (braking) to reduce time spent on the runway.
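The two profile shapes the abstract compares follow from elementary kinematics: a constant-deceleration profile satisfies v² = v0² − 2ax, while a speed-linear-in-distance profile v(x) = v0 + (vf − v0)x/L has deceleration a(x) = −v dv/dx, which peaks at touchdown. The sketch below computes the maximum deceleration and time-to-exit for each; the touchdown speed, exit speed, and distance are assumed example values, not figures from the paper.

```python
import numpy as np

# Hedged sketch (illustrative, not NASA's guidance law): compare two rollout
# speed profiles from touchdown speed v0 to exit speed vf over distance L.
v0, vf, L = 70.0, 15.0, 1500.0     # m/s, m/s, m (assumed example values)

# Profile 1: constant deceleration, v^2 = v0^2 - 2*a*x.
a_const = (v0**2 - vf**2) / (2 * L)      # required deceleration
t_const = 2 * L / (v0 + vf)              # time to reach the exit

# Profile 2: speed linear in distance, v(x) = v0 + (vf - v0) * x / L.
# Deceleration a(x) = -v dv/dx is largest at x = 0 (highest speed):
a_lin_max = v0 * (v0 - vf) / L
t_lin = L / (v0 - vf) * np.log(v0 / vf)  # from integrating dx / v(x)

print(f"constant decel: a = {a_const:.2f} m/s^2, t = {t_const:.1f} s")
print(f"linear-in-x:    a_max = {a_lin_max:.2f} m/s^2, t = {t_lin:.1f} s")
```

With these numbers the linear-in-distance profile brakes hardest right at touchdown, which motivates the paper's nonlinear profiles that delay peak braking.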
Cognitive Modeling of Social Behaviors
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Damer, Bruce; Brodsky, Boris
2004-01-01
The driving theme of cognitive modeling for many decades has been that knowledge affects how and which goals are accomplished by an intelligent being (Newell 1991). But when one examines groups of people living and working together, one is forced to recognize that whose knowledge is called into play, at a particular time and location, directly affects what the group accomplishes. Indeed, constraints on participation, including roles, procedures, and norms, affect whether an individual is able to act at all (Lave & Wenger 1991; Jordan 1992; Scribner & Sachs 1991). To understand both individual cognition and collective activity, perhaps the greatest opportunity today is to integrate the cognitive modeling approach (which stresses how beliefs are formed and drive behavior) with social studies (which stress how relationships and informal practices drive behavior). The crucial insight is that norms are conceptualized in the individual mind as ways of carrying out activities (Clancey 1997a, 2002b). This requires for the psychologist a shift from only modeling goals and tasks (why people do what they do) to modeling behavioral patterns (what people do) as people engage in purposeful activities. Instead of a model that exclusively deduces actions from goals, behaviors are also, if not primarily, driven by broader patterns of chronological and located activities (akin to scripts). This analysis is particularly inspired by activity theory (Leont'ev 1979). While acknowledging that knowledge (relating goals and operations) is fundamental for intelligent behavior, activity theory claims that a broader driver is the person's motives and conceptualization of activities. Such understanding of human interaction is normative (i.e., viewed with respect to social standards), affecting how knowledge is called into play and applied in practice.
Put another way, how problems are discovered and framed, what methods are chosen, and indeed who even cares or has the authority to act, are all constrained by norms, which are conceived and enacted by individuals.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
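The Lagrange-multiplier viewpoint the abstract builds on can be illustrated with a classical stochastic dual subgradient loop, the baseline that learn-and-adapt schemes improve upon. The toy problem below (log utility, a random per-slot cost, an average budget) is an assumption for illustration, not the paper's network model.

```python
import numpy as np

# Hedged sketch of the general idea (not the paper's algorithm): stochastic
# dual subgradient for online resource allocation.  Each slot we observe a
# random unit cost c, pick the allocation maximizing utility minus the
# Lagrangian price, then adapt the multiplier toward average-budget feasibility.
rng = np.random.default_rng(1)
b, mu, lam = 0.5, 0.01, 1.0          # budget, step size, initial multiplier
usage = []
for _ in range(100_000):
    c = rng.uniform(0.5, 1.5)                     # random per-slot cost
    # Primal step: argmax_x  log(1 + x) - lam * c * x   on [0, 2]
    x = np.clip(1.0 / (lam * c) - 1.0, 0.0, 2.0)
    usage.append(c * x)
    # Dual step: the price rises when the budget is exceeded, falls otherwise.
    lam = max(lam + mu * (c * x - b), 1e-6)
avg_usage = np.mean(usage[-50_000:])
print(avg_usage)    # long-run average resource use settles near b = 0.5
```

The multiplier acts as a congestion price: the loop never solves the expectation-constrained problem directly, yet the time-averaged usage converges to the budget, which is the queue-stability property the abstract refers to.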
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
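The defuzzification step the abstract mentions can be sketched for the simplest case, a triangular fuzzy number. The blending form below is one common expected-value operator with an optimistic-pessimistic index; it is an assumed illustration, not necessarily the paper's exact operator.

```python
def expected_value_triangular(a, b, c, lam=0.5):
    """Hedged sketch: expected value of a triangular fuzzy number (a, b, c)
    under an optimistic-pessimistic index lam in [0, 1].  This assumed form
    blends the pessimistic mean (a + b)/2 with the optimistic mean (b + c)/2;
    lam = 0 is fully pessimistic, lam = 1 fully optimistic."""
    return (1 - lam) * (a + b) / 2 + lam * (b + c) / 2

# Defuzzify an activity duration given as "about 10 days, between 8 and 14":
print(expected_value_triangular(8, 10, 14, lam=0.5))   # 10.5 with a neutral index
```

Once every fuzzy random duration and cost is reduced to a crisp number this way, the scheduling problem becomes a deterministic multimode trade-off model that the hybrid particle swarm optimizer can search.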
Constrained structural dynamic model verification using free vehicle suspension testing methods
NASA Technical Reports Server (NTRS)
Blair, Mark A.; Vadlamudi, Nagarjuna
1988-01-01
Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.
Unveiling the nucleon tensor charge at Jefferson Lab: A study of the SoLID case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Zhihong; Sato, Nobuo; Allada, Kalyan
2017-01-27
Here, future experiments at the Jefferson Lab 12 GeV upgrade, in particular the Solenoidal Large Intensity Device (SoLID), aim at a very precise data set in the region where the partonic structure of the nucleon is dominated by the valence quarks. One of the main goals is to constrain the transversity quark distributions. We apply recent theoretical advances in the global QCD extraction of the transversity distributions to study the impact of future experimental data from SoLID. In particular, we develop a model-independent method based on Hessian matrix analysis that allows one to estimate the uncertainties of the transversity quark distributions and their tensor charge contributions extracted from the SoLID pseudo-data. Both the u- and d-quark transversity distributions are shown to be very well constrained in the kinematical region of the future experiments with the proton and the effective neutron targets.
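Hessian uncertainty estimation of a derived quantity follows a standard recipe: near the chi-square minimum, the variance of F(a) is grad F · H⁻¹ · grad F. The sketch below demonstrates that propagation on a made-up two-parameter fit; the Hessian, best-fit values, and the derived quantity F are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

# Hedged sketch of generic Hessian error propagation (the standard technique
# the abstract refers to; numbers here are illustrative only).
# Near the minimum, chi2(a) ~ chi2_0 + (a - a0)^T H (a - a0) with
# H_ij = (1/2) d^2 chi2 / da_i da_j, so Cov(a) = H^{-1} for Delta chi2 = 1.
H = np.array([[40.0, 6.0], [6.0, 10.0]])    # assumed Hessian at the minimum
a0 = np.array([0.8, -0.3])                  # assumed best-fit parameters

def F(a):                                   # derived quantity, e.g. a tensor-charge proxy
    return a[0] - a[1]

eps = 1e-6                                  # central finite-difference gradient
grad = np.array([(F(a0 + eps * e) - F(a0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
sigma_F = np.sqrt(grad @ np.linalg.inv(H) @ grad)
print(sigma_F)
```

The same propagation applied to the transversity moments yields the tensor-charge uncertainties quoted in impact studies of this kind.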
NASA Technical Reports Server (NTRS)
Johnson, Kirk R.; Hickey, Leo J.
1988-01-01
The spatial and temporal distribution of vegetation in the terminal Cretaceous of Western Interior North America was a complex mosaic resulting from the interaction of factors including a shifting coastline, tectonic activity, a mild, possibly deteriorating climate, dinosaur herbivory, local facies effects, and a hypothesized bolide impact. In order to achieve sufficient resolution to analyze this vegetational pattern, over 100 megafloral collecting sites were established, yielding approximately 15,000 specimens, in Upper Cretaceous and lower Paleocene strata in the Williston, Powder River, and Bighorn basins in North Dakota, Montana, and Wyoming. These localities were integrated into a lithostratigraphic framework that is based on detailed local reference sections and constrained by vertebrate and palynomorph biostratigraphy, magnetostratigraphy, and sedimentary facies analysis. A regional biostratigraphy based on well located and identified plant megafossils that can be used to address patterns of floral evolution, ecology, and extinction is the goal of this research. Results of the analyses are discussed.
Lunar surface mine feasibility study
NASA Astrophysics Data System (ADS)
Blair, Brad R.
This paper describes a lunar surface mine, and demonstrates the economic feasibility of mining oxygen from the moon. The mine will be at the Apollo 16 landing site. Mine design issues include pit size and shape, excavation equipment, muck transport, and processing requirements. The final mine design will be driven by production requirements, and constrained by the lunar environment. This mining scenario assumes the presence of an operating lunar base. Lunar base personnel will set up and run the mine. The goal of producing lunar oxygen is to reduce dependence on fuel shipped from Earth. Thus, the lunar base is the customer for the finished product. The perspective of this paper is that of a mining contractor who must produce a specific product at a remote location, pay local labor, and sell the product to an onsite captive market. To make a profit, it must be less costly to build and ship specialized equipment to the site, and pay high labor and operating costs, than to ship the product directly to the site.
A method for the dynamic management of genetic variability in dairy cattle
Colleau, Jean-Jacques; Moureaux, Sophie; Briend, Michèle; Bechu, Jérôme
2004-01-01
According to the general approach developed in this paper, dynamic management of genetic variability in selected populations of dairy cattle is carried out for three simultaneous purposes: procreation of young bulls to be further progeny-tested, use of service bulls already selected and approval of recently progeny-tested bulls for use. At each step, the objective is to minimize the average pairwise relationship coefficient in the future population born from programmed matings and the existing population. As a common constraint, the average estimated breeding value of the new population, for a selection goal including many important traits, is set to a desired value. For the procreation of young bulls, breeding costs are additionally constrained. Optimization is fully analytical and directly considers matings. Corresponding algorithms are presented in detail. The efficiency of these procedures was tested on the current Norman population. Comparisons between optimized and real matings, clearly showed that optimization would have saved substantial genetic variability without reducing short-term genetic gains. PMID:15231230
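The core optimization the abstract describes, minimizing average relationship subject to a fixed average breeding value, has a closed-form solution in its equality-constrained version via the KKT system. The sketch below illustrates that reduced version on a made-up four-animal population; the relationship matrix and breeding values are assumed, and the paper's extra features (cost constraints, explicit matings, nonnegativity of contributions) are omitted.

```python
import numpy as np

# Hedged sketch: choose genetic contributions x minimizing the average
# relationship x'Ax while fixing average merit g'x and total contribution 1'x.
A = np.array([[1.0,  0.5,  0.25, 0.1 ],
              [0.5,  1.0,  0.1,  0.25],
              [0.25, 0.1,  1.0,  0.5 ],
              [0.1,  0.25, 0.5,  1.0 ]])   # relationship matrix (illustrative)
g = np.array([2.0, 1.5, 1.0, 0.5])         # estimated breeding values (illustrative)
G_target = 1.4                             # desired average genetic merit

# KKT system for: min x'Ax / 2   s.t.  g'x = G_target,  sum(x) = 1
# (contribution nonnegativity, handled in the real method, is ignored here).
C = np.vstack([g, np.ones(4)])
KKT = np.block([[A, C.T], [C, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(4), [G_target, 1.0]])
x = np.linalg.solve(KKT, rhs)[:4]
print(x, g @ x, x.sum())                   # both constraints hold exactly
```

Sweeping G_target traces the trade-off between short-term genetic gain and retained variability that the paper evaluates on the Norman population.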
Microbial habitability of Europa sustained by radioactive sources.
Altair, Thiago; de Avellar, Marcio G B; Rodrigues, Fabio; Galante, Douglas
2018-01-10
There is an increasing interest in the icy moons of the Solar System due to their potential habitability and as targets for future exploratory missions, which include astrobiological goals. Several studies have reported new results describing the details of these moons' geological settings; however, there is still a lack of information regarding the deep subsurface environment of the moons. The purpose of this article is to evaluate the microbial habitability of Europa constrained by terrestrial analogue environments and sustained by radioactive energy provided by natural unstable isotopes. The geological scenarios are based on known deep environments on Earth, and the bacterial ecosystem is based on a sulfate-reducing bacterial ecosystem found 2.8 km below the surface in a basin in South Africa. The results show the possibility of maintaining the modeled ecosystem based on the proposed scenarios and provide directions for future models and exploration missions for a more complete evaluation of the habitability of Europa and of icy moons in general.
NASA Technical Reports Server (NTRS)
Boulanger, Richard; Overland, David
2004-01-01
Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates of as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.
Ngai, Steven Sek Yum; Cheung, Jacky Chau-Kiu; To, Siu-ming; Luan, Hui; Zhao, Ruiling
2014-01-01
This study draws on data from focus groups involving 50 young people from low-income families in Hong Kong to investigate their school-to-work experiences. In line with the ecological–developmental perspective, our results show that contextual influences, including lower levels of parental involvement and lack of opportunities for further education or skill development, constrain both the formulation and pursuit of educational and career goals. In contrast, service use and supportive interactions with parents and non-family adults were found to help young people find a career direction and foster more adaptive transition. Furthermore, our results indicate a striking difference in intrapersonal agency and coping styles between youths who were attending further education or engaged in jobs with career advancement opportunities and those who were not. We discuss the implications of our findings, both for future research and for policy development to enhance the school-to-work transition of economically disadvantaged young people. PMID:25364087
Aircraft Wake Vortex Measurements at Denver International Airport
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Wang, Frank Y.; Booth, Earl R.; Watts, Michael E.; Fenichel, Neil; D'Errico, Robert E.
2004-01-01
Airport capacity is constrained, in part, by spacing requirements associated with the wake vortex hazard. NASA's Wake Vortex Avoidance Project has a goal to establish the feasibility of reducing this spacing while maintaining safety. Passive acoustic phased array sensors, if shown to have operational potential, may aid in this effort by detecting and tracking the vortices. During August/September 2003, NASA and the USDOT sponsored a wake acoustics test at the Denver International Airport. The central instrument of the test was a large microphone phased array. This paper describes the test in general terms and gives an overview of the array hardware. It outlines one of the analysis techniques that is being applied to the data and gives sample results. The technique is able to clearly resolve the wake vortices of landing aircraft and measure their separation, height, and sinking rate. These observations permit an indirect estimate of the vortex circulation. The array also provides visualization of the vortex evolution, including the Crow instability.
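The "indirect estimate of the vortex circulation" alluded to above rests on the classical line-vortex-pair relation: a counter-rotating pair of circulation Γ separated by b0 descends under mutual induction at w = Γ/(2πb0), so measured separation and sinking rate give Γ. The numbers below are assumed illustrative values, not measurements from the Denver test.

```python
import math

# Hedged sketch: classical vortex-pair relation behind the indirect
# circulation estimate.  Descent speed w = Gamma / (2 * pi * b0), hence
# Gamma ~ 2 * pi * b0 * w from the phased-array-measured separation and sink rate.
b0 = 47.0        # vortex separation, m (illustrative value)
w_sink = 1.8     # sinking rate, m/s (illustrative value)
gamma = 2 * math.pi * b0 * w_sink
print(f"estimated circulation: {gamma:.0f} m^2/s")
```

Because the phased array tracks both quantities over time, the same relation also lets one follow the circulation decay as the wake ages.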
Real Time Computation of Kinetic Constraints to Support Equilibrium Reconstruction
NASA Astrophysics Data System (ADS)
Eggert, W. J.; Kolemen, E.; Eldon, D.
2016-10-01
A new method for quickly and automatically applying kinetic constraints to EFIT equilibrium reconstructions using readily available data is presented. The ultimate goal is to produce kinetic equilibrium reconstructions in real time and use them to constrain the DCON stability code as part of a disruption avoidance scheme. A first effort presented here replaces computationally expensive modules, such as the fast ion pressure profile calculation, with a simplified model. We show with a DIII-D database analysis that we can achieve reasonable predictions for selected applications by modeling the fast ion pressure profile and determining the fit parameters as functions of easily measured quantities including neutron rate and electron temperature on axis. Secondly, we present a strategy for treating Thomson scattering and Charge Exchange Recombination data to automatically form constraints for a kinetic equilibrium reconstruction, a process that historically was performed by hand. Work supported by US DOE DE-AC02-09CH11466 and DE-FC02-04ER54698.
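Fitting a simplified surrogate "as functions of easily measured quantities" can be sketched as a log-linear regression. Below, a power-law model for the fast-ion pressure in terms of neutron rate and on-axis electron temperature is fitted by least squares; the functional form, exponents, and synthetic database are all assumptions for illustration, not the DIII-D model.

```python
import numpy as np

# Hedged sketch (illustrative, not the actual DIII-D surrogate): fit a
# power law  p_fast ~ c * Sn**a * Te**b  to a discharge database via
# log-linear least squares, using easily measured quantities.
rng = np.random.default_rng(2)
n = 500
Sn = rng.uniform(1e14, 1e16, n)      # neutron rate (arb. units, synthetic)
Te = rng.uniform(1.0, 6.0, n)        # on-axis electron temperature (keV, synthetic)
p_fast = 3.0e-8 * Sn**0.6 * Te**0.9 * rng.lognormal(0.0, 0.05, n)

# Taking logs makes the model linear in (log c, a, b).
X = np.column_stack([np.ones(n), np.log(Sn), np.log(Te)])
coef, *_ = np.linalg.lstsq(X, np.log(p_fast), rcond=None)
print(coef[1], coef[2])              # exponents recovered near 0.6 and 0.9
```

Because evaluating such a surrogate is a handful of arithmetic operations, it fits comfortably inside a real-time reconstruction cycle, which is the point of replacing the expensive module.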
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use a complex modulus approach to model the viscoelastic material, and an additional iterative approach, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and could be used for future research.
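The LQR design step can be illustrated on a single vibration mode treated as a lightly damped oscillator. The sketch below solves the continuous algebraic Riccati equation for that toy model; the modal frequency, damping, and weights are assumed for illustration and have no connection to the paper's finite element model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hedged sketch (a generic LQR design, not the paper's model): one mode of the
# damped plate as a lightly damped oscillator; the piezoelectric constraining
# layer acts as the control input.
wn, zeta = 2 * np.pi * 40.0, 0.01            # modal frequency and damping (assumed)
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])                 # normalized actuator influence
Q = np.diag([wn**2, 1.0])                    # penalize modal energy
R = np.array([[1e-4]])                       # control effort weight

P = solve_continuous_are(A, B, Q, R)         # Riccati solution
K = np.linalg.solve(R, B.T @ P)              # optimal state feedback u = -K x
cl_eigs = np.linalg.eigvals(A - B @ K)
print(K, cl_eigs.real)                       # closed-loop poles pushed well left
```

In the self-sensing configuration, the state fed back to u = -Kx is reconstructed from the same piezoelectric layer's sensing signal rather than from a separate sensor.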
The CCAT-prime Extreme Field-of-View Submillimeter Telescope on Cerro Chajnantor
NASA Astrophysics Data System (ADS)
Koopman, Brian; Bertoldi, Frank; Chapman, Scott; Fich, Michel; Giovanelli, Riccardo; Haynes, Martha P.; Herter, Terry L.; Murray, Norman W.; Niemack, Michael D.; Riechers, Dominik; Schilke, Peter; Stacey, Gordon J.; Stutzki, Juergen; CCAT-prime Collaboration
2017-01-01
CCAT-prime is a six meter aperture off-axis submillimeter telescope that we plan to build at 5600m elevation on Cerro Chajnantor in Chile. The CCAT-prime optics are based on a cross-Dragone design with high throughput and a wide field-of-view optimized to increase the mapping speed of next generation cosmic microwave background (CMB) observations. These characteristics make CCAT-prime an excellent platform for a wide range of next generation millimeter and submillimeter science goals, and a potential platform for CMB stage-IV measurements. Here we present the telescope design for CCAT-prime and review the science goals. Taking advantage of the high elevation site, the first generation instrument for CCAT-prime will measure seven different frequency bands from 350 μm to 3 mm. These seven bands will enable precise measurements of the Sunyaev-Zel'dovich effects (SZE) by separating contributions from CMB, thermal SZE, kinetic SZE, bright submm galaxies, and radio sources with a goal of extracting the peculiar velocities from a large number of galaxy clusters. Additional science priorities for CCAT-prime include: Galactic Ecology studies of the dynamic interstellar medium by mapping the fine structure lines [CI], [CII] and [NII] as well as high-excitation CO lines at the shortest wavelength bands; high redshift intensity mapping of [CII] emission from star-forming galaxies that likely dominates cosmic reionization at z~5-9 to probe the Epoch of Reionization; and next generation CMB polarization measurements to constrain inflation and cosmological models. The CCAT-prime facility will further our understanding of astrophysical processes from moments after the Big Bang to the present-day evolution of the Milky Way.
Public health ethics. Public justification and public trust.
Childress, J F; Bernheim, R Gaare
2008-02-01
Viewing public health as a political and social undertaking as well as a goal of this activity, the authors develop some key elements in a framework for public health ethics, with particular attention to the formation of public health policies and to decisions by public health officials that are not fully determined by established public policies. They concentrate on ways to approach ethical conflicts about public health interventions. These conflicts arise because, in addition to the value of public health, societies have a wide range of other values that sometimes constrain the selection of means to achieve public health goals. The authors analyze three approaches for resolving these conflicts (absolutist, contextualist, and presumptivist), argue for the superiority of the presumptivist approach, and briefly explicate five conditions for rebutting presumptions in a process of public justification. In a liberal, pluralistic, democratic society, a presumptivist approach that engages the public in the context of a variety of relationships can provide a foundation for public trust, which is essential to public health as a political and social practice as well as to achieving public health goals.
Vocational Psychology: Agency, Equity, and Well-Being.
Brown, Steven D; Lent, Robert W
2016-01-01
The present review organizes the vocational psychology literature published between 2007 and 2014 into three overarching themes: Promoting (a) agency in career development, (b) equity in the work force, and (c) well-being in work and educational settings. Research on career adaptability, self-efficacy beliefs, and work volition is reviewed in the agency section, with the goal of delineating variables that promote or constrain the exercise of personal agency in academic and occupational pursuits. The equity theme covers research on social class and race/ethnicity in career development; entry and retention of women and people of color in science, technology, engineering, and math (STEM) fields; and the career service needs of survivors of domestic violence and of criminal offenders. The goal was to explore how greater equity in the work force could be promoted for these groups. In the well-being section, we review research on hedonic (work, educational, and life satisfaction) and eudaimonic (career calling, meaning, engagement, and commitment) variables, with the goal of understanding how well-being might be promoted at school and at work. Future research needs related to each theme are also discussed.
Mapping the Solar Wind from its Source Region into the Outer Corona
NASA Technical Reports Server (NTRS)
Esser, Ruth
1997-01-01
Knowledge of the radial variation of the plasma conditions in the coronal source region of the solar wind is essential to exploring coronal heating and solar wind acceleration mechanisms. The goal of the proposal was to determine as many plasma parameters in the solar wind acceleration region and beyond as possible by coordinating different observational techniques, such as Interplanetary Scintillation Observations, spectral line intensity observations, polarization brightness measurements and X-ray observations. The inferred plasma parameters were then used to constrain solar wind models.
Identity-Based Motivation: Constraints and Opportunities in Consumer Research
Shavitt, Sharon; Torelli, Carlos J.; Wong, Jimmy
2009-01-01
This commentary underscores the integrative nature of the identity-based motivation model (Oyserman, 2009). We situate the model within existing literatures in psychology and consumer behavior, and illustrate its novel elements with research examples. Special attention is devoted to (1) how product- and brand-based affordances constrain identity-based motivation processes and (2) the mindsets and action tendencies that can be triggered by specific cultural identities in pursuit of consumer goals. Future opportunities are suggested for researching the antecedents of product meanings and relevant identities. PMID:20161045
Strategies in the development of vaccines to prevent infections with group A streptococcus
Good, Michael F; Batzloff, Michael; Pandey, Manisha
2013-01-01
There has long been interest and demand for the development of a vaccine to prevent infections caused by the Gram-positive organism group A streptococcus. Despite numerous efforts utilizing advanced approaches such as genomics, proteomics and bio-informatics, there is currently no vaccine. Here we review various strategies employed to achieve this goal. We also discuss the approach that we have pursued, a non-host reactive, conformationally constrained minimal B cell epitope from within the C-repeat region of M-protein, and the potential limitations in moving forward. PMID:23863455
Analysis of and Techniques for Adaptive Equalization for Underwater Acoustic Communication
2011-09-01
(Only fragments of this record's text were indexed.) The recoverable portion concerns array processing that constrains the signal covariance matrix to be realizable when the received signal is a sum of narrowband plane waves; a key part of the proof in [48] uses the Poincare separation theorem to characterize the corresponding eigenvectors.
Astromaterials Research Office (KR) Overview
NASA Technical Reports Server (NTRS)
Draper, David S.
2014-01-01
The fundamental goal of our research is to understand the origin and evolution of the solar system, particularly the terrestrial, "rocky" bodies. Our research involves analysis of, and experiments on, astromaterials in order to understand their nature, sources, and processes of formation. Our state-of-the-art analytical laboratories include four electron microbeam laboratories for mineral analysis, four spectroscopy laboratories for chemical and mineralogical analysis, and four mass spectrometry laboratories for isotopic analysis. Other facilities include the experimental impact laboratory and both 1-atm gas mixing and high-pressure experimental petrology laboratories. Recent research has emphasized a diverse range of topics, including: Study of the solar system's primitive materials, such as carbonaceous chondrites and interplanetary dust; Study of early solar system chronology using short-lived radioisotopes and early nebular processes through detailed geochemical and isotopic characterizations; Study of large-scale planetary differentiation and evolution via siderophile and incompatible trace element partitioning, magma ocean crystallization simulations, and isotopic systematics; Study of the petrogenesis of Martian meteorites through petrographic, isotopic, chemical, and experimental melting and crystallization studies; Interpretation of remote sensing data, especially from current robotic lunar and Mars missions, and study of terrestrial analog materials; Study of the role of organic geochemical processes in the evolution of astromaterials and the extent to which they constrain the potential for habitability and the origin of life.
Cosmology from galaxy clusters as observed by Planck
NASA Astrophysics Data System (ADS)
Pierpaoli, Elena
We propose to use current all-sky data on galaxy clusters in the radio/infrared bands in order to constrain cosmology. This will be achieved by performing parameter estimation with number counts and power spectra for galaxy clusters detected by Planck through their Sunyaev-Zel'dovich signature. The ultimate goal of this proposal is to use clusters as tracers of matter density in order to provide information about fundamental properties of our Universe, such as the law of gravity on large scales, early Universe phenomena, structure formation, and the nature of dark matter and dark energy. We will leverage the availability of a larger and deeper cluster catalog from the latest Planck data release in order to include, for the first time, the cluster power spectrum in the cosmological parameter determination analysis. Furthermore, we will extend the cluster analysis to cosmological models not yet investigated by the Planck collaboration. These aims require a diverse set of activities: characterizing the clusters' selection function; choosing the cosmological cluster sample to be used for parameter estimation; constructing mock samples in the various cosmological models, with correct correlation properties, in order to produce reliable selection functions and noise covariance matrices; and finally constructing the appropriate likelihood for number counts and power spectra. We plan to make the final code available to the community and compatible with the most widely used cosmological parameter estimation code. This research makes use of data from the NASA satellites Planck and, less directly, Chandra, in order to constrain cosmology, and therefore perfectly fits the NASA objectives and the specifications of this solicitation.
The BepiColombo MORE gravimetry and rotation experiments with the ORBIT14 software
NASA Astrophysics Data System (ADS)
Cicalò, S.; Schettino, G.; Di Ruzza, S.; Alessi, E. M.; Tommei, G.; Milani, A.
2016-04-01
The BepiColombo mission to Mercury is an ESA/JAXA cornerstone mission, consisting of two spacecraft in orbit around Mercury addressing several scientific issues. One spacecraft is the Mercury Planetary Orbiter, with full instrumentation to perform radio science experiments. Very precise radio tracking from Earth, on-board accelerometer and optical measurements will provide large data sets. From these it will be possible to study the global gravity field of Mercury and its tidal variations, its rotation state, and the orbit of its centre of mass. With the gravity field and rotation state, it is possible to constrain the internal structure of the planet. With the orbit of Mercury, it is possible to constrain relativistic theories of gravitation. In order to assess whether all the scientific goals are achievable with the required level of accuracy, full-cycle numerical simulations of the radio science experiment have been performed. Simulated tracking, accelerometer and optical camera data have been generated, and a long list of variables including the spacecraft initial conditions, the accelerometer calibrations and the gravity field coefficients has been determined by a least-squares fit. The simulation results are encouraging: the experiments are feasible at the required level of accuracy provided that some critical terms in the accelerometer error are moderated. We will show that BepiColombo will be able to provide at least an order of magnitude improvement in the knowledge of the Love number k2, libration amplitudes and obliquity, along with a gravity field determination up to degree 25 with a signal-to-noise ratio of 10.
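The least-squares determination mentioned above can be sketched in miniature. The snippet below is a hypothetical two-parameter illustration (not the ORBIT14 code): it accumulates the weighted normal equations J^T W J dx = J^T W r from residuals and partial derivatives, then solves the 2x2 system for the parameter corrections.

```python
def lsq_fit(jac, resid, weights):
    """Solve the weighted normal equations J^T W J dx = J^T W r for a
    two-parameter correction vector (minimal 2x2 sketch of one
    least-squares iteration; inputs are hypothetical)."""
    # Accumulate A = J^T W J and b = J^T W r row by row.
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for row, r, w in zip(jac, resid, weights):
        for i in range(2):
            b[i] += w * row[i] * r
            for j in range(2):
                A[i][j] += w * row[i] * row[j]
    # Cramer's rule for the 2x2 system.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```

With residuals generated from a known correction, the fit recovers it exactly, which is a convenient self-check when wiring up a larger estimator.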
NASA Technical Reports Server (NTRS)
O'D. Alexander, Conel
2003-01-01
The discovery of presolar grains in meteorites is one of the most exciting recent developments in meteoritics. Six types of presolar grain have been discovered: diamond, SiC, graphite, Si3N4, Al2O3 and MgAl2O4. These grains have been identified as presolar because their isotopic compositions are very different from those of Solar System materials. Comparison of their isotopic compositions with astronomical observations and theoretical models indicates most of the grains formed in the envelopes of highly evolved stars. They are, therefore, a new source of information with which to test astrophysical models of the evolution of these stars. In fact, because several elements can often be measured in the same grain, including elements that are not measurable spectroscopically in stars, the grain data provide some very stringent constraints for these models. Our primary goal is to create large, unbiased, multi-isotope databases of single presolar SiC, Si3N4, oxide and graphite grains in meteorites, as well as any new presolar grain types that are identified in the future. These will be used to: (i) test stellar and nucleosynthetic models, (ii) constrain the galactic chemical evolution (GCE) paths of the isotopes of Si, Ti, O and Mg, (iii) establish how many stellar sources contributed to the Solar System, (iv) constrain relative dust production rates of various stellar types and (v) assess how representative of galactic dust production the record in meteorites is. The primary tool for this project is a highly automated grain analysis system on the Carnegie 6f ion probe.
NASA Technical Reports Server (NTRS)
O'D.Alexander, Conel
2004-01-01
The discovery of presolar grains in meteorites is one of the most exciting recent developments in meteoritics. Six types of presolar grain have been discovered: diamond, SiC, graphite, Si3N4, Al2O3 and MgAl2O4. These grains have been identified as presolar because their isotopic compositions are very different from those of Solar System materials. Comparison of their isotopic compositions with astronomical observations and theoretical models indicates most of the grains formed in the envelopes of highly evolved stars. They are, therefore, a new source of information with which to test astrophysical models of the evolution of these stars. In fact, because several elements can often be measured in the same grain, including elements that are not measurable spectroscopically in stars, the grain data provide some very stringent constraints for these models. Our primary goal is to create large, unbiased, multi-isotope databases of single presolar SiC, Si3N4, oxide and graphite grains in meteorites, as well as any new presolar grain types that are identified in the future. These will be used to: (i) test stellar and nucleosynthetic models, (ii) constrain the galactic chemical evolution (GCE) paths of the isotopes of Si, Ti, O and Mg, (iii) establish how many stellar sources contributed to the Solar System, (iv) constrain relative dust production rates of various stellar types and (v) assess how representative of galactic dust production the record in meteorites is. The primary tool for this project is a highly automated grain analysis system we have developed for the Carnegie 6f ion probe.
NASA Astrophysics Data System (ADS)
Griscom, Bronson W.; Adams, Justin; Ellis, Peter W.; Houghton, Richard A.; Lomax, Guy; Miteva, Daniela A.; Schlesinger, William H.; Shoch, David; Siikamäki, Juha V.; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T.; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R.; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M.; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E.; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph
2017-10-01
Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify “natural climate solutions” (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS—when constrained by food security, fiber security, and biodiversity conservation—is 23.8 petagrams of CO2 equivalent (PgCO2e) y‑1 (95% CI 20.3–37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y‑1) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e‑1 by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2‑1. Most NCS actions—if effectively implemented—also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change.
Griscom, Bronson W; Adams, Justin; Ellis, Peter W; Houghton, Richard A; Lomax, Guy; Miteva, Daniela A; Schlesinger, William H; Shoch, David; Siikamäki, Juha V; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph
2017-10-31
Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify "natural climate solutions" (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS, when constrained by food security, fiber security, and biodiversity conservation, is 23.8 petagrams of CO2 equivalent (PgCO2e) y−1 (95% CI 20.3-37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y−1) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e−1 by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2−1. Most NCS actions, if effectively implemented, also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change.
Adams, Justin; Ellis, Peter W.; Houghton, Richard A.; Lomax, Guy; Miteva, Daniela A.; Schlesinger, William H.; Shoch, David; Siikamäki, Juha V.; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T.; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R.; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M.; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E.; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph
2017-01-01
Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify “natural climate solutions” (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS—when constrained by food security, fiber security, and biodiversity conservation—is 23.8 petagrams of CO2 equivalent (PgCO2e) y−1 (95% CI 20.3–37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y−1) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e−1 by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2−1. Most NCS actions—if effectively implemented—also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change. PMID:29078344
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
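The chance-constraint idea can be made concrete for the Gaussian case. The sketch below illustrates the standard deterministic tightening (not the paper's solver): the constraint P(a·x ≤ b) ≥ 1 − δ holds when the mean of a·x is backed off from b by Φ⁻¹(1 − δ) standard deviations.

```python
from math import sqrt
from statistics import NormalDist

def tighten(a, b, mean, cov, delta):
    """Margin of the Gaussian chance constraint P(a.x <= b) >= 1 - delta
    after deterministic tightening; a non-negative margin means the
    constraint holds for the mean with the required probability."""
    n = len(a)
    ax = sum(a[i] * mean[i] for i in range(n))
    # Variance of a.x under x ~ N(mean, cov).
    var = sum(a[i] * cov[i][j] * a[j] for i in range(n) for j in range(n))
    backoff = NormalDist().inv_cdf(1.0 - delta) * sqrt(var)
    return b - ax - backoff
```

With δ = 0.05 the back-off is about 1.645 standard deviations of a·x, so the planner can treat the hardened bound as an ordinary deterministic constraint.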
The NEWS Water Cycle Climatology
NASA Astrophysics Data System (ADS)
Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.
2012-12-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
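The closure step described above can be illustrated with a single-constraint example. The function below is a minimal sketch (not the NEWS optimization code; the flux values are hypothetical): it spreads the budget residual across terms in proportion to their error variances, which is the minimum-variance adjustment for one linear closure constraint such as P − ET − R − dS = 0.

```python
def close_budget(fluxes, errors, signs):
    """Adjust flux estimates so that sum(signs * fluxes) = 0 exactly,
    distributing the residual in proportion to each term's error
    variance (Lagrange-multiplier solution for one linear constraint)."""
    residual = sum(s * f for s, f in zip(signs, fluxes))
    total_var = sum((s * e) ** 2 for s, e in zip(signs, errors))
    return [f - s * e * e * residual / total_var
            for f, s, e in zip(fluxes, signs, errors)]
```

For hypothetical monthly fluxes P = 100 ± 10, ET = 60 ± 5, R = 35 ± 3, dS = 0 ± 2 (mm), the adjusted budget closes exactly and the best-constrained term is moved least, mirroring how well-observed fluxes are preserved in the optimization.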
The NEWS Water Cycle Climatology
NASA Technical Reports Server (NTRS)
Rodell, Matthew; Beaudoing, Hiroko Kato; L'Ecuyer, Tristan; William, Olson
2012-01-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT
ERIC Educational Resources Information Center
Cheng, Ying; Chang, Hua-Hua; Yi, Qing
2007-01-01
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
NASA Astrophysics Data System (ADS)
iMOST Team; Herd, C. D. K.; Ammannito, E.; Anand, M.; Debaille, V.; Hallis, L. J.; McCubbin, F. M.; Schmitz, N.; Usui, T.; Weiss, B. P.; Altieri, F.; Amelin, Y.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Carrier, B. L.; Czaja, A. D.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Harrington, A. D.; Hausrath, E. M.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Van Kranendonk, M. J.; Wadhwa, M.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.
2018-04-01
We present the main sample types from any potential Mars Sample Return landing site that would be required to constrain the geological and geophysical processes on Mars, including the origin and nature of its crust, mantle, and core.
Increasing Patient Safety by Closing the Sterile Production Gap-Part 1. Introduction.
Agalloco, James P
2017-01-01
Terminal sterilization is considered the preferred means for the production of sterile drug products because it affords enhanced safety for the patient as the formulation is filled into its final container, sealed, and sterilized. Despite the obvious patient benefits, the use of terminal sterilization is artificially constrained by unreasonable expectations for the minimum time-temperature process to be used. The core misunderstanding with terminal sterilization is a fixation that destruction of a high population of a resistant biological indicator is required. The origin of this misconception is unclear, but it has resulted in sterilization conditions that are extremely harsh (15 min at 121 °C, or F0 > 8 min), which limit the use of terminal sterilization to extremely heat-stable formulations. These articles outline the artificial nature of the process constraints and describe a scientifically sound means to expand the use of terminal sterilization by identifying the correct process goal: destruction of the bioburden present in the container prior to sterilization. Recognition that the true intention is bioburden destruction in routine products allows for the use of reduced conditions (lower temperatures, shorter process dwell, or both) without added patient risk. By focusing attention on the correct process target, lower time-temperature conditions can be used to expand the use of terminal sterilization to products unable to withstand the harsh conditions that have been mistakenly applied. The first article provides the background and describes the benefits to patient, producer, and regulator. The second article includes validation and operational advice that can be used in the implementation. LAY ABSTRACT: Terminal sterilization is considered the preferred means for the production of sterile drug products because it affords enhanced safety for the patient as the formulation is filled into its final container, sealed, and sterilized.
Despite the obvious patient benefits, the use of terminal sterilization is artificially constrained by unreasonable expectations for the minimum time-temperature process to be used. These articles outline the artificial nature of the process constraints and describe a scientifically sound means to expand the use of terminal sterilization by identifying the correct process goal: destruction of the bioburden present in the container prior to sterilization. By focusing attention on the correct process target, lower time-temperature conditions can be used to expand the use of terminal sterilization to products unable to withstand the harsh conditions that have been mistakenly applied. The first article provides the background and describes the benefits to patient, producer, and regulator. The second article includes validation and operational advice that can be used in the implementation. © PDA, Inc. 2017.
Constraining radon backgrounds in LZ
NASA Astrophysics Data System (ADS)
Miller, E. H.; Busenitz, J.; Edberg, T. K.; Ghag, C.; Hall, C.; Leonard, R.; Lesko, K.; Liu, X.; Meng, Y.; Piepke, A.; Schnee, R. W.
2018-01-01
The LZ dark matter detector, like many other rare-event searches, will suffer from backgrounds due to the radioactive decay of radon daughters. In order to achieve its science goals, the concentration of radon within the xenon should not exceed 2 µBq/kg, or 20 mBq total within its 10 tonnes. The LZ collaboration is in the midst of a program to screen all significant components in contact with the xenon. The four institutions involved in this effort have begun sharing two cross-calibration sources to ensure consistent measurement results across multiple distinct devices. We present here five preliminary screening results, some mitigation strategies that will reduce the amount of radon produced by the most problematic components, and a summary of the current estimate of radon emanation throughout the detector. This best estimate totals < 17.3 mBq, sufficiently low to meet the detector's science goals.
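The budget arithmetic quoted above can be checked directly: 2 µBq/kg over 10 tonnes of xenon is 20 mBq, and the stated best estimate of < 17.3 mBq leaves some headroom. A trivial sketch, using only the numbers given in the abstract:

```python
def radon_budget_mbq(limit_ubq_per_kg, xenon_tonnes):
    """Total allowed radon activity in mBq: concentration limit in
    uBq/kg times the xenon mass in kg, converted from uBq to mBq."""
    mass_kg = xenon_tonnes * 1000.0
    return limit_ubq_per_kg * mass_kg / 1000.0

budget = radon_budget_mbq(2.0, 10.0)  # 20.0 mBq total, as quoted above
headroom = budget - 17.3              # margin over the current best estimate
```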
Predicting Great Lakes fish yields: tools and constraints
Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.
1987-01-01
Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.
Background model for the Majorana Demonstrator
Cuesta, C.; Abgrall, N.; Aguayo, E.; ...
2015-01-01
The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.
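A background model of this kind ultimately reduces to summing per-component contributions: assayed activity times the simulated probability of an energy deposit in the region of interest times the fraction surviving analysis cuts. The sketch below uses entirely hypothetical component values (not Demonstrator assay results) to show the bookkeeping.

```python
def projected_rate(components):
    """Total projected background rate in cnt/(ROI-t-y). Each component
    is (decays per tonne-year from assayed activity, probability of an
    ROI energy deposit in simulation, fraction surviving analysis cuts)."""
    return sum(decays * roi_prob * survive
               for decays, roi_prob, survive in components)

# Hypothetical components, for illustration only:
components = [
    (1.0e4, 1.0e-4, 0.5),  # e.g. a Th-232 chain contribution in cabling
    (2.0e3, 2.0e-4, 0.2),  # e.g. a U-238 chain contribution in mounts
]
```

Comparing the summed rate against the 1 cnt/(ROI-t-y) goal is then a single inequality, which makes it easy to see which component dominates the budget.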
Just allocation and team loyalty: a new virtue ethic for emergency medicine
Girod, J; Beckman, A
2005-01-01
When traditional virtue ethics is applied to clinical medicine, it often claims as its goal the good of the individual patient, and focuses on the dyadic relationship between one physician and one patient. An alternative model of virtue ethics, more appropriate to the practice of emergency medicine, will be outlined by this paper. This alternative model is based on the assumption that the appropriate goal of the practice of emergency medicine is a team approach to the medical wellbeing of individual patients, constrained by the wellbeing of the patient population served by a particular emergency department. By defining boundaries and using the key virtues of justice and team loyalty, this model fits emergency practice well and gives care givers the conceptual clarity to apply this model to various conflicts both within the department and with those outside the department. PMID:16199595
The Neuroscience of Consumer Choice
Hsu, Ming; Yoon, Carolyn
2015-01-01
We review progress and challenges relating to scientific and applied goals of the nascent field of consumer neuroscience. Scientifically, substantial progress has been made in understanding the neurobiology of choice processes. Further advances, however, require researchers to begin clarifying the set of developmental and cognitive processes that shape and constrain choices. First, despite the centrality of preferences in theories of consumer choice, we still know little about where preferences come from and the underlying developmental processes. Second, the role of attention and memory processes in consumer choice remains poorly understood, despite importance ascribed to them in interpreting data from the field. The applied goal of consumer neuroscience concerns our ability to translate this understanding to augment prediction at the population level. Although the use of neuroscientific data for market-level predictions remains speculative, there is growing evidence of superiority in specific cases over existing market research techniques. PMID:26665152
Approximate error conjugate gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
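A minimal sketch of the idea follows: a Fletcher-Reeves-style conjugate-gradient loop in which the error and its gradient are evaluated on a random subset of rays each iteration, with an exact line minimum along the conjugate direction. This is an illustration of the technique as described, not the patented implementation, and the ray data are hypothetical.

```python
import random

def approx_error_cg(rays, data, x0, n_sub, iters=100, seed=0):
    """Minimize sum_i (rays[i] . x - data[i])**2 using conjugate-gradient
    steps whose error/gradient come from a random subset of n_sub rays."""
    rng = random.Random(seed)
    x, d, g_prev = list(x0), None, None
    for _ in range(iters):
        idx = rng.sample(range(len(rays)), min(n_sub, len(rays)))
        g = [0.0] * len(x)  # gradient of the subset error at x
        for i in idx:
            r = sum(a * xi for a, xi in zip(rays[i], x)) - data[i]
            for j, a in enumerate(rays[i]):
                g[j] += 2.0 * r * a
        if d is None:
            d = [-gj for gj in g]  # first step: steepest descent
        else:
            # Fletcher-Reeves update of the conjugate direction.
            beta = sum(gj * gj for gj in g) / max(sum(p * p for p in g_prev), 1e-300)
            d = [-gj + beta * dj for gj, dj in zip(g, d)]
        # Exact line minimum of the quadratic subset error along d.
        num = -0.5 * sum(gj * dj for gj, dj in zip(g, d))
        den = sum(sum(a * dj for a, dj in zip(rays[i], d)) ** 2 for i in idx)
        if den <= 0.0:
            break  # zero direction: converged on this subset
        x = [xi + (num / den) * dj for xi, dj in zip(x, d)]
        g_prev = g
    return x
```

With n_sub equal to the full ray count the loop reduces to ordinary conjugate-gradient minimization; smaller subsets trade per-iteration cost against noisier search directions.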
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
NASA Astrophysics Data System (ADS)
Boren, E. J.; Boschetti, L.; Johnson, D.
2017-12-01
Water plays a critical role in all plant physiological processes, including transpiration, photosynthesis, nutrient transport, and maintenance of proper plant cell functions. Deficits in water content cause drought-induced stress conditions, such as constrained plant growth and cellular metabolism, while overabundance of water causes anoxic conditions that limit plant physiological processes and promote disease. Vegetation water content maps can provide agricultural producers key knowledge for improving production capacity and resiliency in agricultural systems while facilitating the ability to pinpoint, monitor, and resolve water scarcity issues. Radiative transfer model (RTM) inversion has been successfully applied to remotely sensed data to retrieve biophysical and canopy parameter estimates, including water content. The successful launches of the Landsat 8 Operational Land Imager (OLI) in 2013 and the Sentinel-2A Multispectral Instrument (MSI) in 2015, followed by Sentinel-2B in 2017, together with these missions' systematic acquisition schedules and free data distribution policies, provide the opportunity for water content estimation at a spatial and temporal scale that can meet the demands of potential operational users: combined, these polar-orbiting systems provide 10 m to 30 m multi-spectral global coverage up to every 3 days. The goal of the present research is to prototype the generation of a cropland canopy water content product, obtained from the newly developed Landsat 8 and Sentinel 2 atmospherically corrected HLS product, through the inversion of the leaf and canopy model PROSAIL5B. We assess the impact of a novel spatial and temporal stratification, in which some parameters of the model are constrained by crop type and phenological phase, based on ancillary biophysical data collected from various crop species grown in a controlled setting under different water stress conditions.
Canopy-level data, collected coincidently with satellite overpasses during four summer field campaigns in northern Idaho (2014 to 2017), are used to validate the results of the model inversion.
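As a hedged sketch of how an RTM inversion of this kind proceeds (the real work inverts PROSAIL5B against HLS reflectances; the two-band model below is invented purely for illustration), a look-up-table inversion searches a gridded parameter range, which stratification by crop type and phenology would narrow, for the simulated spectrum closest to the observation:

```python
def toy_rtm(cw):
    # Invented two-band "radiative transfer model": NIR and SWIR
    # reflectances that decrease with canopy water content cw (g/cm^2).
    # A stand-in for PROSAIL5B, for illustration only.
    return 0.45 - 0.10 * cw, 0.30 - 0.25 * cw

def invert(observed, cw_min=0.0, cw_max=1.0, n=1001):
    # Look-up-table inversion: return the cw whose simulated spectrum
    # best matches the observation in a least-squares sense.  Stratifying
    # by crop type and phenological phase corresponds to tightening
    # the prior range [cw_min, cw_max].
    best_cw, best_cost = cw_min, float("inf")
    for i in range(n):
        cw = cw_min + (cw_max - cw_min) * i / (n - 1)
        sim = toy_rtm(cw)
        cost = sum((s - o) ** 2 for s, o in zip(sim, observed))
        if cost < best_cost:
            best_cw, best_cost = cw, cost
    return best_cw

obs = toy_rtm(0.37)      # synthetic "observation" with known water content
cw_hat = invert(obs)
```

The tightened search range is where ancillary crop-type and phenology data enter: a narrower prior both speeds the search and reduces ill-posedness of the inversion.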
Overview of the 2009 and 2011 Sayarim Infrasound Calibration Experiments
NASA Astrophysics Data System (ADS)
Fee, D.; Waxler, R.; Drob, D.; Gitterman, Y.; Given, J.
2012-04-01
The establishment of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has stimulated infrasound research and development. However, as the network comes closer to completion there exists a lack of large, well-constrained sources with which to test the network and its capabilities. Also, significant uncertainties exist in long-range acoustic propagation due to a dynamic, difficult-to-characterize atmosphere, particularly the thermosphere. In 2009 and 2011 three large-scale infrasound calibration experiments were performed in Europe, the Middle East, Africa, and Asia. The goal of the calibration experiments was to test the IMS infrasound network and validate atmospheric and propagation models with large, well-constrained infrasound sources. This presentation provides an overview of the calibration experiments, including deployment, atmospheric conditions during the experiments, explosion characterization, infrasonic signal detection and identification, and a discussion of the results and implications. Each calibration experiment consisted of a single surface detonation of explosives with nominal weights of 82, 10.24, and 102.08 tons on 26 August 2009, 24 January 2011, and 26 January 2011, respectively. These explosions were designed and conducted by the Geophysical Institute of Israel at the Sayarim Military Range, Israel, and produced significant infrasound detected by numerous permanent and temporary infrasound arrays in the region. The 2009 experiment was performed in the summer to take advantage of the westerly stratospheric winds. Infrasonic arrivals were detected by both IMS and temporary arrays deployed to the north and west of the source, including clear stratospheric arrivals and thermospheric arrivals with low celerities. The 2011 experiment was performed during the winter, when strong easterly stratospheric winds dominated in addition to a strong tropospheric jet (the jet stream).
These wind jets allowed detection out to 6500 km, in addition to multiple tropospheric, stratospheric, and thermospheric arrivals at arrays deployed to the east. These experiments represented a considerable, successful collaboration between the CTBTO and numerous other groups and will provide a rich ground-truth dataset for detailed infrasound studies in the future.
NASA Astrophysics Data System (ADS)
Lee, Dae Young
The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of the design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and also the operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances.
To address this issue, for the ADCS operations, controllers based on Model Predictive Control, which is effective for constraint handling, were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.
Experimental and modeling studies of small molecule chemistry in expanding spherical flames
NASA Astrophysics Data System (ADS)
Santner, Jeffrey
Accurate models of flame chemistry are required in order to predict emissions and flame properties, such that clean, efficient engines can be designed more easily. There are three primary methods used to improve such combustion chemistry models - theoretical reaction rate calculations, elementary reaction rate experiments, and combustion system experiments. This work contributes to model improvement through the third method - measurements and analysis of the laminar burning velocity at constraining conditions. Modern combustion systems operate at high pressure with strong exhaust gas dilution in order to improve efficiency and reduce emissions. Additionally, flames under these conditions are sensitized to elementary reaction rates such that measurements constrain modeling efforts. Measurement conditions of the present work operate within this intersection between applications and fundamental science. Experiments utilize a new pressure-release, heated spherical combustion chamber with a variety of fuels (high hydrogen content fuels, formaldehyde (via 1,3,5-trioxane), and C2 fuels) at pressures from 0.5-25 atm, often with dilution by water vapor or carbon dioxide to flame temperatures below 2000 K. The constraining ability of these measurements depends on their uncertainty. Thus, the present work includes a novel analytical estimate of the effects of thermal radiative heat loss on burning velocity measurements in spherical flames. For 1,3,5-trioxane experiments, global measurements are sufficiently sensitive to elementary reaction rates that optimization techniques are employed to indirectly measure the reaction rates of HCO consumption. Besides the influence of flame chemistry on propagation, this work also explores the chemistry involved in production of nitric oxide, a harmful pollutant, within flames. We find significant differences among available chemistry models, both in mechanistic structure and quantitative reaction rates.
There is a lack of well-defined measurements of nitric oxide formation at high temperatures, contributing to disagreement between chemical models. This work accomplishes several goals. It identifies disagreements in pollutant formation chemistry. It creates a novel database of burning velocity measurements at relevant, sensitive conditions. It presents a simple, conservative estimate of radiation-induced measurement uncertainty in spherical flames. Finally, it utilizes systems-level flame experiments to indirectly measure elementary reaction rates.
NASA Astrophysics Data System (ADS)
Barantsrva, O.
2014-12-01
We present a preliminary analysis of the crustal and upper mantle structure for off-shore regions in the North Atlantic and Arctic oceans. These regions have anomalous oceanic lithosphere: the upper mantle of the North Atlantic ocean is affected by the Iceland plume, while the Arctic ocean has some of the slowest spreading rates. Our specific goal is to constrain the density structure of the upper mantle in order to understand the links between the deep lithosphere dynamics, ocean spreading, ocean floor bathymetry, heat flow and structure of the oceanic lithosphere in the regions where classical models of evolution of the oceanic lithosphere may not be valid. The major focus is on the oceanic lithosphere, but the Arctic shelves with sufficient data coverage are also included in the analysis. Our major interest is the density structure of the upper mantle, and the analysis is based on the interpretation of GOCE satellite gravity data. To separate gravity anomalies caused by subcrustal anomalous masses, the gravitational effect of water, crust and the deep mantle is removed from the observed gravity field. For bathymetry we use the global NOAA database ETOPO1. The crustal correction to gravity is based on two crustal models: (1) the global model CRUST1.0 (Laske, 2013) and, for comparison, (2) a regional seismic model EUNAseis (Artemieva and Thybo, 2013). The crustal density structure required for the crustal correction is constrained from Vp data. Previous studies have shown that a large range of density values corresponds to any Vp value. To overcome this problem and to reduce the uncertainty associated with the velocity-density conversion, we account for regional tectonic variations in the North Atlantic as constrained by numerous published seismic profiles and potential-field models across the Norwegian off-shore crust (e.g. Breivik et al., 2005, 2007), and apply different Vp-density conversions for different parts of the region.
We present preliminary results, which we use to examine factors that control variations in bathymetry, sedimentary and crustal thicknesses in these anomalous oceanic domains.
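The crustal correction step can be sketched with the simplest possible gravity model, the infinite Bouguer slab, g = 2*pi*G*delta_rho*h. The densities and thicknesses below are illustrative placeholders, not values from the study, which uses full crustal models rather than slab approximations:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_effect(thickness_m, delta_rho):
    # Infinite Bouguer slab: g = 2*pi*G*delta_rho*h, returned in mGal
    # (1 mGal = 1e-5 m/s^2).
    return 2 * math.pi * G * delta_rho * thickness_m / 1e-5

# Illustrative correction: strip the effect of a 3 km water column
# (water at 1030 kg/m^3 replacing reference rock at 2670 kg/m^3)
# before interpreting the residual as an upper-mantle signal.
observed = 120.0                              # hypothetical anomaly, mGal
water_corr = slab_effect(3000, 1030 - 2670)   # negative: mass deficit
mantle_residual = observed - water_corr
```

In practice each correction layer (water, sediments, crystalline crust) is computed from gridded thickness and density models, but the sign logic is the same: low-density layers contribute negative anomalies that must be added back to isolate the subcrustal field.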
The Amelia Bedelia effect: world knowledge and the goal bias in language acquisition.
Srinivasan, Mahesh; Barner, David
2013-09-01
How does world knowledge interact with syntax to constrain linguistic interpretation? We explored this question by testing children's acquisition of verbs like weed and water, which have opposite meanings despite occurring in the same syntactic frames. Whereas "weed the garden" treats "the garden" as a source, "water the garden" treats it as a goal. In five experiments, we asked how children learn these verbs. Previous theories predict that verbs which describe the transfer of an object with respect to its natural origin (e.g., "weed the garden") should receive source interpretations, whereas verbs that describe the transfer of an object with respect to something it is functionally related to (e.g., "water the garden") should receive goal interpretations. Therefore, acquiring world knowledge - about the natural origins and functional uses of objects - should be sufficient for differentiating between source and goal meanings. Experiments 1 and 2 casted doubt on this hypothesis, as 4- and 5-year-olds failed to use their world knowledge when interpreting these verbs and instead overextended goal interpretations. For example, children interpreted "weed the garden" to mean "put weeds onto a garden", even when they knew the natural origin of weeds. Experiment 3 tested children's interpretation of novel verbs and directly manipulated their access to relevant world knowledge. While younger children continued to exhibit a goal bias and failed to use world knowledge, older children generalized goal and source interpretations to novel verbs according to world knowledge. In Experiments 4 and 5, we confirmed that adults use world knowledge to guide their interpretation of novel verbs, but also showed that even adults prefer goal interpretations when they are made contextually plausible. We argue that children ultimately overcome a goal bias by learning to use their world knowledge to weigh the plausibility of events (e.g., of putting weeds into a garden). 
Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.
2013-12-01
Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies, by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The datasets, all in the public domain, include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow with 5 more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov Chain Monte Carlo technique, laterally-constrained regularized inversions, and deterministic inversions which include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others such as the Styx River and Nome surveys.
Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross section maps, and 3D voxel models, which have been treated consistently both in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models, both on small and large scales.
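The laterally-constrained idea can be sketched in miniature: couple neighbouring soundings with a smoothness penalty so noisy single-site estimates are pulled toward each other. The two-sounding, one-parameter problem below, minimising (m1 - y1)^2 + (m2 - y2)^2 + lam*(m1 - m2)^2, has a closed form; a real airborne EM inversion couples many multi-layer models along the flight line in exactly this spirit:

```python
def laterally_constrained(y1, y2, lam):
    # Minimise (m1 - y1)^2 + (m2 - y2)^2 + lam*(m1 - m2)^2.
    # Setting the gradient to zero gives m1 + m2 = y1 + y2 and
    # m1 - m2 = (y1 - y2) / (1 + 2*lam): the lateral constraint
    # shrinks the difference between neighbouring soundings while
    # preserving their mean.
    s = y1 + y2
    d = (y1 - y2) / (1 + 2 * lam)
    return (s + d) / 2, (s - d) / 2

# Two neighbouring (noisy) resistivity estimates, smoothed toward each other:
m1, m2 = laterally_constrained(10.0, 30.0, 0.5)
```

With lam = 0 the data are honoured exactly; as lam grows, both estimates approach the common mean, which is the trade-off the regularization weight controls.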
Assessing Potential Propulsion Breakthroughs
NASA Technical Reports Server (NTRS)
Millis, Marc G.
2005-01-01
The term, propulsion breakthrough, refers to concepts like propellantless space drives and faster-than-light travel, the kind of breakthroughs that would make interstellar exploration practical. Although no such breakthroughs appear imminent, a variety of investigations into these goals have begun. From 1996 to 2002, NASA supported the Breakthrough Propulsion Physics Project to examine physics in the context of breakthrough spaceflight. Three facets of these assessments are now reported: (1) predicting benefits, (2) selecting research, and (3) recent technical progress. Predicting benefits is challenging since the breakthroughs are still only notional concepts, but kinetic energy can serve as a basis for comparison. In terms of kinetic energy, a hypothetical space drive could require many orders of magnitude less energy than a rocket for journeys to our nearest neighboring star. Assessing research options is challenging when the goals are beyond known physics and when the implications of success are profound. To mitigate the challenges, a selection process is described where: (a) research tasks are constrained to only address the immediate unknowns, curious effects or critical issues, (b) reliability of assertions is more important than their implications, and (c) reviewers judge credibility rather than feasibility. The recent findings of a number of tasks, some selected using this process, are discussed. Of the 14 tasks included, six reached null conclusions, four remain unresolved, and four have opportunities for sequels. A dominant theme with the sequels is research about the properties of space, inertial frames, and the quantum vacuum.
The Convergence of Intelligences
NASA Astrophysics Data System (ADS)
Diederich, Joachim
Minsky (1985) argued an extraterrestrial intelligence may be similar to ours despite very different origins. ``Problem- solving'' offers evolutionary advantages and individuals who are part of a technical civilisation should have this capacity. On earth, the principles of problem-solving are the same for humans, some primates and machines based on Artificial Intelligence (AI) techniques. Intelligent systems use ``goals'' and ``sub-goals'' for problem-solving, with memories and representations of ``objects'' and ``sub-objects'' as well as knowledge of relations such as ``cause'' or ``difference.'' Some of these objects are generic and cannot easily be divided into parts. We must, therefore, assume that these objects and relations are universal, and a general property of intelligence. Minsky's arguments from 1985 are extended here. The last decade has seen the development of a general learning theory (``computational learning theory'' (CLT) or ``statistical learning theory'') which equally applies to humans, animals and machines. It is argued that basic learning laws will also apply to an evolved alien intelligence, and this includes limitations of what can be learned efficiently. An example from CLT is that the general learning problem for neural networks is intractable, i.e. it cannot be solved efficiently for all instances (it is ``NP-complete''). It is the objective of this paper to show that evolved intelligences will be constrained by general learning laws and will use task-decomposition for problem-solving. Since learning and problem-solving are core features of intelligence, it can be said that intelligences converge despite very different origins.
Ogbolu, Yolanda; Scrandis, Debra A; Fitzpatrick, Grace
2018-01-01
To examine chief nurse executives' perspectives on (1) the provision of culturally and linguistically appropriate services in hospitals and (2) the barriers and facilitators associated with the implementation of culturally and linguistically appropriate services. Hospitals continue to face challenges providing care to diverse patients. The uptake of standards related to culturally and linguistically appropriate services into clinical practice is sluggish, despite potential benefits, including reducing health disparities, patient errors and readmissions, and improving patient experiences. A qualitative study with chief nurse executives from one eastern United States (US) state. Data were analysed using content analysis. Seven themes emerged: (1) lack of awareness of resources for health care organisations; (2) constrained cultural competency training; (3) suboptimal resources (cost and time); (4) mutual understanding; (5) limited workplace diversity; (6) community outreach programmes; and (7) the management of unvoiced patient expectations. As the American population diversifies, providing culturally and linguistically appropriate services remains a priority for nurse leaders. Being aware of and utilizing the resources, policies and best practices available for the implementation of culturally and linguistically appropriate services can assist nursing managers in reaching their goals of providing high-quality care to diverse populations. Nurse managers are key in aligning the unit's resources with organisational goals related to the provision of culturally and linguistically appropriate services by providing the operational leadership to eliminate barriers and to enhance the uptake of best practices related to culturally and linguistically appropriate services. © 2017 John Wiley & Sons Ltd.
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments and a derivation of corresponding solutions considering a variety of real-world constraints are daunting theoretical and practical challenges. Current and future space-based platforms have to simultaneously operate as components of satellite formations and/or systems and, at the same time, have a capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital-propagator-independent methods that were previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.
NASA Astrophysics Data System (ADS)
Haddout, Soufiane
2018-01-01
The equations of motion of a bicycle are highly nonlinear, and rolling of the wheels without slipping can only be expressed by nonholonomic constraint equations. A geometrical theory of general nonholonomic constrained systems on fibered manifolds and their jet prolongations, based on so-called Chetaev-type constraint forces, was proposed and developed in the 1990s by O. Krupková (Rossi). Her approach is suitable for the study of all kinds of mechanical systems, without restriction to Lagrangian, time-independent, or regular ones, and is applicable to arbitrary constraints (holonomic, semiholonomic, linear, nonlinear or general nonholonomic). The goal of this paper is to apply Krupková's geometric theory of nonholonomic mechanical systems to study a concrete problem in nonlinear nonholonomic dynamics, i.e., the autonomous bicycle. The dynamical model is preserved in simulations in its original nonlinear form without any simplification. The results of numerical solutions of the constrained equations of motion, derived within the theory, are in good agreement with measurements, and thus they open the possibility of direct application of the theory to practical situations.
Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability
Liu, Guodong; Starke, Michael R.; Xiao, B.; ...
2017-01-13
To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, the probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation taking into account the probability distributions of forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
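A toy version of the PSI calculation, assuming, purely for illustration, a single Gaussian net forecast error rather than the paper's multi-interval MILP approximation: islanding succeeds when the realised error fits within the scheduled up/down spinning reserve.

```python
import math

def psi(reserve_up, reserve_down, sigma):
    # Probability of successful islanding under an assumed N(0, sigma)
    # net forecast error (wind + PV + load): the microgrid survives an
    # instantaneous islanding event if the error lies in
    # [-reserve_down, reserve_up].
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(reserve_up / sigma) - phi(-reserve_down / sigma)

p = psi(100.0, 100.0, 50.0)   # reserves equal to +/- 2 sigma of the error
```

The chance constraint in the scheduling problem then reads psi(...) >= PSI_target, and carrying more reserve raises the probability at the cost of higher operating cost, which is exactly the trade-off the optimization balances.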
NASA Astrophysics Data System (ADS)
Wang, Yaping
One of the primary goals of the spin physics program at STAR is to constrain the polarized gluon distribution function, Δg(x), by measuring the longitudinal double-spin asymmetry (ALL) of various final-state channels. Using a jet in the mid-rapidity region |η| < 0.9 correlated with an azimuthally back-to-back π0 in the forward rapidity region 0.8 < η < 2.0 provides a new possibility to access the Δg(x) distribution at Bjorken-x down to 0.01. Compared to inclusive jet or inclusive π0 measurements, this channel also allows one to constrain the initial parton kinematics. In these proceedings, we present the status of the analysis of the π0-jet ALL in longitudinally polarized proton+proton collisions at √s = 510 GeV with 80 pb‑1 of data taken during the 2012 RHIC run. We also compare the projected ALL uncertainties to theoretical predictions of the ALL from next-to-leading order (NLO) model calculations with different polarized parton distribution functions.
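For context, a double-spin asymmetry of this kind is conventionally extracted from spin-sorted yields as ALL = (1/(PB*PY)) * (N++ - R*N+-) / (N++ + R*N+-), with R the relative luminosity of the two spin configurations. The helper below is a generic sketch of that standard formula, not STAR's analysis code, and the yields and polarizations in the example are invented:

```python
def a_ll(n_same, n_anti, p_blue, p_yellow, rel_lumi=1.0):
    # Longitudinal double-spin asymmetry from spin-sorted yields:
    # A_LL = (1 / (P_B * P_Y)) * (N++ - R*N+-) / (N++ + R*N+-),
    # where R is the relative luminosity and P_B, P_Y are the
    # polarizations of the two beams.
    raw = (n_same - rel_lumi * n_anti) / (n_same + rel_lumi * n_anti)
    return raw / (p_blue * p_yellow)

asym = a_ll(10100, 9900, 0.55, 0.55)   # illustrative yields and polarizations
```

Because the raw asymmetry is diluted by the product of the beam polarizations, a percent-level physics asymmetry requires both large statistics and well-measured polarizations, which is why the proceedings emphasize projected uncertainties.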
Building a functional multiple intelligences theory to advance educational neuroscience.
Cerruti, Carlo
2013-01-01
A key goal of educational neuroscience is to conduct constrained experimental research that is theory-driven and yet also clearly related to educators' complex set of questions and concerns. However, the fields of education, cognitive psychology, and neuroscience use different levels of description to characterize human ability. An important advance in research in educational neuroscience would be the identification of a cognitive and neurocognitive framework at a level of description relatively intuitive to educators. I argue that the theory of multiple intelligences (MI; Gardner, 1983), a conception of the mind that motivated a past generation of teachers, may provide such an opportunity. I criticize MI for doing little to clarify for teachers a core misunderstanding, specifically that MI was only an anatomical map of the mind but not a functional theory that detailed how the mind actually processes information. In an attempt to build a "functional MI" theory, I integrate into MI basic principles of cognitive and neural functioning, namely interregional neural facilitation and inhibition. In so doing I hope to forge a path toward constrained experimental research that bears upon teachers' concerns about teaching and learning.
NASA Technical Reports Server (NTRS)
Smrekar, S. E.; Raymond, C. A.; McGill, G. E.
2004-01-01
The Martian dichotomy divides the smooth, northern lowlands from the rougher southern highlands. The northern lowlands are largely free of magnetic anomalies, while the majority of the significant magnetic anomalies are located in the southern highlands. An elevation change of 2-4 km is typical across the dichotomy, and is up to 6 km locally. We examine a part of the dichotomy that is likely to preserve the early history of the dichotomy, as it is relatively unaffected by major impacts and erosion. This study contains three parts: 1) the geologic history, which is summarized below and detailed in McGill et al.; 2) the study of the gravity and magnetic field to better constrain the subsurface structure and history of the magnetic field (this abstract); and 3) modeling of the relaxation of this area. Our overall goal is to place constraints on formation models of the dichotomy by constraining lithospheric properties. Initial results for the analysis of the geology, gravity, and magnetic field studies are synthesized in Smrekar et al.
On the utilization of engineering knowledge in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, P.
1984-01-01
Some current research work conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase, in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.
Correction method for stripe nonuniformity.
Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang
2010-04-01
Stripe nonuniformity is very typical in line infrared focal plane arrays (IR-FPA) and uncooled staring IR-FPA. In this paper, the mechanism of the stripe nonuniformity is analyzed, and the gray-scale co-occurrence matrix theory and optimization theory are studied. Through these efforts, the stripe nonuniformity correction problem is translated into the optimization problem. The goal of the optimization is to find the minimal energy of the image's line gradient. After solving the constrained nonlinear optimization equation, the parameters of the stripe nonuniformity correction are obtained and the stripe nonuniformity correction is achieved. The experiments indicate that this algorithm is effective and efficient.
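A minimal sketch of the destriping idea: if the stripes are modeled as per-column offsets, minimizing the image's horizontal (line) gradient energy has a closed-form solution, namely subtracting the cumulative mean column-to-column difference. The published method solves a constrained nonlinear optimization with gain as well as offset terms; this toy, with invented data, handles offsets only:

```python
def destripe(img):
    # Remove per-column stripe offsets by zeroing the mean horizontal
    # gradient: the offset of each column (relative to column 0) is the
    # cumulative mean difference between neighbouring columns, which is
    # the minimiser of the line-gradient energy for an offset-only model.
    rows, cols = len(img), len(img[0])
    offsets = [0.0]
    for j in range(1, cols):
        mean_diff = sum(img[i][j] - img[i][j - 1] for i in range(rows)) / rows
        offsets.append(offsets[-1] + mean_diff)
    return [[img[i][j] - offsets[j] for j in range(cols)]
            for i in range(rows)]
```

Note the caveat that motivates the more elaborate constrained formulation: a genuine horizontal scene gradient would be removed along with the stripes, so real correction schemes add scene statistics (such as the gray-scale co-occurrence matrix) as constraints.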
Discrete X-Ray Source Populations and Star Formation History in Nearby Galaxies
NASA Technical Reports Server (NTRS)
Zezas, Andreas; Hasan, Hashima (Technical Monitor)
2005-01-01
This program aims at understanding the connection between the discrete X-ray source populations observed in nearby galaxies and the history of star formation in these galaxies. The ultimate goal is to use this knowledge to constrain X-ray binary evolution channels. For this reason, although the program is primarily observational, it has a significant modeling component. During the second year of this study we focused on detailed studies of the Antennae galaxies and the Small Magellanic Cloud (SMC). We also performed the initial analysis of the five galaxies forming a starburst-age sequence.
Solid-Phase Synthesis of Diverse Peptide Tertiary Amides By Reductive Amination
Pels, Kevin; Kodadek, Thomas
2015-01-01
The synthesis of libraries of conformationally constrained peptide-like oligomers is an important goal in combinatorial chemistry. In this regard, an attractive building block is the N-alkylated peptide, also known as a peptide tertiary amide (PTA). PTAs are strongly biased conformationally due to allylic 1,3-strain interactions. We report here an improved synthesis of these species on solid supports through the use of reductive amination chemistry using amino acid-terminated, bead-displayed oligomers and diverse aldehydes. The utility of this chemistry is demonstrated by the synthesis of a library of 10,000 mixed peptoid-PTA oligomers. PMID:25695359
Constraint-Based Routing Models for the Transport of Radioactive Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Steven K
2015-01-01
The Department of Energy (DOE) has a historic programmatic interest in the safe and secure routing, tracking, and transportation risk analysis of radiological materials in the United States. In order to address these program goals, DOE has funded the development of several tools and related systems designed to provide insight to planners and other professionals handling radioactive materials shipments. These systems include the WebTRAGIS (Transportation Routing Analysis Geographic Information System) platform. WebTRAGIS is a browser-based routing application developed at Oak Ridge National Laboratory (ORNL) focused primarily on the safe transport of spent nuclear fuel from US nuclear reactors via railway, highway, or waterway. It is also used for the transport planning of low-level radiological waste to repositories such as the Waste Isolation Pilot Plant (WIPP) facility. One particular feature of WebTRAGIS is its coupling with high-resolution population data from ORNL's LandScan project. This allows users to obtain highly accurate population count and density information for use in route planning and risk analysis. To perform the routing and risk analysis, WebTRAGIS incorporates a basic routing model methodology, with the additional application of various constraints designed to mimic US Department of Transportation (DOT), DOE, and Nuclear Regulatory Commission (NRC) regulations. Aside from the routing models available in WebTRAGIS, the system relies on detailed or specialized modal networks for the route solutions. These include a highly detailed network model of the US railroad system, the inland and coastal waterways, and a specialized highway network that focuses on the US interstate system and the designated hazardous materials and Highway Route Controlled Quantity (HRCQ)-designated roadways. The route constraints in WebTRAGIS rely upon a series of attributes assigned to the various components of the different modal networks.
Routes are determined via a constrained shortest-path Dijkstra algorithm with an assigned impedance factor. The route constraints modify the various impedance weights to bias toward or away from particular network characteristics as desired by the user. Both the basic route model and the constrained impedance function calculations are determined by a series of network characteristics and shipment types. The study examines solutions under various constraints modeled by WebTRAGIS, including possible routes from select shut-down reactor sites in the US to specific locations in the US. For purposes of illustration, the designated destinations are Oak Ridge National Laboratory in Tennessee and the Savannah River Site in South Carolina. The degree to which routes express sameness or variety under constraints illustrates either (a) the determinism of particular transport modes, whether by configuration or by regulatory compliance, and/or (b) the variety of constrained routes that are regulation-compliant but may not be operationally feasible.
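The impedance-weighted Dijkstra routing described above can be sketched in miniature. This is an illustrative sketch only, not WebTRAGIS code: the toy network, the attribute name `non_hrcq`, and the multiplier values are invented, and a real implementation would run on the detailed modal networks the abstract describes.

```python
import heapq

def impedance(length, attrs, multipliers):
    """Effective edge weight: physical length scaled by attribute penalties."""
    w = float(length)
    for attr in attrs:
        w *= multipliers.get(attr, 1.0)  # unlisted attributes are neutral
    return w

def constrained_route(graph, source, target, multipliers):
    """Dijkstra shortest path on impedance-weighted edges.

    graph: {node: [(neighbor, length, attrs), ...]}
    multipliers: {attribute: penalty factor} biasing routes away from
    links carrying that attribute (e.g. non-compliant roadway segments).
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length, attrs in graph.get(u, []):
            nd = d + impedance(length, attrs, multipliers)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path from the predecessor map
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Penalizing hypothetical non-HRCQ-compliant links steers the route through B,
# even though the C branch is physically shorter.
net = {
    "A": [("B", 10, []), ("C", 4, ["non_hrcq"])],
    "B": [("D", 5, [])],
    "C": [("D", 4, ["non_hrcq"])],
    "D": [],
}
path, cost = constrained_route(net, "A", "D", {"non_hrcq": 10.0})
```

With an empty multiplier table the same call degenerates to ordinary shortest-path routing, which is how an unconstrained baseline route would be produced.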
Hayhoe, Mary M; Rothkopf, Constantin A
2011-03-01
Historically, the study of visual perception has followed a reductionist strategy, with the goal of understanding complex visually guided behavior by separate analysis of its elemental components. Recent developments in monitoring behavior, such as measurement of eye movements in unconstrained observers, have allowed investigation of the use of vision in the natural world. This has led to a variety of insights that would be difficult to achieve in more constrained experimental contexts. In general, it shifts the focus of vision research away from the properties of the stimulus toward a consideration of the behavioral goals of the observer. It appears that behavioral goals are a critical factor in controlling the acquisition of visual information from the world. This insight has been accompanied by a growing understanding of the importance of reward in modulating the underlying neural mechanisms and by theoretical developments using reinforcement learning models of complex behavior. These developments provide us with the tools to understand how tasks are represented in the brain, and how they control acquisition of information through use of gaze. WIREs Cogn Sci 2011 2 158-166 DOI: 10.1002/wcs.113 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.
Sathish, Thirunavukkarasu; Williams, Emily D; Pasricha, Naanki; Absetz, Pilvikki; Lorgelly, Paula; Wolfe, Rory; Mathews, Elezebeth; Aziz, Zahra; Thankappan, Kavumpurathu Raman; Zimmet, Paul; Fisher, Edwin; Tapp, Robyn; Hollingsworth, Bruce; Mahal, Ajay; Shaw, Jonathan; Jolley, Damien; Daivadanam, Meena; Oldenburg, Brian
2013-11-04
India currently has more than 60 million people with Type 2 Diabetes Mellitus (T2DM) and this is predicted to increase by nearly two-thirds by 2030. While management of those with T2DM is important, preventing or delaying the onset of the disease, especially in those individuals at 'high risk' of developing T2DM, is urgently needed, particularly in resource-constrained settings. This paper describes the protocol for a cluster randomised controlled trial of a peer-led lifestyle intervention program to prevent diabetes in Kerala, India. A total of 60 polling booths are randomised to the intervention arm or control arm in rural Kerala, India. Data collection is conducted in two steps. Step 1 (Home screening): Participants aged 30-60 years are administered a screening questionnaire. Those having no history of T2DM and other chronic illnesses with an Indian Diabetes Risk Score value of ≥60 are invited to attend a mobile clinic (Step 2). At the mobile clinic, participants complete questionnaires, undergo physical measurements, and provide blood samples for biochemical analysis. Participants identified with T2DM at Step 2 are excluded from further study participation. Participants in the control arm are provided with a health education booklet containing information on symptoms, complications, and risk factors of T2DM with the recommended levels for primary prevention. Participants in the intervention arm receive: (1) eleven peer-led small group sessions to motivate, guide and support in planning, initiation and maintenance of lifestyle changes; (2) two diabetes prevention education sessions led by experts to raise awareness on T2DM risk factors, prevention and management; (3) a participant handbook containing information primarily on peer support and its role in assisting with lifestyle modification; (4) a participant workbook to guide self-monitoring of lifestyle behaviours, goal setting and goal review; (5) the health education booklet that is given to the control arm. 
Follow-up assessments are conducted at 12 and 24 months. The primary outcome is incidence of T2DM. Secondary outcomes include behavioural, psychosocial, clinical, and biochemical measures. An economic evaluation is planned. Results from this trial will contribute to improved policy and practice regarding lifestyle intervention programs to prevent diabetes in India and other resource-constrained settings. Australia and New Zealand Clinical Trials Registry: ACTRN12611000262909.
On the age and parent body of the daytime Arietids meteor shower
NASA Astrophysics Data System (ADS)
Abedin, A.; Wiegert, P.; Pokorny, P.; Brown, P.
2016-01-01
The daytime Arietid meteor shower is active from mid-May to late June and is among the strongest of the annual meteor showers, comparable in activity and duration to the Perseids and the Geminids. Due to the daytime nature of the shower, the Arietids have mostly been constrained by radar studies. The Arietids exhibit a long-debated discrepancy in the semi-major axis and the eccentricity of meteoroid orbits as measured by radar and optical surveys: radar studies yield systematically lower values for the semi-major axis and eccentricity, and the origin of these discrepancies remains unclear. The proposed parent bodies of the stream include comet 96P/Machholz and, more recently, the Marsden group of sunskirting comets. In this work, we present detailed numerical modelling of the daytime Arietid meteoroid stream, with the goal of identifying the parent body and constraining the age of the stream. We use observational data from an extensive survey of the Arietids by the Canadian Meteor Orbit Radar (CMOR) over the period 2002-2013, and several optical observations by the SonotaCo meteor network and the Cameras for All-sky Meteor Surveillance (CAMS). Our simulations suggest that the age and observed characteristics of the daytime Arietids are consistent with cometary activity from 96P over the past 12000 years. The sunskirting comets, which presumably formed in a major comet breakup between 100 and 950 AD (Chodas and Sekanina, 2005), cannot alone explain the observed shower characteristics of the Arietids. Thus, the Marsden sunskirters cannot be the dominant parent, though our simulations suggest that they contribute to the core of the stream.
NASA Astrophysics Data System (ADS)
Monna, A.; Seitz, S.; Zitrin, A.; Geller, M. J.; Grillo, C.; Mercurio, A.; Greisel, N.; Halkola, A.; Suyu, S. H.; Postman, M.; Rosati, P.; Balestra, I.; Biviano, A.; Coe, D.; Fabricant, D. G.; Hwang, H. S.; Koekemoer, A.
2015-02-01
We use velocity dispersion measurements of 21 individual cluster members in the core of Abell 383, obtained with the Multiple Mirror Telescope's Hectospec, to separate the galaxy and the smooth dark halo (DH) lensing contributions. While lensing usually constrains the overall, projected mass density, the innovative use of velocity dispersion measurements as a proxy for masses of individual cluster members breaks inherent degeneracies and allows us to (a) refine the constraints on single galaxy masses and on the galaxy mass-to-light scaling relation and, as a result, (b) refine the constraints on the DM-only map, a high-end goal of lens modelling. The knowledge of cluster member velocity dispersions improves the fit by 17 per cent in terms of the image reproduction χ2, or 20 per cent in terms of the rms. The constraints on the mass parameters improve by ˜10 per cent for the DH, while for the galaxy component, they are refined correspondingly by ˜50 per cent, including the galaxy halo truncation radius. For an L* galaxy with M_B^{*} = -20.96, for example, we obtain best-fitting truncation radius r_tr^{*} = 20.5^{+9.6}_{-6.7} kpc and velocity dispersion σ* = 324 ± 17 km s^{-1}. Moreover, by performing the surface brightness reconstruction of the southern giant arc, we improve the constraints on r_tr of two nearby cluster members, which have measured velocity dispersions, by more than ˜30 per cent. We estimate the stripped mass for these two galaxies, getting results that are consistent with numerical simulations. In the future, we plan to apply this analysis to other galaxy clusters for which velocity dispersions of member galaxies are available.
Depths and Ages of Deep-Sea Corals From the Medusa Expedition
NASA Astrophysics Data System (ADS)
Fernandez, D.; Adkins, J. F.; Robinson, L. F.; Scheirer, D.; Shank, T.
2003-12-01
From May to June 2003 we used the DSV Alvin and the R/V Atlantis to collect modern and fossil deep-sea corals from the New England and Muir Seamounts. Our goal was to collect depth transects of corals from a variety of ages to measure paleochemical profiles in the North Atlantic. Because deep-sea corals can be dated with both U-series and radiocarbon methods, we are especially interested in measuring past Δ14C profiles to constrain the paleo overturning rate of the deep ocean. We collected over 3,300 fossil Desmophyllum cristagalli individuals, tens of kilograms of Solenosmillia sp., and numerous Enallopsamia rostrata and Caryophyllia sp. These samples spanned a depth range from 1,150 to 2,500 meters and refute the notion that deep-sea corals are too sparsely distributed to be useful for paleoclimate reconstructions. Despite widespread evidence for mass wasting on the seamounts, fossil corals were almost always found in growth position. This observation alleviates some of the concern associated with dredge samples, where down-slope transport of samples cannot be characterized. Fossil scleractinians were often found to have recruited onto other carbonate skeletons, including large branching gorgonians. The U-series age distribution of these recruitment patterns will constrain how much paleoclimatic time a particular "patch" can represent. In addition, U-series ages, combined with the observed differences in species distribution, will begin to inform our understanding of deep-sea coral biogeography. A lack of modern D. cristagalli on Muir Seamount, but an abundance of fossil samples at this site, is the most striking example of changes in oceanic conditions playing a role in where deep-sea corals grow.
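The paired-dating idea behind the Δ14C reconstruction can be made concrete: a U-series calendar age fixes how much true radiocarbon decay has occurred, so the residual offset in the measured 14C age reflects the Δ14C of the water mass the coral grew in. A minimal sketch under the standard convention (Libby mean life 8033 yr for reported 14C ages, true 14C mean life approximately 8267 yr); the sample ages below are invented, not Medusa Expedition data:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # yr; 14C ages are reported with this by convention
TRUE_MEAN_LIFE = 8267.0   # yr; based on the true 14C half-life (~5730 yr)

def delta_14c(radiocarbon_age, calendar_age):
    """Past Delta-14C (per mil) of the water a sample grew in, from its
    measured radiocarbon age and an independent (e.g. U-Th) calendar age."""
    fraction_modern = math.exp(-radiocarbon_age / LIBBY_MEAN_LIFE)
    decay_correction = math.exp(calendar_age / TRUE_MEAN_LIFE)
    return (fraction_modern * decay_correction - 1.0) * 1000.0

# Hypothetical deep-sea coral: 14C age 10,000 yr, U-Th calendar age 11,000 yr.
d14c = delta_14c(10_000, 11_000)
```

A profile of such values over a coral depth transect is what constrains the past overturning rate, since older, longer-isolated deep water carries lower Δ14C.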
NASA Astrophysics Data System (ADS)
Huesca Martinez, M.; Garcia, M.; Roth, K. L.; Casas, A.; Ustin, S.
2015-12-01
There is a well-established need within the remote sensing community for improved estimation of canopy structure and understanding of its influence on the retrieval of leaf biochemical properties. The aim of this project was to evaluate the estimation of structural properties directly from hyperspectral data, with the broader goal that these might be used to constrain retrievals of canopy chemistry. We used NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to discriminate different canopy structural types, defined in terms of biomass, canopy height and vegetation complexity, and compared them to estimates of these properties measured by LiDAR data. We tested a large number of optical metrics, including single narrow band reflectance and 1st derivative, sub-pixel cover fractions, narrow-band indices, spectral absorption features, and Principal Component Analysis components. Canopy structural types were identified and classified from different forest types by integrating structural traits measured by optical metrics using the Random Forest (RF) classifier. The classification accuracy was above 70% in most of the vegetation scenarios. The best overall accuracy was achieved for hardwood forest (>80% accuracy) and the lowest accuracy was found in mixed forest (~70% accuracy). Furthermore, similarly high accuracy was found when the RF classifier was applied to a spatially independent dataset, showing significant portability for the method used. Results show that all spectral regions played a role in canopy structure assessment, thus the whole spectrum is required. Furthermore, optical metrics derived from AVIRIS proved to be a powerful technique for structural attribute mapping. This research illustrates the potential for using optical properties to distinguish several canopy structural types in different forest types, and these may be used to constrain quantitative measurements of absorbing properties in future research.
Research Report to the National Aeronautics and Space Administration Cosmochemistry Program
NASA Technical Reports Server (NTRS)
Alexander, Conel O'D.
2004-01-01
The discovery of presolar grains in meteorites is one of the most exciting recent developments in meteoritics. Six types of presolar grains have been discovered: diamond, SiC, graphite, Si3N4, Al2O3 and MgAl2O4 (NITTLER, 2003). These grains have been identified as presolar because their isotopic compositions are very different from those of Solar System materials. Comparison of their isotopic compositions with astronomical observations and theoretical models indicates that most of the grains formed in the envelopes of highly evolved stars. They are, therefore, a new source of information with which to test astrophysical models of the evolution of these stars. In fact, because several elements can often be measured in the same grain, including elements that are not measurable spectroscopically in stars, the grain data provide some very stringent constraints for these models. Our primary goal is to create large, unbiased, multi-isotope databases of single presolar SiC, Si3N4, oxide and graphite grains in meteorites, as well as any new presolar grain types that are identified in the future. These will be used to: (i) test stellar and nucleosynthetic models, (ii) constrain the galactic chemical evolution (GCE) paths of the isotopes of Si, Ti, O and Mg, (iii) establish how many stellar sources contributed to the Solar System, (iv) constrain relative dust production rates of various stellar types and (v) assess how representative of galactic dust production the record in meteorites is. The primary tool for this project is a highly automated grain analysis system on the Carnegie 6f ion probe. This proposal was part of a long-standing research effort that is still ongoing.
Fan, Yaxin; Zhu, Xinyan; Guo, Wei; Guo, Tao
2018-01-01
The analysis of traffic collisions is essential for urban safety and the sustainable development of the urban environment. Reducing road traffic injuries and the financial losses caused by collisions is the most important goal of traffic management. In addition, traffic collisions are a major cause of traffic congestion, which is a serious issue that affects everyone in society. Therefore, traffic collision analysis is essential for all parties, including drivers, pedestrians, and traffic officers, to understand road risks at a finer spatio-temporal scale. However, traffic collisions in the urban context are dynamic and complex. Thus, it is important to detect how collision hotspots evolve over time through spatio-temporal clustering analysis. In addition, traffic collisions are not isolated events in space: the characteristics of the collisions and their surrounding locations also influence the clusters. This work explores the spatio-temporal clustering patterns of traffic collisions by combining a set of network-constrained methods. These methods were tested using the traffic collision data in Jianghan District of Wuhan, China. The results demonstrated that these methods offer different perspectives of the spatio-temporal clustering patterns. The weighted network kernel density estimation provides an intuitive way to incorporate attribute information. The network cross K-function shows that there are varying clustering tendencies between traffic collisions and different types of POIs. The proposed network differential Local Moran’s I and network local indicators of mobility association provide straightforward and quantitative measures of the hotspot changes. This case study shows that these methods could help researchers, practitioners, and policy-makers to better understand the spatio-temporal clustering patterns of traffic collisions. PMID:29672551
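The abstract does not spell out its "network differential Local Moran's I", but the classical local Moran statistic it builds on is compact: I_i = z_i * Σ_j w_ij z_j, where z are standardized values and w_ij are spatial weights. A minimal sketch with invented sites and counts; here the network-constrained weights (neighborhoods defined by path distance along streets rather than planar distance) are assumed to have been computed elsewhere, and the "differential" variant is approximated by feeding in period-to-period changes in collision counts:

```python
from statistics import mean, pstdev

def local_morans_i(values, weights):
    """Local Moran's I for each site i: I_i = z_i * sum_j w_ij * z_j.

    values: {site: x_i}
    weights: {site: {neighbor: w_ij}}, e.g. network-distance-based weights.
    Positive I_i: the site sits in a cluster of similar values (hot or cold
    spot); negative I_i: the site is an outlier among dissimilar neighbors.
    """
    xs = list(values.values())
    mu, sigma = mean(xs), pstdev(xs)
    z = {i: (x - mu) / sigma for i, x in values.items()}
    return {
        i: z[i] * sum(w * z[j] for j, w in weights.get(i, {}).items())
        for i in values
    }

# Differential variant: values are changes in collision counts between periods.
change = {"s1": 8, "s2": 6, "s3": -1, "s4": -5}  # e.g. year-over-year deltas
w = {"s1": {"s2": 1}, "s2": {"s1": 1}, "s3": {"s4": 1}, "s4": {"s3": 1}}
I = local_morans_i(change, w)
# s1/s2 cluster as joint increases, s3/s4 as joint decreases; both give I > 0.
```

Significance of each I_i would normally be assessed by conditional permutation, which is omitted here.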
Paleozoic tectonics of the Ouachita Orogen through Nd isotopes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleason, J.D.; Patchett, P.J.; Dickinson, W.R.
1992-01-01
A combined isotopic and trace-element study of the Late Paleozoic Ouachita Orogenic belt has the following goals: (1) define changing provenance of Ouachita sedimentary systems throughout the Paleozoic; (2) constrain sources feeding into the Ouachita flysch trough during the Late Paleozoic; (3) isolate the geochemical signature of proposed colliding terranes to the south; (4) build a database to compare with possible Ouachita System equivalents in Mexico. The ultimate aim is to constrain the tectonic setting of the southern margin of North America during the Paleozoic, with particular emphasis on collisional events leading to the final suturing of Pangea. Nd isotopic data identify three distinct groups: (1) Ordovician passive margin sequence; (2) Carboniferous proto-flysch (Stanley Fm.), main flysch (Jackfork and Atoka Fms.) and molasse (foreland Atoka Fm.); (3) Mississippian ash-flow tuffs. The authors interpret the Ordovician signature to be essentially all craton-derived, whereas the Carboniferous signature reflects mixed sources from the craton plus orogenic sources to the east and possibly the south, including the evolving Appalachian Orogen. The proposed southern source is revealed by the tuffs to be too old and evolved to be a juvenile island arc terrane. They interpret the tuffs to have been erupted in a continental margin arc-type setting. Surprisingly, the foreland molasse sequence is indistinguishable from the main trough flysch sequence, suggesting the Ouachita trough and the craton were both inundated with sediment of a single homogenized isotopic signature during the Late Carboniferous. The possibility that Carboniferous-type sedimentary dispersal patterns began as early as the Silurian has important implications for the tectonics and paleogeography of the evolving Appalachian-Ouachita Orogenic System.
Life detection strategy for Jovian's icy moons: Lessons from subglacial Lake Vostok exploration
NASA Astrophysics Data System (ADS)
Bulat, Sergey; Alekhina, Irina; Marie, Dominique; Petit, Jean-Robert
2010-05-01
The objective was to estimate the microbial content of accretion ice originating from subglacial Lake Vostok, buried beneath the 4-km-thick East Antarctic ice sheet, with the ultimate goal of discovering microbial life in this extreme icy environment. DNA study, constrained by Ancient DNA research criteria, was used as the main approach, with flow cytometry implemented for cell enumeration. Both approaches showed that the accretion ice contains a very low, unevenly distributed biomass, indicating that the water body should likewise host highly sparse life. To date, only the accretion ice featuring mica-clay sediments has allowed the recovery of a pair of bacterial phylotypes. These unexpectedly included the chemolithoautotrophic thermophile Hydrogenophilus thermoluteolus and one more unclassified phylotype, both passing numerous contaminant controls. In contrast, the deeper and cleaner accretion ice, with no sediments and near-detection-limit gas content, gave no reliable signals. Thus, the results indicate that the search for life in Lake Vostok is constrained by a high chance of forward-contamination. Subglacial Lake Vostok appears to be the only extremely clean giant aquatic system on Earth, providing a unique test area for searching for life on icy worlds. The life detection strategy for (sub)glacial environments elsewhere (e.g., Jupiter's moon Europa) should be based on stringent decontamination procedures in clean-room facilities, establishment of an on-site contaminant library, implementation of appropriate methods to reach as low a detection level as possible, verification of findings against the ecological settings of the given environment, and repetition at an independent laboratory within a specialized laboratory network.
NASA Astrophysics Data System (ADS)
Reger, Darren; Madanat, Samer; Horvath, Arpad
2015-11-01
Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets.
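The Lagrangian dual methodology the letter applies can be sketched for a toy network. Dualizing the budget constraint with a multiplier λ, interpretable as the "GHG savings per additional dollar" price the letter quantifies, lets each segment choose its rehabilitation action independently; bisecting on λ drives total spending to the budget. This is a schematic sketch, not the authors' formulation: the segment data, action sets, and bisection bounds below are invented.

```python
def best_actions(segments, lam):
    """For price lam, each segment independently minimizes emissions + lam*cost.

    segments: list of action lists, one per road segment; each action is a
    (cost, emissions) pair, e.g. thin vs. thick overlay alternatives.
    """
    choice = [min(acts, key=lambda a: a[1] + lam * a[0]) for acts in segments]
    total_cost = sum(c for c, _ in choice)
    total_emissions = sum(e for _, e in choice)
    return choice, total_cost, total_emissions

def lagrangian_schedule(segments, budget, lo=0.0, hi=100.0, iters=60):
    """Bisect on lam so that the decentralized choices respect the budget.

    Low lam under-prices money (choices may exceed the budget); high lam
    over-prices it (choices are cheap but emissions-heavy). The returned
    schedule corresponds to the smallest feasible price found.
    """
    for _ in range(iters):
        lam = (lo + hi) / 2
        _, cost, _ = best_actions(segments, lam)
        if cost > budget:
            lo = lam  # over budget: dollars must be priced higher
        else:
            hi = lam
    return best_actions(segments, hi)

# Two segments, each choosing a cheap high-emission action or a costly
# low-emission one (all numbers hypothetical).
segs = [[(1.0, 5.0), (3.0, 2.0)], [(1.0, 4.0), (4.0, 1.0)]]
plan, cost, emissions = lagrangian_schedule(segs, budget=5.0)
```

The converged multiplier itself is the marginal emissions saving per extra budget dollar, which is the quantity the letter proposes agencies report to their legislatures or use in a cap-and-trade setting.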
NASA Astrophysics Data System (ADS)
Chandran, A.; Schulz, Marc D.; Burnell, F. J.
2016-12-01
Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics have primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.
NASA Astrophysics Data System (ADS)
Peralta, Richard C.; Forghani, Ali; Fayad, Hala
2014-04-01
Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to the Pareto front. E-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The addressed objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
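The core post-processing step of any MGA run, keeping only non-dominated solutions, can be sketched directly. The objective vectors below are invented stand-ins for the paper's three objectives; since water delivered and hydropower are maximized, they are negated to fit a uniform minimization convention.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one. All objectives are minimized (negate any
    maximization objective before calling)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only non-dominated objective vectors, as an MGA's final
    archive of Pareto-optimal operating plans would."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical candidates: (-water delivered, -hydropower, operating cost).
cands = [(-100, -50, 30), (-90, -60, 25), (-100, -50, 40), (-80, -40, 50)]
front = pareto_front(cands)
```

The third candidate is removed because it delivers the same water and power as the first at higher cost, and the fourth is worse on all three objectives; the two survivors represent the cost/yield trade-off a decision-maker would choose between.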
Constraining Exoplanet Habitability with HabEx
NASA Astrophysics Data System (ADS)
Robinson, Tyler
2018-01-01
The Habitable Exoplanet Imaging mission, or HabEx, is one of four flagship mission concepts currently under study for the upcoming 2020 Decadal Survey of Astronomy and Astrophysics. The broad goal of HabEx will be to image and study small, rocky planets in the Habitable Zones of nearby stars. Additionally, HabEx will pursue a range of other astrophysical investigations, including the characterization of non-habitable exoplanets and detailed observations of stars and galaxies. Critical to the capability of HabEx to understand Habitable Zone exoplanets will be its ability to search for signs of surface liquid water (i.e., habitability) and an active biosphere. Photometry and moderate resolution spectroscopy, spanning the ultraviolet through near-infrared spectral ranges, will enable constraints on key habitability-related atmospheric species and properties (e.g., surface pressure). In this poster, we will discuss approaches to detecting signs of habitability in reflected-light observations of rocky exoplanets. We will also present initial results for modeling experiments aimed at demonstrating the capabilities of HabEx to study and understand Earth-like worlds around other stars.
NASA Astrophysics Data System (ADS)
Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian
2018-05-01
Offshore wind farms are an emerging source of renewable energy that has shown tremendous potential in recent years. In this rapidly growing area, a key challenge is that the preventive maintenance of offshore turbines must be scheduled reasonably to satisfy the power supply without failure. Two significant goals should therefore be considered simultaneously as a trade-off: one is to maximise system reliability and the other is to minimise maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives with thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. To solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal maintenance schedules are obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.
Feasibility study of a 110 watt per kilogram lightweight solar array system
NASA Technical Reports Server (NTRS)
Shepard, N. F.; Stahle, C. V.; Hanson, K. L.; Schneider, A.; Blomstrom, L. E.; Hansen, W. T.; Kirpich, A.
1973-01-01
The feasibility of a 10,000 watt solar array panel which has a minimum power-to-mass ratio of 110 watt/kg is discussed. The application of this ultralightweight solar array to three possible missions was investigated. With the interplanetary mission as a baseline, the constraining requirements for a geosynchronous mission and for a manned space station mission are presented. A review of existing lightweight solar array system concepts revealed that changes in the system approach are necessary to achieve the specified 110 watt/kg goal. A comprehensive review of existing component technology is presented in the areas of thin solar cells, solar cell covers, welded interconnectors, substrates and deployable booms. Advances in the state-of-the-art of solar cell and deployable boom technology were investigated. System level trade studies required to select the optimum boom bending stiffness, system aspect ratio, bus voltage level, and solar cell circuit arrangement are reported. Design analysis tasks included the thermal analysis of the solar cell blanket, thermal stress analysis of the solar cell interconnectors/substrate, and the thermostructural loading of the deployed boom.
CXBN: a blueprint for an improved measurement of the cosmological x-ray background
NASA Astrophysics Data System (ADS)
Simms, Lance M.; Jernigan, J. G.; Malphrus, Benjamin K.; McNeil, Roger; Brown, Kevin Z.; Rose, Tyler G.; Lim, Hyoung S.; Anderson, Steven; Kruth, Jeffrey A.; Doty, John P.; Wampler-Doty, Matthew; Cominsky, Lynn R.; Prasad, Kamal S.; Thomas, Eric T.; Combs, Michael S.; Kroll, Robert T.; Cahall, Benjamin J.; Turba, Tyler T.; Molton, Brandon L.; Powell, Margaret M.; Fitzpatrick, Jonathan F.; Graves, Daniel C.; Gaalema, Stephen D.; Sun, Shunming
2012-10-01
A precise measurement of the Cosmic X-ray Background (CXB) is crucial for constraining models of the evolution and composition of the universe. While several large, expensive satellites have measured the CXB as a secondary mission, there is still disagreement about the normalization of its spectrum. The Cosmic X-ray Background NanoSat (CXBN) is a small, low-cost satellite whose primary goal is to measure the CXB over its two-year lifetime. Benefiting from a low instrument-induced background due to its small mass and size, CXBN will use a novel, pixelated Cadmium Zinc Telluride (CZT) detector with energy resolution < 1 keV over the range 1-60 keV to measure the CXB with unprecedented accuracy. This paper describes CXBN and its science payload, including the GEANT4 model that has been used to predict overall performance and the backgrounds from secondary particles in Low Earth Orbit. It also addresses the strategy for scanning the sky and calibrating the data, and presents the expected results over the two-year mission lifetime.
Validating Laser-Induced Birefringence Theory with Plasma Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Cecilia; Cornell Univ., Ithaca, NY
2015-09-02
Intense laser beams crossing paths in a plasma are theorized to induce birefringence in the medium, resulting from density and refractive index modulations that affect the polarization of incoming light. The goal of the associated experiment, conducted on Janus at Lawrence Livermore's Jupiter Laser Facility, was to create a tunable laser-plasma waveplate to verify the relationship between dephasing angle and beam intensity, plasma density, plasma temperature, and interaction length. Interferometry analysis of the plasma channel was performed to obtain a density map and to constrain temperature measured from Thomson scattering. Various analysis techniques, including Fast Fourier transform (FFT) and two variations of fringe-counting, were tried because interferograms captured in this experiment contained unusual features such as fringe discontinuity at channel edges, saddle points, and islands. The chosen method is flexible, semi-automated, and uses a fringe tracking algorithm on a reduced image of pre-traced synthetic fringes. Ultimately, a maximum dephasing angle of 49.6° was achieved using a 1200 μm interaction length, and the experimental results appear to agree with predictions.
Experimentation in machine discovery
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Simon, Herbert A.
1990-01-01
KEKADA, a system that is capable of carrying out a complex series of experiments on problems from the history of science, is described. The system incorporates a set of experimentation strategies that were extracted from the traces of the scientists' behavior. It focuses on surprises to constrain its search, and uses its strategies to generate hypotheses and to carry out experiments. Some strategies are domain independent, whereas others incorporate knowledge of a specific domain. The domain independent strategies include magnification, determining scope, divide and conquer, factor analysis, and relating different anomalous phenomena. KEKADA represents an experiment as a set of independent and dependent entities, with apparatus variables and a goal. It represents a theory either as a sequence of processes or as abstract hypotheses. KEKADA's response to a particular problem in biochemistry is described. On this and other problems, the system is capable of carrying out a complex series of experiments to refine domain theories. Analysis of the system and its behavior on a number of different problems has established its generality, but it has also revealed the reasons why the system would not be a good experimental scientist.
Starr, Ariel; Vendetti, Michael S; Bunge, Silvia A
2018-05-01
Analogical reasoning is considered a key driver of cognitive development and is a strong predictor of academic achievement. However, it is difficult for young children, who are prone to focusing on perceptual and semantic similarities among items rather than relational commonalities. For example, in a classic A:B::C:? propositional analogy task, children must inhibit attention towards items that are visually or semantically similar to C, and instead focus on finding a relational match to the A:B pair. Competing theories of reasoning development attribute improvements in children's performance to gains in either executive functioning or semantic knowledge. Here, we sought to identify key drivers of the development of analogical reasoning ability by using eye gaze patterns to infer problem-solving strategies used by six-year-old children and adults. Children had a greater tendency than adults to focus on the immediate task goal and constrain their search based on the C item. However, there were large individual differences among children, and more successful reasoners were able to maintain the broader goal in mind and constrain their search by initially focusing on the A:B pair before turning to C and the response choices. When children adopted this strategy, their attention was drawn more readily to the correct response option. Individual differences in children's reasoning ability were also related to rule-guided behavior but not to semantic knowledge. These findings suggest that both developmental improvements and individual differences in performance are driven by the use of more efficient reasoning strategies regarding which information is prioritized from the start, rather than the ability to disengage from attractive lure items. Copyright © 2018 Elsevier B.V. All rights reserved.
Counts of galaxy clusters as cosmological probes: the impact of baryonic physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaguera-Antolínez, Andrés; Porciani, Cristiano
2013-04-01
The halo mass function from N-body simulations of collisionless matter is generally used to retrieve cosmological parameters from observed counts of galaxy clusters. This neglects the observational fact that the baryonic mass fraction in clusters is a random variable that, on average, increases with the total mass (within an overdensity of 500). Considering a mock catalog that includes tens of thousands of galaxy clusters, as expected from the forthcoming generation of surveys, we show that the effect of a varying baryonic mass fraction will be observable with high statistical significance. The net effect is a change in the overall normalization of the cluster mass function and a milder modification of its shape. Our results indicate the necessity of taking into account baryonic corrections to the mass function if one wants to obtain unbiased estimates of the cosmological parameters from data of this quality. We introduce the formalism necessary to accomplish this goal. Our discussion is based on the conditional probability of finding a given value of the baryonic mass fraction for clusters of fixed total mass. Finally, we show that combining information from the cluster counts with measurements of the baryonic mass fraction in a small subsample of clusters (including only a few tens of objects) will nearly optimally constrain the cosmological parameters.
NASA Astrophysics Data System (ADS)
Isham, Brett; Bergman, Jan; Krause, Linda; Rincon-Charris, Amilcar; Bruhn, Fredrik; Funk, Peter; Stramkals, Arturs
2016-07-01
CubeSat missions are intentionally constrained by the limitations of their small platform. Mission payloads designed for low volume, mass, and power, may however be disproportionally limited by available telemetry allocations. In many cases, it is the data delivered to the ground which determines the value of the mission. However, transmitting more data does not necessarily guarantee high value, since the value also depends on data quality. By exploiting fast on-board computing and efficient artificial intelligence (AI) algorithms for analysis and data selection, the usage of the telemetry link can be optimized and value added to the mission. This concept is being implemented on the Puerto Rico CubeSat, which will make measurements of ambient ionospheric radio waves and ion irregularities and turbulence. Principal project goals include providing aerospace and systems engineering experiences to students. Science objectives include the study of natural space plasma processes to aid in better understanding of space weather and the Sun to Earth connection, and in-situ diagnostics of ionospheric modification experiments using high-power ground-based radio transmitters. We hope that this project might point the way to the productive use of AI in space and other remote, low-data-bandwidth environments.
Workshop on Parent-Body and Nebular Modification of Chondritic Materials
NASA Technical Reports Server (NTRS)
Krot, A. N. (Editor); Zolensky, M. E. (Editor); Scott, E. R. D. (Editor)
1997-01-01
The purpose of the workshop was to advance our understanding of solar nebula and asteroidal processes from studies of modification features in chondrites and interplanetary dust particles. As reflected in the program contained in this volume, the workshop included five regular sessions, a summary session, and a poster session. Twenty-three posters and 42 invited and contributed talks were presented. Part 1 of this report contains the abstracts of these presentations. The focus of the workshop included: (1) mineralogical, petrologic, chemical, and isotopic observations of the alteration mineralogy in interplanetary dust particles, ordinary and carbonaceous chondrites, and their components (Ca-Al-rich inclusions, chondrules, and matrix) to constrain the conditions and place of alteration; (2) sources of water in chondrites; (3) the relationship between aqueous alteration and thermal metamorphism; (4) short-lived radionuclides, Al-26, Mn-53, and I-129, as isotopic constraints on the timing of alteration; (5) experimental and theoretical modeling of alteration reactions; and (6) the oxidation state of the solar nebula. There were approximately 140 participants at the workshop, probably due in part to the timeliness of the workshop goals and the workshop location. In the end few new agreements were achieved between warring factions, but new research efforts were forged and areas of fruitful future exploration were highlighted. Judged by these results, the workshop was successful.
Bouts of Steps: The Organization of Infant Exploration
Cole, Whitney G.; Robinson, Scott R.; Adolph, Karen E.
2016-01-01
Adults primarily walk to reach a new location, but why do infants walk? Do infants, like adults, walk to travel to a distant goal? We observed 30 13-month-old and 30 19-month-old infants during natural walking in a laboratory playroom. We characterized the bout structure of walking—when infants start and stop walking—to examine why infants start and stop walking. Locomotor activity was composed largely of brief spurts of walking. Half of 13-month-olds’ bouts and 41% of 19-month-olds’ bouts consisted of three or fewer steps—too few to carry infants to a distant goal. Most bouts ended in the middle of the floor, not at a recognizable goal. Survival analyses of the distribution of steps per bout indicated that the probability of continuing to walk was independent of the length of the ongoing bout; infants were just as likely to stop walking after 5 steps as after 50 steps and they showed no bias toward bouts long enough to carry them across the room to a goal. However, 13-month-olds showed an increased probability of stopping after 1-3 steps, and they did not initiate walking more frequently to compensate for their surfeit of short bouts. We propose that infants’ natural walking is not intentionally directed at distant goals; rather, it is a stochastic process that serves exploratory functions. Relations between the bout structure of walking and other measures of walking suggest that locomotor exploration is constrained by walking skill in younger infants, but not in older infants. PMID:26497472
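The survival-analysis result, that the probability of stopping is independent of the length of the ongoing bout, is the signature of a constant per-step stopping process with geometrically distributed bout lengths. A minimal simulation sketch; the stopping probability here is illustrative, not a value estimated from the study's data:

```python
import random

random.seed(1)

# If walkers stop after each step with a constant probability p_stop,
# bout lengths are geometric and the hazard of stopping does not
# depend on how long the bout has lasted (p_stop is illustrative).
def simulate_bouts(p_stop=0.3, n=100000):
    bouts = []
    for _ in range(n):
        steps = 1
        while random.random() > p_stop:
            steps += 1
        bouts.append(steps)
    return bouts

def hazard(bouts, k):
    """Empirical P(stop at step k | bout reached step k)."""
    reached = sum(1 for b in bouts if b >= k)
    stopped = sum(1 for b in bouts if b == k)
    return stopped / reached

bouts = simulate_bouts()
print(round(hazard(bouts, 1), 2), round(hazard(bouts, 5), 2))
```

Under this model the empirical hazard at step 1 and at step 5 comes out essentially identical, mirroring the memoryless pattern reported for infants' bouts.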
Constraining new physics models with isotope shift spectroscopy
NASA Astrophysics Data System (ADS)
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance constraints on the optimal solution and total cost.
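As a sketch of what such a chance constraint means operationally, the fragment below uses Monte Carlo sampling to check whether a candidate route's stochastic travel time meets a pre-specified on-time probability. The exponential leg-time distributions, leg means, deadline, and service level are assumptions for illustration, not the paper's model:

```python
import random

random.seed(0)

def on_time_probability(leg_means, deadline, n_samples=20000):
    """Estimate P(total travel time <= deadline) when each leg's time
    is exponentially distributed around its mean (assumed distribution,
    chosen only to make the sketch concrete)."""
    hits = 0
    for _ in range(n_samples):
        total = sum(random.expovariate(1.0 / m) for m in leg_means)
        if total <= deadline:
            hits += 1
    return hits / n_samples

# A route is feasible under the chance constraint if the estimated
# on-time probability meets the pre-specified service level alpha.
alpha = 0.90
p = on_time_probability([10.0, 4.0, 6.0], deadline=40.0)
print(round(p, 2), p >= alpha)
```

In a heuristic such as the paper's hybrid algorithm, a candidate routing would be retained only if estimates like this satisfy both chance constraints.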
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges of spatially non-adjacent regions (i.e., spectrally based merging or clustering), constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
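The hierarchical stepwise optimization idea, always performing the currently best merge of adjacent regions, can be sketched on a 1-D signal. This toy is only an illustration of the region-growing step, not the HSEG/RHSEG implementation, which works on 2-D imagery and adds the constrained spectral clustering described above:

```python
# HSWO-style region growing on a 1-D "image": at each iteration,
# merge the adjacent pair of regions whose mean intensities are
# closest, until the requested number of regions remains.
def hswo_1d(pixels, n_regions):
    regions = [[p] for p in pixels]           # start: one region per pixel
    while len(regions) > n_regions:
        means = [sum(r) / len(r) for r in regions]
        # find the best adjacent merge (smallest mean difference)
        i = min(range(len(regions) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

segs = hswo_1d([1, 2, 2, 9, 9, 8, 3, 2], n_regions=3)
print(segs)  # [[1, 2, 2], [9, 9, 8], [3, 2]]
```

Recording the region list after each merge would give the hierarchical set of segmentations that HSEG reports at its detected convergence points.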
NASA Astrophysics Data System (ADS)
Jarkeh, Mohammad Reza; Mianabadi, Ameneh; Mianabadi, Hojjat
2016-10-01
Mismanagement and uneven distribution of water may lead to or intensify conflict among countries. Allocation of water among trans-boundary river neighbours is a key issue in the utilization of shared water resources. Bankruptcy theory is a cooperative game theory method used when the total demand of the riparian states exceeds the available water. In this study, we survey the application of seven Classical Bankruptcy Rules (CBRs), including Proportional (CBR-PRO), Adjusted Proportional (CBR-AP), Constrained Equal Awards (CBR-CEA), Constrained Equal Losses (CBR-CEL), Piniles (CBR-Piniles), Minimal Overlap (CBR-MO), and Talmud (CBR-Talmud), and four Sequential Sharing Rules (SSRs), including Proportional (SSR-PRO), Constrained Equal Awards (SSR-CEA), Constrained Equal Losses (SSR-CEL), and Talmud (SSR-Talmud), in allocating the Euphrates River among three riparian countries: Turkey, Syria and Iraq. However, there is no established method for identifying the most equitable allocation rule. Therefore, in this paper, a new method is proposed for choosing the allocation rule most likely to satisfy the stakeholders. The results reveal that, based on the newly proposed model, CBR-AP appears to be the most equitable rule for allocating the Euphrates River water among Turkey, Syria and Iraq.
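Three of the classical rules named above have compact definitions, sketched here: proportional division, Constrained Equal Awards (CEA), and Constrained Equal Losses (CEL), with the common award or loss level found by bisection. The demands and supply are hypothetical round numbers, not the actual Euphrates figures:

```python
def proportional(claims, estate):
    """CBR-PRO: award in proportion to each claim."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def cea(claims, estate, tol=1e-9):
    """CBR-CEA: everyone receives min(claim, lam), with lam chosen
    so that the awards exactly exhaust the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lo) for c in claims]

def cel(claims, estate, tol=1e-9):
    """CBR-CEL: everyone bears an equal loss lam (capped at the claim),
    with lam chosen so that the awards exhaust the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(max(c - lam, 0.0) for c in claims) > estate:
            lo = lam
        else:
            hi = lam
    return [max(c - lo, 0.0) for c in claims]

# Hypothetical demands (not the actual riparian figures) and supply:
claims, water = [100.0, 60.0, 40.0], 120.0
print([round(x, 2) for x in proportional(claims, water)])  # [60.0, 36.0, 24.0]
print([round(x, 2) for x in cea(claims, water)])           # [40.0, 40.0, 40.0]
print([round(x, 2) for x in cel(claims, water)])           # [73.33, 33.33, 13.33]
```

CEA favours small claimants (all capped at a common award) while CEL favours large claimants (all bear a common loss), which is why comparing rules against an equity criterion, as the paper proposes, matters.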
Pfaff, Alexander; Robalino, Juan; Herrera, Diego; Sandoval, Catalina
2015-01-01
Protected areas are the leading forest conservation policy for species and ecoservices goals and they may feature in climate policy if countries with tropical forest rely on familiar tools. For Brazil's Legal Amazon, we estimate the average impact of protection upon deforestation and show how protected areas’ forest impacts vary significantly with development pressure. We use matching, i.e., comparisons that are apples-to-apples in observed land characteristics, to address the fact that protected areas (PAs) tend to be located on lands facing less pressure. Correcting for that location bias lowers our estimates of PAs’ forest impacts by roughly half. Further, it reveals significant variation in PA impacts along development-related dimensions: for example, the PAs that are closer to roads and the PAs closer to cities have higher impact. Planners have multiple conservation and development goals, and are constrained by cost, yet still conservation planning should reflect what our results imply about future impacts of PAs. PMID:26225922
NASA Technical Reports Server (NTRS)
Getty, S. A.; Brinckerhoff, W. B.; Arevalo, R. D.; Floyd, M. M.; Li, X.; Cornish, T.; Ecelberger, S. A.
2012-01-01
Future landed missions to Mars will be guided by two strategic directions: (1) sample return to Earth, for comprehensive compositional analyses, as recommended by the 2011 NRC Planetary Decadal Survey; and (2) preparation for human exploration in the 2030s and beyond, as laid out by US space policy. The resultant mission architecture will likely require high-fidelity in situ chemical/organic sample analyses within an extremely constrained resource envelope. Both science goals (e.g., MEPAG Goal 1, return sample selection, etc.) and the identification of any potential toxic and biological hazards to humans must be addressed. Over the past several years of instrument development, we have found that the adaptable, compact, and highly capable technique of laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) has significant potential to contribute substantially to these dual objectives. This concept thus addresses Challenge Area 1: instrumentation and Investigation Approaches.
Guenole, Nigel
2018-01-01
The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.
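The model comparison underlying this test is the log-likelihood difference (likelihood-ratio) test: twice the improvement in log-likelihood from the constrained to the free model is referred to a chi-square distribution. For a single freed parameter the tail probability has a closed form via the error function, sketched below with hypothetical log-likelihood values (the actual analyses were run in Mplus):

```python
import math

def lrt_pvalue_df1(loglik_constrained, loglik_free):
    """Likelihood-ratio test of a constrained model nested in a free
    model that frees one parameter: 2*(llf - llc) ~ chi-square(1).
    The chi-square(1) survival function is 1 - erf(sqrt(stat/2))."""
    stat = 2.0 * (loglik_free - loglik_constrained)
    p = 1.0 - math.erf(math.sqrt(stat / 2.0))
    return stat, p

# hypothetical fitted log-likelihoods, not values from the study
stat, p = lrt_pvalue_df1(-1050.3, -1048.1)
print(round(stat, 2), round(p, 4))
```

A small p-value here would flag the freed between-level residual variance as needed, i.e., evidence of cluster bias for that item; the paper's point is that this verdict is only trustworthy when the baseline (and referent) side of the comparison is correctly specified.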
Increasing Patient Safety by Closing the Sterile Production Gap-Part 2. Implementation.
Agalloco, James P
2017-01-01
Terminal sterilization is considered the preferred means for the production of sterile drug products because it affords enhanced safety for the patient as the formulation is filled into its final container, sealed, and sterilized. Despite the obvious patient benefits, the use of terminal sterilization is artificially constrained by unreasonable expectations for the minimum time-temperature process to be used. The core misunderstanding with terminal sterilization is a fixation that destruction of a high concentration of a resistant biological indicator is required. The origin of this misconception is unclear, but it has resulted in sterilization conditions that are extremely harsh (15 min at 121 °C, or F0 > 8 min), which limit the use of terminal sterilization to extremely heat-stable formulations. These articles outline the artificial nature of the process constraints and describe a scientifically sound means to expand the use of terminal sterilization by identifying the correct process goal: the destruction of the bioburden present in the container prior to sterilization. Recognition that the true intention is bioburden destruction in routine products allows for the use of reduced conditions (lower temperatures, shorter process dwell, or both) without added patient risk. By focusing attention on the correct process target, lower time-temperature conditions can be used to expand the use of terminal sterilization to products unable to withstand the harsh conditions that have been mistakenly applied. The first article provides the background, and describes the benefits to patient, producer, and regulator. The second article includes validation and operational advice that can be used in the implementation. LAY ABSTRACT: Terminal sterilization is considered the preferred means for the production of sterile drug products because it affords enhanced safety for the patient as the formulation is filled into its final container, sealed, and sterilized.
Despite the obvious patient benefits, the use of terminal sterilization is artificially constrained by unreasonable expectations for the minimum time-temperature process to be used. These articles outline the artificial nature of the process constraints and describe a scientifically sound means to expand the use of terminal sterilization by identifying the correct process goal: the destruction of the bioburden present in the container prior to sterilization. By focusing attention on the correct process target, lower time-temperature conditions can be used to expand the use of terminal sterilization to products unable to withstand the harsh conditions that have been mistakenly applied. The first article provides the background, and describes the benefits to patient, producer, and regulator. The second article includes validation and operational advice that can be used in the implementation. © PDA, Inc. 2017.
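The F0 figure quoted above follows the standard lethality calculation: equivalent minutes at a reference temperature of 121.1 °C with z = 10 °C. A minimal sketch; the temperature profile below is hypothetical, and a real validation would use measured cold-spot data:

```python
# F0 lethality of a heat process: equivalent minutes at 121.1 degC
# with z = 10 degC (standard reference values). One temperature
# reading per minute is assumed for simplicity.
def f0(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

# A hypothetical milder cycle: ramp to 115 degC, 40 min hold, cool.
profile = [100, 110, 115] + [115] * 40 + [110, 100]
print(round(f0(profile), 1))  # 10.2
```

A 15 min hold at 121.1 °C delivers F0 = 15, well beyond the F0 > 8 min expectation; the article's point is that milder cycles like the one sketched can still deliver ample lethality against the actual bioburden, which is far less heat-resistant than a biological indicator.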
NASA Astrophysics Data System (ADS)
Pan, M.; Wood, E. F.
2004-05-01
This study explores a method to estimate various components of the water cycle (ET, runoff, land storage, etc.) based on a number of different information sources, including both observations and observation-enhanced model simulations. Unlike existing data assimilation schemes, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, reducing model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. This method may also be implemented in other applications with physical constraints (e.g., energy cycles) and at different scales.
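One standard way to make a Kalman update budget-consistent, assumed here for illustration, is to project the updated state onto the linear water-balance constraint A x = b using the covariance-weighted projection x' = x - C A^T (A C A^T)^-1 (A x - b). The state layout and numbers below are invented, not values from the VIC experiments:

```python
# Project a state estimate x with covariance C onto a single linear
# constraint A x = b (here: P - ET - R - dS = 0), so the water budget
# closes exactly after the update.
def project_onto_budget(x, C, A, b):
    n = len(x)
    Ax = sum(A[i] * x[i] for i in range(n))
    CAt = [sum(C[i][j] * A[j] for j in range(n)) for i in range(n)]
    s = sum(A[i] * CAt[i] for i in range(n))      # scalar A C A^T
    gain = [CAt[i] / s for i in range(n)]
    return [x[i] - gain[i] * (Ax - b) for i in range(n)]

# state = [precipitation, evapotranspiration, runoff, storage change]
x = [5.0, 2.1, 1.6, 1.8]                 # budget residual = -0.5
C = [[0.2, 0, 0, 0], [0, 0.5, 0, 0],     # diagonal covariance: more
     [0, 0, 0.3, 0], [0, 0, 0, 0.5]]     # uncertain terms move more
A = [1.0, -1.0, -1.0, -1.0]              # P - ET - R - dS
xc = project_onto_budget(x, C, A, 0.0)
print([round(v, 3) for v in xc])  # [5.067, 1.933, 1.5, 1.633]
```

Components with larger variance absorb more of the budget residual, so the closure correction respects the stated uncertainties while leaving the constraint satisfied exactly.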
NASA Astrophysics Data System (ADS)
Moreenthaler, George W.; Khatib, Nader; Kim, Byoungsoo
2003-08-01
For two decades now, the use of Remote Sensing/Precision Agriculture to improve farm yields while reducing the use of polluting chemicals and the limited water supply has been a major goal. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, farm efficiency must increase to meet future food requirements and to make farming a sustainable, profitable occupation. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The real goal is to increase farm profitability by identifying the additional treatments of chemicals and water that increase revenues more than they increase costs and do not exceed pollution standards (constrained optimization). Even though the economic and environmental benefits appear to be great, Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now in place, but other needed factors have been missing. Commercial satellite systems can now image the Earth (multi-spectrally) with a resolution as fine as 2.5 m. Precision variable dispensing systems using GPS are now available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been developed. Personal computers and internet access are now in place in most farm homes and can provide a mechanism for periodically disseminating advice on what quantities of water and chemicals are needed in specific regions of each field. Several processes have been selected that fuse the disparate sources of information on the current and historic states of the crop and soil, and the remaining resource levels available, with the critical decisions that farmers are required to make. These are done in a way that is easy for the farmer to understand and profitable to implement.
A "Constrained Optimization Algorithm" to further improve these processes will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental damage and decreasing applications of costly treatments. This model will incorporate information from Remote Sensing, from in-situ weather sources, from soil history, and from tacit farmer knowledge of the relative productivity of selected "Management Zones" of the farm, to provide incremental advice throughout the growing season on the optimum usage of water and chemical treatments.
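A minimal sketch of the kind of constrained optimization described: choose water and fertilizer levels to maximize profit from a yield-response model subject to a pollution cap. The response surface, prices, and runoff constraint are all invented for illustration:

```python
# Toy constrained optimization: maximize profit = revenue - input cost
# subject to a pollution (runoff) cap, via grid search. The yield
# model and every coefficient are hypothetical.
def best_treatment(price=0.3, water_cost=0.05, fert_cost=0.4,
                   pollution_cap=30.0):
    best = None
    for water in range(0, 201, 5):           # mm of irrigation
        for fert in range(0, 101, 5):        # kg/ha of fertilizer
            if 0.5 * fert > pollution_cap:   # assumed runoff constraint
                continue
            # diminishing-returns yield response (hypothetical)
            yield_kg = (2000 + 15 * water - 0.04 * water ** 2
                        + 30 * fert - 0.15 * fert ** 2)
            profit = (price * yield_kg
                      - water_cost * water - fert_cost * fert)
            if best is None or profit > best[0]:
                best = (profit, water, fert)
    return best

profit, water, fert = best_treatment()
print(water, fert)  # 185 60
```

In this toy the fertilizer optimum sits on the pollution constraint while the water optimum is interior; a field version would swap the grid search for a proper solver and fit separate response models per management zone.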
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazajian, Kevork N.
This book lays out the scientific goals to be addressed by the next-generation ground-based cosmic microwave background experiment, CMB-S4, envisioned to consist of dedicated telescopes at the South Pole, the high Chilean Atacama plateau and possibly a northern hemisphere site, all equipped with new superconducting cameras. CMB-S4 will dramatically advance cosmological studies by crossing critical thresholds in the search for the B-mode polarization signature of primordial gravitational waves, in the determination of the number and masses of the neutrinos, in the search for evidence of new light relics, in constraining the nature of dark energy, and in testing general relativity on large scales.
Continual Improvement in Shuttle Logistics
NASA Technical Reports Server (NTRS)
Flowers, Jean; Schafer, Loraine
1995-01-01
It has been said that Continual Improvement (CI) is difficult to apply to service oriented functions, especially in a government agency such as NASA. However, a constrained budget and increasing requirements are a way of life at NASA Kennedy Space Center (KSC), making it a natural environment for the application of CI tools and techniques. This paper describes how KSC, and specifically the Space Shuttle Logistics Project, a key contributor to KSC's mission, has embraced the CI management approach as a means of achieving its strategic goals and objectives. An overview of how the KSC Space Shuttle Logistics Project has structured its CI effort and examples of some of the initiatives are provided.
Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge
2015-01-01
This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments. PMID:26300800
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarlane, Karis J.
The overall goal of my Early Career research is to constrain belowground carbon turnover times for tropical forests across a broad range in moisture regimes. My group is using 14C analysis and modeling to address two major objectives: quantify age and belowground carbon turnover times across tropical forests spanning a moisture gradient from wetlands to dry forest; and identify specific areas for focused model improvement and data needs through site-specific model-data comparison and belowground carbon modeling for tropical forests.
Results of NASA/NOAA HES Trade Studies
NASA Technical Reports Server (NTRS)
Susskind, Joel
2011-01-01
This slide presentation reviews the trade studies that were done for the Hyperspectral Environmental Suite (HES). The goal of the trade studies was to minimize instrument cost and risk while producing scientifically useful products. Three vendors were selected to perform the trade study and were to conduct 11 studies, ranging from the first study, a complete wish list of capabilities that scientists would like from GEO orbit, to the 11th study, a Reduced Accommodation Sounder (RAS) that would still yield useful scientific products within constraints compatible with flight on GOES-R. The RAS designs from each vendor and one other HES sounder design are reviewed.
Agent-Based Negotiation in Uncertain Environments
NASA Astrophysics Data System (ADS)
Debenham, John; Sierra, Carles
An agent aims to secure his projected needs by attempting to build a set of (business) relationships with other agents. A relationship is built by exchanging private information, and is characterised by its intimacy — degree of closeness — and balance — degree of fairness. Each argumentative interaction between two agents then has two goals: to satisfy some immediate need, and to do so in a way that develops the relationship in a desired direction. An agent's desire to develop each relationship in a particular way then places constraints on the argumentative utterances. The form of negotiation described is argumentative interaction constrained by a desire to develop such relationships.
Exploring Black Hole Accretion in Active Galactic Nuclei with Simbol-X
NASA Astrophysics Data System (ADS)
Goosmann, R. W.; Dovčiak, M.; Mouchet, M.; Czerny, B.; Karas, V.; Gonçalves, A.
2009-05-01
A major goal of the Simbol-X mission is to improve our knowledge about black hole accretion. By opening up the X-ray window above 10 keV with unprecedented sensitivity and resolution, we obtain new constraints on the X-ray spectral and variability properties of active galactic nuclei. To interpret the future data, detailed X-ray modeling of the dynamics and radiation processes in the black hole vicinity is required. Relativistic effects must be taken into account, which then allow us to constrain the fundamental black hole parameters and the emission pattern of the accretion disk from the spectra that will be obtained with Simbol-X.
Tharakaraman, Kannan; Watanabe, Satoru; Chan, Kuan Rong; Huan, Jia; Subramanian, Vidya; Chionh, Yok Hian; Raguram, Aditya; Quinlan, Devin; McBee, Megan; Ong, Eugenia Z; Gan, Esther S; Tan, Hwee Cheng; Tyagi, Anu; Bhushan, Shashi; Lescar, Julien; Vasudevan, Subhash G; Ooi, Eng Eong; Sasisekharan, Ram
2018-05-09
Following the recent emergence of Zika virus (ZIKV), many murine and human neutralizing anti-ZIKV antibodies have been reported. Given the risk of virus escape mutants, engineering antibodies that target mutationally constrained epitopes with therapeutically relevant potencies can be valuable for combating future outbreaks. Here, we applied computational methods to engineer an antibody, ZAb_FLEP, that targets a highly networked and therefore mutationally constrained surface formed by the envelope protein dimer. ZAb_FLEP neutralized a breadth of ZIKV strains and protected mice in distinct in vivo models, including resolving vertical transmission and fetal mortality in infected pregnant mice. Serial passaging of ZIKV in the presence of ZAb_FLEP failed to generate viral escape mutants, suggesting that its epitope is indeed mutationally constrained. A single-particle cryo-EM reconstruction of the Fab-ZIKV complex validated the structural model and revealed insights into ZAb_FLEP's neutralization mechanism. ZAb_FLEP has potential as a therapeutic in future outbreaks. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Potosnak, M. J.; Beck-Winchatz, B.; Ritter, P.
2016-12-01
High-altitude balloons (HABs) are an engaging platform for citizen science and formal and informal STEM education. However, the logistics of launching, chasing and recovering a payload on a 1200 g or 1500 g balloon can be daunting for many novice school groups and citizen scientists, and the cost can be prohibitive. In addition, there are many interesting scientific applications that do not require reaching the stratosphere, including measuring atmospheric pollutants in the planetary boundary layer. With a large number of citizen scientist flights, these data can be used to constrain satellite retrieval algorithms. In this poster presentation, we discuss a novel approach based on small (30 g) balloons that are cheap and easy to handle, and low-cost tracking devices (SPOT trackers for hikers) that do not require a radio license. Our scientific goal is to measure air quality in the lower troposphere. For example, particulate matter (PM) is an air pollutant that varies on small spatial scales and has sources in rural areas like biomass burning and farming practices such as tilling. Our HAB platform test flight incorporates an optical PM sensor, an integrated single board computer that records the PM sensor signal in addition to flight parameters (pressure, location and altitude), and a low-cost tracking system. Our goal is for the entire platform to cost less than $500. While the datasets generated by these flights are typically small, integrating a network of flight data from citizen scientists into a form usable for comparison to satellite data will require big data techniques.
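Since the platform records pressure among its flight parameters, altitude can be cross-checked from pressure alone. A minimal sketch, assuming the standard-atmosphere barometric formula (the constants below are the usual ISA values, not numbers from this abstract):

```python
def pressure_to_altitude(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Approximate altitude (m) from pressure (hPa) via the international
    standard atmosphere; adequate for lower-troposphere balloon flights."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# A reading of 1000 hPa corresponds to roughly 110 m above sea level.
altitude_m = pressure_to_altitude(1000.0)
```

In practice the sea-level reference pressure varies with weather, so a launch-site calibration reading would replace the default.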
Yu, Chen; Smith, Linda B.
2013-01-01
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems, either.
NASA Astrophysics Data System (ADS)
Santello, Marco
2015-03-01
The concept of synergy, denoting the coordination of multiple elements working together toward a common goal, has been extensively studied to understand how the central nervous system (CNS) controls movement (for review see [5,9]). Although this definition is appealing in its simplicity, 'multiple elements', 'working together', and 'common goal' each take different meanings depending on the scale at which a given sensorimotor system is studied, whether the 'working together' is defined in spatial and/or temporal domains, and the hypothesized synergy's 'common goal'. For example, the elements involved in a synergy can be defined as single motor units, muscles, or joints. Similarly, the goal of a synergy may be defined as a means available to the CNS to 'simplify' the control of multiple elements, or to minimize a given cost function or movement feature - all of which may differ across tasks and task conditions. These considerations underscore the fact that a universally accepted definition of synergies and their functional role remains to be established (for review see [6]). Thus, the nature and functional role(s) of synergies are still debated in the literature. Nevertheless, it is generally agreed that the reduction in the number of independent degrees of freedom that is manifested through synergies emerges from the interaction of biomechanical and neural factors constraining the spatial and temporal coordination of multiple muscles.
Building a functional multiple intelligences theory to advance educational neuroscience
Cerruti, Carlo
2013-01-01
A key goal of educational neuroscience is to conduct constrained experimental research that is theory-driven and yet also clearly related to educators’ complex set of questions and concerns. However, the fields of education, cognitive psychology, and neuroscience use different levels of description to characterize human ability. An important advance in research in educational neuroscience would be the identification of a cognitive and neurocognitive framework at a level of description relatively intuitive to educators. I argue that the theory of multiple intelligences (MI; Gardner, 1983), a conception of the mind that motivated a past generation of teachers, may provide such an opportunity. I criticize MI for doing little to clarify for teachers a core misunderstanding, specifically that MI was only an anatomical map of the mind but not a functional theory that detailed how the mind actually processes information. In an attempt to build a “functional MI” theory, I integrate into MI basic principles of cognitive and neural functioning, namely interregional neural facilitation and inhibition. In so doing I hope to forge a path toward constrained experimental research that bears upon teachers’ concerns about teaching and learning. PMID:24391613
Rosa, William
Over the past several years, holistic nursing education has become more readily available to nurses working in high-income nations, and holistic practice has become better defined and promoted through countless organizational and governmental initiatives. However, global nursing community members, particularly those serving in low- and middle-income countries (LMICs) within resource-constrained health care systems, may not find holistic nursing easily accessible or applicable to practice. The purpose of this article is to assess the readiness of nursing sectors within these resource-constrained settings to access, understand, and apply holistic nursing principles and practices within the context of cultural norms, diverse definitions of the nursing role, and the current status of health care in these countries. The history, current status, and projected national goals of professional nursing in Rwanda is used as an exemplar to forward the discussion regarding the readiness of nurses to adopt holistic education into practice in LMICs. A background of holistic nursing practice in the United States is provided to illustrate the multifaceted aspects of support necessary in order that such a specialty continues to evolve and thrive within health care arenas and the communities it cares for.
An open-source model and solution method to predict co-contraction in the finger.
MacIntosh, Alexander R; Keir, Peter J
2017-10-01
A novel open-source biomechanical model of the index finger and an electromyography (EMG)-constrained static optimization solution method are developed with the goal of improving co-contraction estimates and providing a means to assess tendon tension distribution through the finger. The Intrinsic model has four degrees of freedom and seven muscles (with a 14-component extensor mechanism). A novel plugin developed for the OpenSim modelling software applied the EMG-constrained static optimization solution method. Ten participants performed static pressing in three finger postures and five dynamic free motion tasks. Index finger 3D kinematics, force (5, 15, 30 N), and EMG (4 extrinsic muscles and first dorsal interosseous) were used in the analysis. The Intrinsic model predicted a 29% increase in co-contraction during static pressing over the existing model. Further, tendon tension distribution patterns and forces, known to be essential to produce finger action, were determined by the model across all postures. The Intrinsic model and custom solution method improved co-contraction estimates to facilitate force propagation through the finger. These tools improve our interpretation of loads in the finger to develop better rehabilitation and workplace injury risk reduction strategies.
Ménard, Lucie; Aubin, Jérôme; Thibeault, Mélanie; Richard, Gabrielle
2012-01-01
The goal of this paper is to assess the validity of various metrics developed to characterize tongue shapes and positions collected through ultrasound imaging in experimental setups where the probe is not constrained relative to the subject's head. Midsagittal contours were generated using an articulatory-acoustic model of the vocal tract. Sections of the tongue were extracted to simulate ultrasound imaging. Various transformations were applied to the tongue contours in order to simulate ultrasound probe displacements: vertical displacement, horizontal displacement, and rotation. The proposed data analysis method reshapes tongue contours into triangles and then extracts measures of angles, x and y coordinates of the highest point of the tongue, curvature degree, and curvature position. Parameters related to the absolute tongue position (tongue height and front/back position) are more sensitive to horizontal and vertical displacements of the probe, whereas parameters related to tongue curvature are less sensitive to such displacements. Because of their robustness to probe displacements, parameters related to tongue shape (especially curvature) are particularly well suited to cases where the transducer is not constrained relative to the head (studies with clinical populations or children). Copyright © 2011 S. Karger AG, Basel.
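To make the robustness claim concrete, a triangle-style reduction of a contour can be sketched as below. This is a hypothetical simplification, not the paper's exact procedure: curvature degree is taken as apex height over chord length, and curvature position as the apex's normalized index along the contour.

```python
import numpy as np

def triangle_metrics(contour):
    """Reduce a midsagittal tongue contour (N x 2 array of x, y points) to
    triangle-based shape metrics. Illustrative simplification: curvature
    degree is apex height over chord length; curvature position is the
    apex's normalized position along the contour."""
    contour = np.asarray(contour, dtype=float)
    p0, p1 = contour[0], contour[-1]
    chord = p1 - p0
    length = np.linalg.norm(chord)
    # unit normal to the chord; distances from the chord capture shape only,
    # so uniform probe translations leave them unchanged
    normal = np.array([-chord[1], chord[0]]) / length
    d = (contour - p0) @ normal
    apex = int(np.argmax(np.abs(d)))
    curvature_degree = abs(d[apex]) / length
    curvature_position = apex / (len(contour) - 1)
    return curvature_degree, curvature_position
```

Because both metrics depend only on distances measured relative to the chord, uniform probe translations leave them unchanged, unlike raw tongue-height coordinates.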
Empirical constrained Bayes predictors accounting for non-detects among repeated measures.
Moore, Reneé H; Lyles, Robert H; Manatunga, Amita K
2010-11-10
When the prediction of subject-specific random effects is of interest, constrained Bayes predictors (CB) have been shown to reduce the shrinkage of the widely accepted Bayes predictor while still maintaining desirable properties, such as optimizing mean-square error subsequent to matching the first two moments of the random effects of interest. However, occupational exposure and other epidemiologic (e.g. HIV) studies often present a further challenge because data may fall below the measuring instrument's limit of detection. Although methodology exists in the literature to compute Bayes estimates in the presence of non-detects (Bayes(ND)), CB methodology has not been proposed in this setting. By combining methodologies for computing CBs and Bayes(ND), we introduce two novel CBs that accommodate an arbitrary number of observable and non-detectable measurements per subject. Based on application to real data sets (e.g. occupational exposure, HIV RNA) and simulation studies, these CB predictors are markedly superior to the Bayes predictor and to alternative predictors computed using ad hoc methods in terms of meeting the goal of matching the first two moments of the true random effects distribution. Copyright © 2010 John Wiley & Sons, Ltd.
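The moment-matching step at the heart of a constrained Bayes predictor can be sketched without the non-detect machinery. The snippet below is a simplified Ghosh-style adjustment under the assumption of a known target variance; the predictors proposed in the paper additionally accommodate censored (non-detectable) measurements.

```python
import numpy as np

def constrained_bayes(bayes_preds, target_var):
    """Re-expand shrunken Bayes predictions so their sample mean and variance
    match the target first two moments of the random-effects distribution.
    Simplified sketch: no non-detects, and the target variance is taken as known."""
    b = np.asarray(bayes_preds, dtype=float)
    m = b.mean()
    s2 = b.var()                       # variance of the shrunken predictions
    a = np.sqrt(target_var / s2) if s2 > 0 else 1.0
    return m + a * (b - m)
```

Because Bayes predictions are shrunken toward the overall mean, their sample variance understates the random-effects variance; the factor `a` re-expands them so the first two moments match.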
Residual stress at fluid interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, P.E.
We extend the Navier-Stokes equations to allow for residual stress in Newtonian fluids. A fluid, which undergoes a constrained volume change, will have residual stress. Corresponding to every constrained volume change is an eigenstrain. We present a method to include in the equations of fluid motion the eigenstrain that is a result of the presence in a fluid of a soluble chemical species. This method is used to calculate the residual stress associated with a chemical transformation. 9 refs., 1 fig.
Issues of maternal health in Pakistan: trends towards millennium development goal 5.
Malik, Muhammd Faraz Arshad; Kayani, Mahmood Akhtar
2014-06-01
Pakistan has the third highest burden of maternal and child mortality across the globe. This grim situation is further intensified by flaws in planning and implementation in the health sector. Natural calamities (earthquakes, floods), disease outbreaks, and lack of awareness in different regions of the country further aggravate the situation. Despite all these limitations, under the banner of the Millennium Development Goals (MDGs), special focus and progress in addressing maternal health (set as Goal 5) have been made over the last decade. In this review, improvements and shortfalls pertaining to Goal 5, Improve maternal health, are analyzed relative to earlier years. A decline in the maternal mortality ratio (MMR) is observed (from 490 maternal deaths per 100,000 women in 1990 to 260 in 2010). The target of reducing MMR by three quarters was not achieved, but a decline from a very high to a high mortality index was observed. Increased usage of contraceptives (with the contraceptive prevalence rate rising from 11.8 in 1990 to 37 in 2013) also sheds light on women's awareness of their health and social issues. Based on progress-level assessment (WHO guidelines), the access of Pakistani women to universal reproductive health services fell in the moderate category in 2010, compared with low access in 1990. The data indicate that considerable effort is still required to achieve the stated targets. However, keeping in view all the challenges Pakistan faced during this period, such as volatile peace, regional political instability, policy implementation constraints, and population growth, this slow but progressive trend highlights a national resilience to address the formidable challenge of maternal health. These understandings and sustained efforts will contribute significantly to the best possible accomplishment of Millennium Development Goal 5 by 2015.
NASA Astrophysics Data System (ADS)
Reading, A. M.; Staal, T.; Halpin, J.; Whittaker, J. M.; Morse, P. E.
2017-12-01
The lithosphere of East Antarctica is one of the least explored regions of the planet, yet it is gaining in importance in global scientific research. Continental heat flux density and 3D glacial isostatic adjustment studies, for example, rely on a good knowledge of the deep structure in constraining model inputs. In this contribution, we use a multidisciplinary approach to constrain lithospheric domains. To seismic tomography models, we add constraints from magnetic studies and also new geological constraints. Geological knowledge exists around the periphery of East Antarctica and is reinforced in the knowledge of plate tectonic reconstructions. The subglacial geology of the Antarctic hinterland is largely unknown, but the plate reconstructions allow the well-posed extrapolation of major terranes into the interior of the continent, guided by the seismic tomography and magnetic images. We find that the northern boundary of the lithospheric domain centred on the Gamburtsev Subglacial Mountains has a possible trend that runs south of the Lambert Glacier region, turning coastward through Wilkes Land. Other periphery-to-interior connections are less well constrained, and the possibility of lithospheric domains that are entirely sub-glacial is high. We develop this framework to include a probabilistic method of handling alternate models and quantifiable uncertainties. We also show first results in using a Bayesian approach to predicting lithospheric boundaries from multivariate data. Within the newly constrained domains, we constrain heat flux density as the sum of basal heat flux and upper crustal heat flux. The basal heat flux is constrained by geophysical methods, while the upper crustal heat flux is constrained by geology or predicted geology. In addition to heat flux constraints, we also consider the variations in friction experienced by moving ice sheets due to varying geology.
Goudarz Mehdikhani, Kaveh; Morales Moreno, Beatriz; Reid, Jeremy J; de Paz Nieves, Ana; Lee, Yuo-Yu; González Della Valle, Alejandro
2016-07-01
We studied the need to use a constrained insert for residual intraoperative instability and the 1-year results of patients undergoing total knee arthroplasty (TKA) for a varus deformity. In a control group, a "classic" subperiosteal release of the medial soft tissue sleeve was performed as popularized by pioneers of TKA. In the study group, an algorithmic approach that selectively releases and pie-crusts posteromedial structures in extension and anteromedial structures in flexion was used. All surgeries were performed by a single surgeon using a measured resection technique and posterior-stabilized, cemented implants. There were 228 TKAs in the control group and 188 in the study group. Outcome variables included the use of a constrained insert and the Knee Society Score at 6 weeks, 4 months, and 1 year postoperatively. The effect of the release technique on the use of constrained inserts and on clinical outcomes was analyzed in a multivariate model controlling for age, sex, body mass index, and severity of deformity. The use of constrained inserts was significantly lower in study than in control patients (8% vs 18%; P = .002). There was no difference in the Knee Society Score and range of motion between the groups at last follow-up. No patient developed postoperative medial instability. This algorithmic, pie-crusting release technique resulted in a significant reduction in the use of constrained inserts with no detrimental effects on clinical results, joint function, and stability. As constrained TKA implants are more costly than nonconstrained ones, if the adopted technique proves to be safe in the long term, it may cause a positive shift in value for hospitals and cost savings in the health care system. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Quan, Lulin; Yang, Zhixin
2010-05-01
To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the level of difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape-histogram-based method, the skeleton-based method, and the U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The shape-histogram-based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, emphasizing the role of constraints during assessment. Limited initial experiments demonstrate that our algorithm outperforms the other three. A clear direction for future development is drawn at the end of the paper.
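For illustration, a minimal shape-histogram descriptor with a histogram-intersection similarity might look as follows; the binning scheme, the surface-constraint term, and the adaptive weighting of the proposed method are not reproduced here, so treat the specific choices as assumptions.

```python
import numpy as np

def shape_histogram(points, bins=16):
    """Normalized histogram of vertex distances from the centroid: a simple
    shape-histogram descriptor (sketch; the paper's variant may differ)."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    r = r / r.max()                      # scale invariance
    hist, _ = np.histogram(r, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical shapes, smaller otherwise."""
    return float(np.minimum(h1, h2).sum())
```

Normalizing distances by their maximum makes the descriptor invariant to uniform scaling and translation, which is why two copies of the same model at different sizes score near 1.0.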
Deep resistivity structure of Yucca Flat, Nevada Test Site, Nevada
Asch, Theodore H.; Rodriguez, Brian D.; Sampson, Jay A.; Wallin, Erin L.; Williams, Jackie M.
2006-01-01
The Department of Energy (DOE) and the National Nuclear Security Administration (NNSA), through their Nevada Site Office, are addressing groundwater contamination resulting from historical underground nuclear testing through the Environmental Management program and, in particular, the Underground Test Area project. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on groundwater flow in the area adjacent to a nuclear test. Groundwater modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey, supported by the DOE and NNSA-NSO, collected and processed data from 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations at the Nevada Test Site in and near Yucca Flat to assist in characterizing the pre-Tertiary geology in that area. The primary purpose was to refine the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (Late Devonian to Mississippian-age siliciclastic rocks assigned to the Eleana Formation and Chainman Shale) in the Yucca Flat area. The MT and AMT data have been released in separate USGS Open-File Reports. The Nevada Test Site magnetotelluric data interpretation presented in this report includes the results of detailed two-dimensional (2-D) resistivity modeling for each profile (including alternative interpretations) and gross inferences on the three-dimensional (3-D) character of the geology beneath each station. The character, thickness, and lateral extent of the Chainman Shale and Eleana Formation that comprise the Upper Clastic Confining Unit are generally well determined in the upper 5 km. Inferences can be made regarding the presence of the Lower Clastic Confining Unit at depths below 5 km.
Large fault structures such as the CP Thrust fault, the Carpetbag fault, and the Yucca fault that cross Yucca Flat are also discernible, as are other smaller faults. The subsurface electrical resistivity distribution and inferred geologic structures determined by this investigation should help constrain the hydrostratigraphic framework model that is under development.
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics designs were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from −55°C to +5°C.
An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
Challenges in Soft Computing: Case Study with Louisville MSD CSO Modeling
NASA Astrophysics Data System (ADS)
Ormsbee, L.; Tufail, M.
2005-12-01
The principal constituents of soft computing include fuzzy logic, neural computing, evolutionary computation, machine learning, and probabilistic reasoning. There are numerous applications of these constituents (both individually and in combinations of two or more) in the area of water resources and environmental systems. These range from the development of data-driven models to optimal control strategies that assist in a more informed and intelligent decision-making process. The availability of data is critical to such applications, and having scarce data may lead to models that do not represent the response function over the entire domain. At the same time, too much data has a tendency to lead to over-constraining of the problem. This paper will describe the application of a subset of these soft computing techniques (neural computing and genetic algorithms) to the Beargrass Creek watershed in Louisville, Kentucky. The applications include the development of inductive models as substitutes for more complex process-based models to predict the water quality of key constituents (such as dissolved oxygen) and their use in an optimization framework for optimal load reductions. Such a process will facilitate the development of total maximum daily loads for the impaired water bodies in the watershed. Some of the challenges faced in this application include 1) uncertainty in data sets, 2) model application, and 3) development of cause-and-effect relationships between water quality constituents and watershed parameters through the use of inductive models. The paper will discuss these challenges and how they affect the desired goals of the project.
Astrophysical Model Selection in Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
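The hierarchical idea can be illustrated with a deliberately simplified Gaussian toy model: each of ~5000 "binaries" has a parameter drawn from a population distribution, each is observed with noise, and a grid search over the hyperparameter posterior stands in for the full hierarchical MCMC over correlated source parameters. All numbers are invented for illustration.

```python
import random

random.seed(0)
M_TRUE, TAU, SIGMA, N = 0.45, 0.05, 0.10, 5000  # hypothetical population model

# Simulate a catalog: each source's parameter is drawn from the population,
# then observed with measurement noise.
masses = [random.gauss(M_TRUE, TAU) for _ in range(N)]
data = [m + random.gauss(0, SIGMA) for m in masses]

# With a Gaussian population and Gaussian noise, each datum is marginally
# N(M, TAU^2 + SIGMA^2), so the individual source parameters can be
# marginalized analytically; a grid posterior over the hypermean M stands
# in for the joint source-plus-population sampling used in practice.
s2 = TAU**2 + SIGMA**2
def log_post(M):  # flat prior on M
    return -sum((d - M) ** 2 for d in data) / (2 * s2)

grid = [0.30 + 0.0005 * i for i in range(601)]  # M in [0.30, 0.60]
M_map = max(grid, key=log_post)
print(M_map)
```

Even this toy version shows the abstract's scaling: with ~5000 sources the hyperparameter is pinned down to a fraction of a percent, since the posterior width shrinks as the square root of the catalog size.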
Davidowitz, Goggy; Roff, Derek; Nijhout, H Frederik
2016-11-01
Natural selection acts on multiple traits simultaneously. How mechanisms underlying such traits enable or constrain their response to simultaneous selection is poorly understood. We show how antagonism and synergism among three traits at the developmental level enable or constrain evolutionary change in response to simultaneous selection on two focal traits at the phenotypic level. After 10 generations of 25% simultaneous directional selection on all four combinations of body size and development time in Manduca sexta (Sphingidae), the changes in the three developmental traits predict 93% of the response of development time and 100% of the response of body size. When the two focal traits were under synergistic selection, the response to simultaneous selection was enabled by juvenile hormone and ecdysteroids and constrained by growth rate. When the two focal traits were under antagonistic selection, the response to selection was due primarily to change in growth rate and constrained by the two hormonal traits. The approach used here reduces the complexity of the developmental and endocrine mechanisms to three proxy traits. This generates explicit predictions for the evolutionary response to selection that are based on biologically informed mechanisms. This approach has broad applicability to a diverse range of taxa, including algae, plants, amphibians, mammals, and insects.
High Angular Resolution Imaging of Solar Radio Bursts from the Lunar Surface
NASA Technical Reports Server (NTRS)
MacDowall, Robert J.; Lazio, Joseph; Bale, Stuart; Burns, Jack O.; Farrell, William M.; Gopalswamy, Nat; Jones, Dayton L.; Kasper, Justin Christophe; Weiler, Kurt
2012-01-01
Locating low frequency radio observatories on the lunar surface has a number of advantages, including positional stability and a very low ionospheric radio cutoff. Here, we describe the Radio Observatory on the lunar Surface for Solar studies (ROLSS), a concept for a low frequency, radio imaging interferometric array designed to study particle acceleration in the corona and inner heliosphere. ROLSS would be deployed during an early lunar sortie or by a robotic rover as part of an unmanned landing. The preferred site is on the lunar near side to simplify the data downlink to Earth. The prime science mission is to image type II and type III solar radio bursts with the aim of determining the sites at which, and the mechanisms by which, the radiating particles are accelerated. Secondary science goals include constraining the density of the lunar ionosphere by measuring the low radio frequency cutoff of the solar radio emissions or background galactic radio emission, measuring the flux, particle mass, and arrival direction of interplanetary and interstellar dust, and constraining the low energy electron population in astrophysical sources. Furthermore, ROLSS serves a pathfinder function for larger lunar radio arrays. Key design requirements on ROLSS include the operational frequency and angular resolution. The electron densities in the solar corona and inner heliosphere are such that the relevant emission occurs below 10 MHz, essentially unobservable from Earth's surface due to the terrestrial ionospheric cutoff. Resolving the potential sites of particle acceleration requires an instrument with an angular resolution of at least 2 deg at 10 MHz, equivalent to a linear array size of approximately one kilometer. The major components of the ROLSS array are 3 antenna arms, each of 500 m length, arranged in a Y formation, with a central electronics package (CEP) at their intersection.
Each antenna arm is a linear strip of polyimide film (e.g., Kapton(TradeMark)) on which 16 single polarization dipole antennas are located by depositing a conductor (e.g., silver). The arms also contain transmission lines for carrying the radio signals from the science antennas to the CEP. Operations would consist of data acquisition during the lunar day, with data downlinks to Earth one or more times every 24 hours.
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
A robust approach to chance constrained optimal power flow with renewable generation
Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.
2016-09-01
Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to lie within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
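The robust chance-constraint idea can be sketched on a single illustrative constraint (this is not the paper's network-scale formulation). Under a Gaussian error model, P(g + w ≤ cap) ≥ 1 − ε tightens to the deterministic limit g ≤ cap − z₁₋ε·σ; the robust version hedges against σ being known only up to an uncertainty set by using its worst-case value. All capacities and sigmas below are made up.

```python
from statistics import NormalDist

# Hypothetical single-line example: schedule generation g so that
# P(g + wind_error <= line_capacity) >= 1 - eps, where the forecast
# error is N(0, sigma^2) but sigma is only known to lie in a set.
cap, eps = 100.0, 0.05
z = NormalDist().inv_cdf(1 - eps)        # ~1.645 for eps = 0.05

sigma_nominal = 8.0                      # point estimate of error std. dev.
sigma_set = (6.0, 12.0)                  # uncertainty set for sigma

g_cc  = cap - z * sigma_nominal          # chance-constrained dispatch limit
g_rcc = cap - z * max(sigma_set)         # robust CC: hedge against worst sigma

print(round(g_cc, 2), round(g_rcc, 2))
```

The robust limit is more conservative than the nominal chance-constrained one, which is the trade-off the paper quantifies at network scale; the cutting-plane algorithm handles the general (non-interval) uncertainty sets for which no such closed form exists.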
Targeting kinase signaling pathways with constrained peptide scaffolds
Hanold, Laura E.; Fulton, Melody D.; Kennedy, Eileen J.
2017-01-01
Kinases are amongst the largest families in the human proteome and serve as critical mediators of a myriad of cell signaling pathways. Since altered kinase activity is implicated in a variety of pathological diseases, kinases have become a prominent class of proteins for targeted inhibition. Although numerous small molecule and antibody-based inhibitors have already received clinical approval, several challenges may still exist with these strategies including resistance, target selection, inhibitor potency and in vivo activity profiles. Constrained peptide inhibitors have emerged as an alternative strategy for kinase inhibition. Distinct from small molecule inhibitors, peptides can provide a large binding surface area that allows them to bind shallow protein surfaces rather than defined pockets within the target protein structure. By including chemical constraints within the peptide sequence, additional benefits can be bestowed onto the peptide scaffold such as improved target affinity and target selectivity, cell permeability and proteolytic resistance. In this review, we highlight examples of diverse chemistries that are being employed to constrain kinase-targeting peptide scaffolds and highlight their application to modulate kinase signaling as well as their potential clinical implications. PMID:28185915
Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring
NASA Technical Reports Server (NTRS)
Padovan, Joe; Kwang, Abel
1994-01-01
This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential as well as partially and fully parallel environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort, due both to updating and inversion.
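A much-simplified stand-in for the partitioned solve is a block nonlinear Gauss-Seidel iteration: variables are split into substructures, each solved by its own local Newton iteration while the others are frozen, with an outer loop coordinating the partitions. The two-variable system below is purely illustrative and does not reproduce the paper's automated substructuring or multilevel hierarchy.

```python
# Toy substructured nonlinear solve for the system
#   x^2 + y - 2 = 0,  x + y^2 - 2 = 0   (solution x = y = 1).

def newton_1d(f, df, x0, iters=20):
    """Local Newton iteration for one substructure's scalar equation."""
    x = x0
    for _ in range(iters):
        x -= f(x) / df(x)
    return x

def solve(x=0.5, y=0.5, sweeps=30):
    """Outer coordination loop: sweep over substructures until converged."""
    for _ in range(sweeps):
        # substructure 1: solve x^2 + y - 2 = 0 for x, with y frozen
        x = newton_1d(lambda v: v * v + y - 2, lambda v: 2 * v, x)
        # substructure 2: solve x + y^2 - 2 = 0 for y, with x frozen
        y = newton_1d(lambda v: x + v * v - 2, lambda v: 2 * v, y)
    return x, y

x, y = solve()
print(x, y)  # both approach 1.0
```

In a parallel setting the substructure solves within a sweep are the independently assignable units of work; the paper's contribution is automating how those partitions are chosen and balanced across levels.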
Microwave-Assisted Ignition for Improved Internal Combustion Engine Efficiency
NASA Astrophysics Data System (ADS)
DeFilippo, Anthony Cesar
The ever-present need for reducing greenhouse gas emissions associated with transportation motivates this investigation of a novel ignition technology for internal combustion engine applications. Advanced engines can achieve higher efficiencies and reduced emissions by operating in regimes with diluted fuel-air mixtures and higher compression ratios, but the range of stable engine operation is constrained by combustion initiation and flame propagation when dilution levels are high. An advanced ignition technology that reliably extends the operating range of internal combustion engines will aid practical implementation of the next generation of high-efficiency engines. This dissertation contributes to next-generation ignition technology advancement by experimentally analyzing a prototype technology as well as developing a numerical model for the chemical processes governing microwave-assisted ignition. The microwave-assisted spark plug under development by Imagineering, Inc. of Japan has previously been shown to expand the stable operating range of gasoline-fueled engines through plasma-assisted combustion, but the factors limiting its operation were not well characterized. The present experimental study has two main goals. The first goal is to investigate the capability of the microwave-assisted spark plug towards expanding the stable operating range of wet-ethanol-fueled engines. The stability range is investigated by examining the coefficient of variation of indicated mean effective pressure as a metric for instability, and indicated specific ethanol consumption as a metric for efficiency. The second goal is to examine the factors affecting the extent to which microwaves enhance ignition processes. The factors impacting microwave enhancement of ignition processes are individually examined, using flame development behavior as a key metric in determining microwave effectiveness. 
Further development of practical combustion applications implementing microwave-assisted spark technology will benefit from predictive models which include the plasma processes governing the observed combustion enhancement. This dissertation documents the development of a chemical kinetic mechanism for the plasma-assisted combustion processes relevant to microwave-assisted spark ignition. The mechanism includes an existing mechanism for gas-phase methane oxidation, supplemented with electron impact reactions, cation and anion chemical reactions, and reactions involving vibrationally-excited and electronically-excited species. Calculations using the presently-developed numerical model explain experimentally-observed trends, highlighting the relative importance of pressure, temperature, and mixture composition in determining the effectiveness of microwave-assisted ignition enhancement.
The VIRUS data reduction pipeline
NASA Astrophysics Data System (ADS)
Goessl, Claus A.; Drory, Niv; Relke, Helena; Gebhardt, Karl; Grupp, Frank; Hill, Gary; Hopp, Ulrich; Köhler, Ralf; MacQueen, Phillip
2006-06-01
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the Cosmic Microwave Background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg2 field and a redshift bin of 1.8 < z < 3.7. This will be achieved by VIRUS, a wide field, low resolution, 145-IFU spectrograph. The data reduction pipeline will have to extract ~35,000 spectra per exposure (~5 million per night, i.e. 500 million in total), perform an astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra fully automatically. We will describe our ideas on how to achieve this goal.
The Large Benefits of Small-Satellite Missions
NASA Astrophysics Data System (ADS)
Baker, Daniel N.; Worden, S. Pete
2008-08-01
Small-spacecraft missions play a key and compelling role in space-based scientific and engineering programs [Moretto and Robinson, 2008]. Compared with larger satellites, which can be in excess of 2000 kilograms, small satellites range from 750 kilograms (roughly the size of a golf cart) down to less than 1 kilogram (about the size of a softball). They have been responsible for greatly reducing the time needed to obtain science and technology results. The shorter development times for smaller missions can reduce overall costs and can thus provide welcome budgetary options for highly constrained space programs. In many cases, we contend that 80% (or more) of program goals can be achieved for 20% of the cost by using small-spacecraft solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
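The Chebyshev-based back-off can be sketched on one illustrative voltage constraint. If only the mean and variance of the forecast error are trusted, the one-sided Chebyshev (Cantelli) inequality P(w − μ ≥ kσ) ≤ 1/(1 + k²) yields a distribution-free tightening with k = √((1 − ε)/ε); the Gaussian quantile is shown for contrast. The per-unit numbers are hypothetical, not from the paper's network.

```python
import math
from statistics import NormalDist

# Illustrative constraint: v + w <= v_max must hold with probability
# >= 1 - eps, where only the mean mu and std. dev. sigma of the forecast
# error w are known (estimated online from data in the paper's scheme).
v_max, eps = 1.05, 0.05        # per-unit voltage cap, violation probability
mu, sigma = 0.0, 0.01          # hypothetical forecast-error statistics

k_dr = math.sqrt((1 - eps) / eps)      # Cantelli bound: distribution-free
k_g  = NormalDist().inv_cdf(1 - eps)   # Gaussian quantile, for comparison

v_limit_dr = v_max - mu - k_dr * sigma   # distributionally robust limit
v_limit_g  = v_max - mu - k_g * sigma    # Gaussian-assumption limit

print(round(v_limit_dr, 4), round(v_limit_g, 4))
```

The distribution-free back-off (k ≈ 4.36 at ε = 0.05) is markedly more conservative than the Gaussian one (k ≈ 1.64), which is the price paid for robustness to the unknown error distribution.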
Du, Feng; Abrams, Richard A
2012-09-01
To avoid sensory overload, people are able to selectively attend to a particular color or direction of motion while ignoring irrelevant stimuli that differ from the desired one. We show here for the first time that it is also possible to selectively attend to a specific line orientation-but with an important caveat: orientations that are perpendicular to the target orientation cannot be suppressed. This effect reflects properties of the neural mechanisms selective for orientation and reveals the extent to which contingent capture is constrained not only by one's top-down goals but also by feature preferences of visual neurons. Copyright © 2012 Elsevier B.V. All rights reserved.
Petrologic and Chemical Characterization of a Suite of Antarctic Diogenites
NASA Technical Reports Server (NTRS)
Mittlefehldt, D. W.; Mertzman, S. A.; Peng, Z. X.; Mertzman, K. R.
2013-01-01
The origin of diogenites, ultramafic cumulates related to eucrites, is an unresolved problem [1]. Most diogenites are orthopyroxenites, a few are harzburgites [2], and some are transitional to cumulate eucrites [1, 3]. Cumulate eucrites are gabbros formed by crystal fractionation from basaltic eucrites [4]. The consensus view is that basaltic eucrites are residual melts from global-magma-ocean crystallization on their parent asteroid [4] which is plausibly Vesta [5]. However, the petrologic and compositional characteristics of diogenites seem to preclude a magma ocean origin [1, 4]. We are doing a petrologic and chemical study of new or unusual diogenites with the ultimate goals of constraining their genesis, and the geologic evolution of Vesta.
TLNS3D/CDISC Multipoint Design of the TCA Concept
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Mann, Michael J.
1999-01-01
This paper presents the work done to date by the authors on developing an efficient approach to multipoint design and applying it to the design of the HSR TCA (High Speed Research Technology Concept Aircraft) configuration. While the title indicates that this exploratory study has been performed using the TLNS3DMB flow solver and the CDISC (Constrained Direct Iterative Surface Curvature) design method, the CDISC method could have been used with any flow solver, and the multipoint design approach does not require the use of CDISC. The goal of the study was to develop a multipoint design method that could achieve a design in about the same time as 10 analysis runs.
Optical study of the DAFT/FADA galaxy cluster survey
NASA Astrophysics Data System (ADS)
Martinet, N.; Durret, F.; Clowe, D.; Adami, C.
2013-11-01
DAFT/FADA (Dark energy American French Team) is a large survey of ˜90 high redshift (0.4
An Ethical Governor for Constraining Lethal Action in an Autonomous System
2009-01-01
Civilian property is prohibited from being attacked, including buildings dedicated to religion, art, science, charitable purposes, and historic monuments.
2013-01-01
Background India currently has more than 60 million people with Type 2 Diabetes Mellitus (T2DM) and this is predicted to increase by nearly two-thirds by 2030. While management of those with T2DM is important, preventing or delaying the onset of the disease, especially in those individuals at ‘high risk’ of developing T2DM, is urgently needed, particularly in resource-constrained settings. This paper describes the protocol for a cluster randomised controlled trial of a peer-led lifestyle intervention program to prevent diabetes in Kerala, India. Methods/design A total of 60 polling booths are randomised to the intervention arm or control arm in rural Kerala, India. Data collection is conducted in two steps. Step 1 (Home screening): Participants aged 30–60 years are administered a screening questionnaire. Those having no history of T2DM and other chronic illnesses with an Indian Diabetes Risk Score value of ≥60 are invited to attend a mobile clinic (Step 2). At the mobile clinic, participants complete questionnaires, undergo physical measurements, and provide blood samples for biochemical analysis. Participants identified with T2DM at Step 2 are excluded from further study participation. Participants in the control arm are provided with a health education booklet containing information on symptoms, complications, and risk factors of T2DM with the recommended levels for primary prevention. 
Participants in the intervention arm receive: (1) eleven peer-led small group sessions to motivate, guide and support in planning, initiation and maintenance of lifestyle changes; (2) two diabetes prevention education sessions led by experts to raise awareness on T2DM risk factors, prevention and management; (3) a participant handbook containing information primarily on peer support and its role in assisting with lifestyle modification; (4) a participant workbook to guide self-monitoring of lifestyle behaviours, goal setting and goal review; (5) the health education booklet that is given to the control arm. Follow-up assessments are conducted at 12 and 24 months. The primary outcome is incidence of T2DM. Secondary outcomes include behavioural, psychosocial, clinical, and biochemical measures. An economic evaluation is planned. Discussion Results from this trial will contribute to improved policy and practice regarding lifestyle intervention programs to prevent diabetes in India and other resource-constrained settings. Trial registration Australia and New Zealand Clinical Trials Registry: ACTRN12611000262909. PMID:24180316
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabita, F. Robert
2013-07-30
In this study, the Principal Investigator, F.R. Tabita, has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved, and this will allow for the potential utilization of waste materials as a feed stock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell, and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.
Why Nuclear Forensics Needs New Plasma Chemistry Data
NASA Astrophysics Data System (ADS)
Rose, T.; Armstrong, M.; Chernov, A.; Crowhurst, J.; Dai, Z.; Knight, K.; Koroglu, B.; Radousky, H.; Stavrou, E.; Weisz, D.; Zaug, J.; Azer, M.; Finko, M.; Curreli, D.
2016-10-01
The mechanisms that control the distribution of radionuclides in fallout after a nuclear detonation are not adequately constrained. Current capabilities for assessing post-detonation scenarios often rely on empirical observations and approximations. Deeper insight into chemical condensation requires a coupled experimental, theoretical, and modeling approach. The behavior of uranium during plasma condensation is perplexing. Two independent methods are being developed to investigate gas phase uranium chemistry and speciation during plasma condensation: (1) laser-induced breakdown spectroscopy and (2) a unique steady-state ICP flow reactor. Both methods use laser absorption spectroscopy to obtain in situ data for vapor phase molecular species as they form. We are developing a kinetic model to describe the relative abundance of uranium species in the evolving plasma. Characterization of the uranium-oxygen system will be followed by other chemical components, including `carrier' materials such as silica. The goal is to develop a semi-empirical model to describe the chemical fractionation of uranium during fallout formation. Prepared by LLNL under Contract DE-AC52-07NA27344. This project was sponsored in part by the Department of the Defense, Defense Threat Reduction Agency, under Grant Number HDTRA1-16-1-0020.
Evolution and behavioural responses to human-induced rapid environmental change
Sih, Andrew; Ferrari, Maud C O; Harris, David J
2011-01-01
Almost all organisms live in environments that have been altered, to some degree, by human activities. Because behaviour mediates interactions between an individual and its environment, the ability of organisms to behave appropriately under these new conditions is crucial for determining their immediate success or failure in these modified environments. While hundreds of species are suffering dramatically from these environmental changes, others, such as urbanized and pest species, are doing better than ever. Our goal is to provide insights into explaining such variation. We first summarize the responses of some species to novel situations, including novel risks and resources, habitat loss/fragmentation, pollutants and climate change. Using a sensory ecology approach, we present a mechanistic framework for predicting variation in behavioural responses to environmental change, drawing from models of decision-making processes and an understanding of the selective background against which they evolved. Where immediate behavioural responses are inadequate, learning or evolutionary adaptation may prove useful, although these mechanisms are also constrained by evolutionary history. Although predicting the responses of species to environmental change is difficult, we highlight the need for a better understanding of the role of evolutionary history in shaping individuals’ responses to their environment and provide suggestions for future work. PMID:25567979
Strengthening Faculty Recruitment for Health Professions Training in Basic Sciences in Zambia
Simuyemba, Moses; Talib, Zohray; Michelo, Charles; Mutale, Wilbroad; Zulu, Joseph; Andrews, Ben; Katubulushi, Max; Njelesani, Evariste; Bowa, Kasonde; Maimbolwa, Margaret; Mudenda, John; Mulla, Yakub
2014-01-01
Zambia is facing a crisis in its human resources for health (HRH), with deficits in the number and skill mix of health workers. The University of Zambia School of Medicine (UNZA SOM) was the only medical school in the country for decades, but recently it was joined by three new medical schools—two private and one public. In addition to expanding medical education, the government has also approved several allied health programs, including pharmacy, physiotherapy, biomedical sciences, and environmental health. This expansion has been constrained by insufficient numbers of faculty. Through a grant from the Medical Education Partnership Initiative (MEPI), UNZA SOM has been investing in ways to address faculty recruitment, training, and retention. The MEPI-funded strategy involves directly sponsoring a cohort of faculty at UNZA SOM during the five-year grant, as well as establishing more than a dozen new master’s programs, with the goal that all sponsored faculty are locally trained and retained. Because the issue of limited basic science faculty plagues medical schools throughout Sub-Saharan Africa, this strategy of using seed funding to build sustainable local capacity to recruit, train, and retain faculty could be a model for the region. PMID:25072591
Management of Service Projects in Support of Space Flight Research
NASA Technical Reports Server (NTRS)
Love, J.
2009-01-01
Goal: To provide human health and performance countermeasures, knowledge, technologies, and tools to enable safe, reliable, and productive human space exploration. [HRP-47051] Specific Objectives: 1) Develop capabilities, necessary countermeasures, and technologies in support of human space exploration, focusing on mitigating the highest risks to human health and performance. 2) Define and improve human spaceflight medical, environmental, and human factors standards. 3) Develop technologies that serve to reduce medical and environmental risks, to reduce human systems resource requirements (mass, volume, power, data, etc.), and to ensure effective human-system integration across exploration systems. 4) Ensure maintenance of Agency core competencies necessary to enable risk reduction in the following areas: A. Space medicine B. Physiological and behavioral effects of long duration spaceflight on the human body C. Space environmental effects, including radiation, on human health and performance D. Space "human factors" [HRP-47051]. Service projects can form integral parts of research-based, project-focused programs to provide specialized functions. Traditional/classic project management methodologies and agile approaches are not mutually exclusive paradigms. Agile strategies can be combined with traditional methods and applied in the management of service projects functioning in changing environments. Creative collaborations afford a mechanism for mitigating the limitations of constrained resources.
Impact of heavy sterile neutrinos on the triple Higgs coupling
NASA Astrophysics Data System (ADS)
Baglio, J.; Weiland, C.
2017-07-01
New physics beyond the Standard Model is required to give mass to the light neutrinos. One of the simplest ideas is to introduce new heavy, gauge singlet fermions that play the role of right-handed neutrinos in a seesaw mechanism. They could have large Yukawa couplings to the Higgs boson, affecting the Higgs couplings and in particular the triple Higgs coupling $\lambda_{HHH}$, the measurement of which is one of the major goals of the LHC and of future colliders. We present a study of the impact of these heavy neutrinos on $\lambda_{HHH}$ at the one-loop level, first in a simplified 3+1 model with one heavy Dirac neutrino and then in the inverse seesaw model. Taking into account all possible experimental constraints, we find that sizeable deviations of the order of 35% are possible, large enough to be detected at future colliders, making the triple Higgs coupling a new, viable observable to constrain neutrino mass models. The effects are generic and are expected in any new physics model including TeV-scale fermions with large Yukawa couplings to the Higgs boson, such as those using the neutrino portal.
Strengthening faculty recruitment for health professions training in basic sciences in Zambia.
Simuyemba, Moses; Talib, Zohray; Michelo, Charles; Mutale, Wilbroad; Zulu, Joseph; Andrews, Ben; Nzala, Selestine; Katubulushi, Max; Njelesani, Evariste; Bowa, Kasonde; Maimbolwa, Margaret; Mudenda, John; Mulla, Yakub
2014-08-01
Zambia is facing a crisis in its human resources for health, with deficits in the number and skill mix of health workers. The University of Zambia School of Medicine (UNZA SOM) was the only medical school in the country for decades, but recently it was joined by three new medical schools--two private and one public. In addition to expanding medical education, the government has also approved several allied health programs, including pharmacy, physiotherapy, biomedical sciences, and environmental health. This expansion has been constrained by insufficient numbers of faculty. Through a grant from the Medical Education Partnership Initiative (MEPI), UNZA SOM has been investing in ways to address faculty recruitment, training, and retention. The MEPI-funded strategy involves directly sponsoring a cohort of faculty at UNZA SOM during the five-year grant, as well as establishing more than a dozen new master's programs, with the goal that all sponsored faculty are locally trained and retained. Because the issue of limited basic science faculty plagues medical schools throughout Sub-Saharan Africa, this strategy of using seed funding to build sustainable local capacity to recruit, train, and retain faculty could be a model for the region.
NASA Astrophysics Data System (ADS)
Paterson, Matthew
2006-11-01
The car, and the range of social and political institutions which sustain its dominance, play an important role in many of the environmental problems faced by contemporary society. But in order to understand the possibilities for moving towards sustainability and 'greening cars', it is first necessary to understand the political forces that have made cars so dominant. This book identifies these forces as a combination of political economy and cultural politics. From the early twentieth century, the car became central to the organization of capitalism and deeply embedded in individual identities, providing people with a source of value and meaning but in a way which was broadly consistent with social imperatives for mobility. Projects for sustainability to reduce the environmental impacts of cars are therefore constrained by these forces but must deal with them in order to shape and achieve their goals. Addresses the increasingly controversial debate on the place of the car in contemporary society and its contribution to environmental problems. Questions whether automobility is sustainable and what political, social and economic forces might prevent this. Will appeal to scholars and advanced students from a wide range of disciplines including environmental politics, political economy, environmental studies, cultural studies and geography.
Causal mapping of emotion networks in the human brain: Framework and initial findings.
Dubois, Julien; Oya, Hiroyuki; Tyszka, J Michael; Howard, Matthew; Eberhardt, Frederick; Adolphs, Ralph
2017-11-13
Emotions involve many cortical and subcortical regions, prominently including the amygdala. It remains unknown how these multiple network components interact, and it remains unknown how they cause the behavioral, autonomic, and experiential effects of emotions. Here we describe a framework for combining a novel technique, concurrent electrical stimulation with fMRI (es-fMRI), together with a novel analysis, inferring causal structure from fMRI data (causal discovery). We outline a research program for investigating human emotion with these new tools, and provide initial findings from two large resting-state datasets as well as case studies in neurosurgical patients with electrical stimulation of the amygdala. The overarching goal is to use causal discovery methods on fMRI data to infer causal graphical models of how brain regions interact, and then to further constrain these models with direct stimulation of specific brain regions and concurrent fMRI. We conclude by discussing limitations and future extensions. The approach could yield anatomical hypotheses about brain connectivity, motivate rational strategies for treating mood disorders with deep brain stimulation, and could be extended to animal studies that use combined optogenetic fMRI. Copyright © 2017 Elsevier Ltd. All rights reserved.
The CuSPED Mission: CubeSat for GNSS Sounding of the Ionosphere-Plasmasphere Electron Density
NASA Technical Reports Server (NTRS)
Gross, Jason N.; Keesee, Amy M.; Christian, John A.; Gu, Yu; Scime, Earl; Komjathy, Attila; Lightsey, E. Glenn; Pollock, Craig J.
2016-01-01
The CubeSat for GNSS Sounding of Ionosphere-Plasmasphere Electron Density (CuSPED) is a 3U CubeSat mission concept that has been developed in response to the NASA Heliophysics program's decadal science goal of determining the dynamics and coupling of the Earth's magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs. The mission was formulated through a collaboration between West Virginia University, Georgia Tech, NASA GSFC and NASA JPL, and features a 3U CubeSat that hosts both a miniaturized space-capable Global Navigation Satellite System (GNSS) receiver for topside atmospheric sounding and a Thermal Electron Capped Hemispherical Spectrometer (TECHS) for the purpose of in situ electron precipitation measurements. These two complementary measurement techniques will provide data for the purpose of constraining ionosphere-magnetosphere coupling models and will also enable studies of the local plasma environment and spacecraft charging; a phenomenon which is known to lead to significant errors in the measurement of low-energy, charged species from instruments aboard spacecraft traversing the ionosphere. This paper will provide an overview of the concept including its science motivation and implementation.
Effects on Training Using Illumination in Virtual Environments
NASA Technical Reports Server (NTRS)
Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian
1999-01-01
Camera-based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high-contrast shadowing and glare, are a factor in performance. Computer-based training using virtual environments is a common tool used to make and keep crew members proficient. If computer-based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer-based training increases proficiency if one trains for a camera-based task using computer-generated virtual environments with enhanced lighting conditions such as shadows and glare rather than the color-shaded computer images normally used in simulators. Previous experiments were conducted using a two-degree-of-freedom docking system. Test subjects had to align a boresight camera using a hand controller with one axis of translation and one axis of rotation. Two sets of subjects were trained on two computer simulations using computer-generated virtual environments, one with lighting and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.
NASA Astrophysics Data System (ADS)
Nanni, Ambra; Marigo, Paola; Groenewegen, Martin A. T.; Aringer, Bernhard; Girardi, Léo; Pastorelli, Giada; Bressan, Alessandro; Bladh, Sara
2016-10-01
We present a new approach aimed at constraining the typical size and optical properties of carbon dust grains in circumstellar envelopes (CSEs) of carbon-rich stars (C-stars) in the Small Magellanic Cloud (SMC). To achieve this goal, we apply our recent dust growth description, coupled with a radiative transfer code, to the CSEs of C-stars evolving along the thermally pulsing asymptotic giant branch, for which we compute spectra and colours. Then, we compare our modelled colours in the near- and mid-infrared (NIR and MIR) bands with the observed ones, testing different assumptions in our dust scheme and employing several data sets of optical constants for carbon dust available in the literature. Different assumptions adopted in our dust scheme change the typical size of the carbon grains produced. We constrain carbon dust properties by selecting the combination of grain size and optical constants which best reproduces several colours in the NIR and MIR at the same time. The different choices of optical properties and grain size lead to differences in the NIR and MIR colours greater than 2 mag in some cases. We conclude that the complete set of observed NIR and MIR colours is best reproduced by small grains, with sizes between ~0.035 and ~0.12 μm, rather than by large grains between ~0.2 and 0.7 μm. The inability of large grains to reproduce NIR and MIR colours seems independent of the adopted optical data set. We also find a possible trend of the grain size with mass-loss and/or carbon excess in the CSEs of these stars.
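The selection step described in the abstract above, picking the grain-size and optical-constants combination that best reproduces a set of observed colours, amounts to a grid search minimizing colour residuals. A minimal sketch; the optical-constant labels, grain sizes, and colour values below are invented stand-ins, not values from the study:

```python
import numpy as np

# Hypothetical grid of model colours: one entry per (optical data set, grain
# size in micron) combination.  All numbers are illustrative stand-ins.
observed = np.array([1.20, 0.85, 2.10])  # e.g. three NIR/MIR colours

models = {
    ("OpticalSetA", 0.06): np.array([1.15, 0.90, 2.05]),
    ("OpticalSetA", 0.35): np.array([0.60, 1.40, 3.20]),
    ("OpticalSetB", 0.10): np.array([1.22, 0.83, 2.12]),
    ("OpticalSetB", 0.50): np.array([0.70, 1.60, 3.40]),
}

def best_combination(observed, models):
    """Return the (optical set, grain size) whose modelled colours have the
    smallest summed squared residual against the observed colours."""
    return min(models, key=lambda k: float(np.sum((models[k] - observed) ** 2)))

print(best_combination(observed, models))
```

With these toy numbers the small-grain models win, mirroring the abstract's conclusion that small grains reproduce the full colour set better than large ones.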
Development of combined low-emissions burner devices for low-power boilers
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Proskurin, Yu. V.; Khokhlov, D. A.
2017-08-01
Low-power water boilers are widely used for autonomous heat supply in various industries. Firetube and water-tube boilers of domestic and foreign manufacturers are widely represented on the Russian market. However, even Russian boilers are supplied with licensed foreign burner devices, which reduces their competitiveness and complicates operating conditions. Developing efficient domestic low-emissions burner devices for low-power boilers is therefore an urgent task. A characteristic property of ignition and fuel combustion in such boilers is that they proceed under constrained conditions due to the small dimensions of combustion chambers and flame tubes. These processes differ significantly from those in open combustion chambers of high-duty power boilers, and they have not been sufficiently studied yet. The goals of this paper are to study the processes of ignition and combustion of gaseous and liquid fuels, heat and mass transfer, and NOx emissions in constrained conditions, and to develop a modern combined low-emissions 2.2 MW burner device that provides efficient fuel combustion. A burner device computer model is developed, and numerical studies of its operation on different types of fuel in a working load range from 40 to 100% of the nominal are carried out. The main features of ignition and combustion of gaseous and liquid fuels in the constrained conditions of the flame tube at nominal and decreased loads are determined, which differ fundamentally from the similar processes in steam boiler furnaces. The influence of the burner device design and operating conditions on fuel underburning and NOx formation is determined. Based on the results of the design studies, a design of the new combined low-emissions burner device is proposed, which has several advantages over the prototype.
Commercialization and internationalization of the next-generation launch system
NASA Astrophysics Data System (ADS)
Bille, Matthew A.; Richie, George E.; Bille, Deborah A.
1996-03-01
The United States, ESA, Russia, and Japan are all pursuing the goal of a next-generation launch system. However, economic constraints may ground these programs, as they did hypersonic spaceplane efforts. In today's constrained fiscal environment, engineering is secondary unless the most practical economic and political approach is also found. While international efforts face national concerns over jobs and competitiveness, low-cost access to orbit will open up space to whole new industries. In the long run, all involved nations will gain economically if a next-generation launcher is built, and all will lose if individual efforts fail. An international consortium is most likely to amass the resources needed. The consortium would not be dedicated to any single technical concept, but would select from industry proposals to design and build the technology demonstrator. The goal is to get one working system built: after that, it is not critical whether we have one cooperative operational system or a dozen competing ones. What is critical is not to miss another chance to launch the era of space commercialization.
Airborne microwave radar measurements of surface velocity in a tidally-driven inlet
NASA Astrophysics Data System (ADS)
Farquharson, G.; Thomson, J. M.
2012-12-01
A miniaturized dual-beam along-track interferometric (ATI) synthetic aperture radar (SAR), capable of measuring two components of surface velocity at high resolution, was operated during the 2012 Rivers and Inlets Experiment (RIVET) at the New River Inlet in North Carolina. The inlet is predominantly tidally-driven, with little upstream river discharge. Surface velocities in the inlet and nearshore region were measured during ebb and flood tides during a variety of wind and offshore wave conditions. The radar-derived surface velocities range up to around ±2 m/s during times of maximum flow. We compare these radar-derived surface velocities with surface velocities measured with drifters. The accuracy of the radar-derived velocities is investigated, especially in areas of large velocity gradients where along-track interferometric SAR can show significant differences with surface velocity. The goal of this research is to characterize errors in along-track interferometric SAR velocity so that ATI SAR measurements can be coupled with data-assimilative modeling, ultimately developing the capability to adequately constrain nearshore models using remote sensing measurements.
Background Model for the Majorana Demonstrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuesta, C.; Abgrall, N.; Aguayo, Estanislao
2015-06-01
The Majorana Collaboration is constructing a prototype system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment to search for neutrinoless double-beta (0νββ) decay in 76Ge. In view of the requirement that the next generation of tonne-scale Ge-based 0νββ-decay experiments be capable of probing the neutrino mass scale in the inverted-hierarchy region, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse-shape analysis techniques profiting from the p-type point-contact HPGe detector technology. The effectiveness of these methods is assessed using Geant4 simulations of the different background components, whose purity levels are constrained from radioassay measurements.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
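The abstract states that under the false-alarm-rate constraint the optimal attack reduces to a linear feedback of the attacker's state estimate. A generic linear-quadratic feedback gain of this kind can be computed by backward Riccati recursion; the system matrices and weights below are hypothetical, and this sketch omits the paper's detection-bias constraints:

```python
import numpy as np

# Illustrative discrete-time linear-quadratic feedback gain via backward
# Riccati recursion.  All matrices are hypothetical stand-ins; the paper's
# actual attack design layers detection-avoidance constraints on top.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state dynamics
B = np.array([[0.0], [0.1]])             # attack input channel
Q = np.eye(2)                            # state (goal-tracking) cost
R = np.array([[1.0]])                    # input cost

def lqr_gain(A, B, Q, R, horizon=200):
    """Iterate the Riccati difference equation backward; returns the
    (approximately steady-state) gain K for the feedback law u = -K @ x_hat."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A, B, Q, R)
print(K.shape)  # (1, 2)
```

The resulting closed-loop matrix A - B @ K is stable, so the feedback steers the state toward the target while the quadratic cost keeps inputs moderate.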
Autonomous and Autonomic Systems: A Paradigm for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Truszkowski, Walter F.; Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
NASA increasingly will rely on autonomous systems concepts, not only in the mission control centers on the ground, but also on spacecraft and on rovers and other assets on extraterrestrial bodies. Autonomy enables not only reduced operations costs, but also adaptable goal-driven functionality of mission systems. Space missions lacking autonomy will be unable to achieve the full range of advanced mission objectives, given that human control under dynamic environmental conditions will not be feasible due, in part, to the unavoidably high signal propagation latency and constrained data rates of mission communications links. While autonomy cost-effectively supports accomplishment of mission goals, autonomicity supports survivability of remote mission assets, especially when human tending is not feasible. Autonomic system properties (which ensure self-configuring, self-optimizing, self-healing, and self-protecting behavior) conceptually may enable space missions of a higher order than any previously flown. Analysis of two NASA agent-based systems previously prototyped, and of a proposed future mission involving numerous cooperating spacecraft, illustrates how autonomous and autonomic system concepts may be brought to bear on future space missions.
Development and demonstration of an on-board mission planner for helicopters
NASA Technical Reports Server (NTRS)
Deutsch, Owen L.; Desai, Mukund
1988-01-01
Mission management tasks can be distributed within a planning hierarchy, where each level of the hierarchy addresses a scope of action, an associated time scale or planning horizon, and requirements for plan generation response time. The current work is focused on the far-field planning subproblem, with a scope and planning horizon encompassing the entire mission and with a response time required to be about two minutes. The far-field planning problem is posed as a constrained optimization problem, and algorithms and structural organizations are proposed for the solution. Algorithms are implemented in a developmental environment, and performance is assessed with respect to optimality and feasibility for the intended application and in comparison with alternative algorithms. This is done for the three major components of far-field planning: goal planning, waypoint path planning, and timeline management. It appears feasible to meet performance requirements on a 10 Mips flyable processor (dedicated to far-field planning) using a heuristically-guided simulated annealing technique for the goal planner, a modified A* search for the waypoint path planner, and a speed scheduling technique developed for this project.
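The abstract names a modified A* search as the waypoint path planner. As a rough illustration of the underlying algorithm only, here is plain A* on a toy occupancy grid (the real planner works over mission terrain and constraints, not this grid):

```python
import heapq

# Minimal A* search on a 2-D occupancy grid: 0 = free cell, 1 = blocked.
# A simplified stand-in for the modified A* waypoint path planner above.
def astar(grid, start, goal):
    """Return a shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

Because the Manhattan heuristic never overestimates the remaining distance, the first time the goal is popped the path is optimal; the "modified" planner in the paper presumably alters costs and heuristics for flight-specific constraints.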
NASA Astrophysics Data System (ADS)
Wong, T. E.; Noone, D. C.; Kleiber, W.
2014-12-01
The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges which arise in calibrating models with a large number of parameters.
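The Bayesian calibration step described above, Markov Chain Monte Carlo sampling of a posterior over model parameters, can be sketched with a toy forward model standing in for iCLM4. All numbers, the linear forward model, and the flat prior are illustrative assumptions:

```python
import numpy as np

# Schematic Metropolis sampler calibrating one parameter of a toy forward
# model y = theta * x against noisy observations.  Everything here is an
# invented stand-in for the land-surface-model calibration in the abstract.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
theta_true, sigma = 2.0, 0.1
y_obs = theta_true * x + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    # Flat prior; Gaussian likelihood with known observation error sigma.
    resid = y_obs - theta * x
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(n_samples=5000, step=0.1, theta0=0.0):
    """Random-walk Metropolis: propose, then accept with prob min(1, ratio)."""
    samples, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_samples):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

post = metropolis()
print(post[1000:].mean())  # posterior mean, should sit near theta_true = 2.0
```

Discarding the first 1000 draws as burn-in, the sample mean and spread estimate the posterior over the calibrated parameter; the real application replaces `log_post` with a run of the land surface model.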
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-05-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating naturally-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the statically stable constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
DuBois, Neil J.; Amaral, Antonio M.
1992-10-27
A damped flexible seal assembly for a torpedo isolates the tailcone thereof from vibrational energy present in the drive shaft assembly. A pair of outside flanges, each of which includes an inwardly facing groove and an O-ring constrained therein, provides a watertight seal against the outer non-rotating surface of the drive shaft assembly. An inside flange includes an outwardly-facing groove and an O-ring constrained therein, and provides a watertight seal against the inner surface of the tailcone. Two cast-in-place elastomeric seals provide a watertight seal between the flanges and further provide a damping barrier between the outside flanges and the inside flanges for damping vibrational energy present in the drive shaft assembly before the energy can reach the tailcone through the seal assembly.
X-ray scattering data and structural genomics
NASA Astrophysics Data System (ADS)
Doniach, Sebastian
2003-03-01
High throughput structural genomics has the ambitious goal of determining the structure of all, or a very large number of, protein folds using the high-resolution techniques of protein crystallography and NMR. However, the program is facing significant bottlenecks in reaching this goal, which include problems of protein expression and crystallization. In this talk, some preliminary results on how the low-resolution technique of small-angle X-ray solution scattering (SAXS) can help ameliorate some of these bottlenecks will be presented. One of the most significant bottlenecks arises from the difficulty of crystallizing integral membrane proteins, where only a handful of structures are available compared to thousands of structures for soluble proteins. By 3-dimensional reconstruction from SAXS data, the size and shape of detergent-solubilized integral membrane proteins can be characterized. This information can then be used to classify membrane proteins, which constitute some 25% of all genomes. SAXS may also be used to study the dependence of interparticle interference scattering on solvent conditions so that regions of the protein solution phase diagram which favor crystallization can be elucidated. As a further application, SAXS may be used to provide physical constraints on computational methods for protein structure prediction based on primary sequence information. This in turn can help in identifying structural homologs of a given protein, which can then give clues to its function.
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
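The optimal feature/component selection strategy in the abstract above can be illustrated schematically: score candidate feature combinations by how well a simple classifier separates the two groups, and keep the best-scoring combination. The synthetic data, nearest-centroid classifier, and leave-one-out scoring below are toy stand-ins for the paper's actual CC-ICA/PCA-R pipeline:

```python
import numpy as np
from itertools import combinations

# Toy group-discrimination sketch with invented data: two groups of 20
# "subjects" with 4 features each; features 0 and 2 carry the group difference.
rng = np.random.default_rng(1)
n, n_features = 20, 4
controls = rng.normal(0.0, 1.0, (n, n_features))
patients = rng.normal(0.0, 1.0, (n, n_features))
patients[:, [0, 2]] += 1.5  # inject a group difference into features 0 and 2

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

# Exhaustive search over feature pairs, keeping the most group-discriminative.
best = max(combinations(range(n_features), 2),
           key=lambda f: loo_accuracy(X[:, list(f)], y))
print(best)  # the pair carrying the injected group difference should win
```

The same pattern, evaluate each feature combination with a held-out classification score and rank them, underlies the paper's comparison of 15 single and joint fMRI feature combinations, with far richer features and classifiers.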
Reheating and the asymmetric production of matter
NASA Astrophysics Data System (ADS)
Adshead, Peter
The early thermal history of the universe, from the end of inflation until the light elements are produced at big-bang nucleosynthesis, remains one of the most poorly understood periods of our cosmic history. We do not understand how inflation ends, and the connection between the physics that drives inflation and the standard model is poorly constrained. Consequently, the mechanism by which the Universe is reheated from its super-cooled post-inflationary state into a thermalized plasma is unknown. Furthermore, the precise mechanism responsible for the matter-antimatter asymmetry and the detailed particle origin of dark matter are, as yet, unknown. However, it is precisely during this epoch that abundant phenomenology from fundamental physics beyond the standard model is anticipated. The objective of the proposed research is to address this gap in our understanding of the history of the Universe by exploring the connection between the physics that drives the inflationary epoch, and the physics that ignites the hot big-bang. This will be achieved by two detailed studies of the physics of reheating. The first study examines the cosmic history of dark sectors, and addresses the cosmological question of how these sectors are populated in the early universe. The second study examines detailed particle physics models of reheating where the inflaton couples to gauge fields. NASA's strategic objectives in astrophysics are to discover how the universe works and to explore how it began and evolved. The primary goal of this proposal is to address these questions by developing a deeper understanding of the history of the post-inflationary universe through cosmological observations and fundamental theory. Specifically, this proposal will advance NASA's science goal to probe the origin and destiny of our universe, including the nature of black holes, dark energy, dark matter and gravity
Ruge, Hannes; Wolfensteller, Uta
2015-06-01
Higher species commonly learn novel behaviors by evaluating retrospectively whether actions have yielded desirable outcomes. By relying on explicit behavioral instructions, only humans can use an acquisition shortcut that prospectively specifies how to yield intended outcomes under the appropriate stimulus conditions. A recent and largely unexplored hypothesis suggests that striatal areas interact with lateral prefrontal cortex (LPFC) when novel behaviors are learned via explicit instruction, and that regional subspecialization exists for the integration of differential response-outcome contingencies into the current task model. Behaviorally, outcome integration during instruction-based learning has been linked to functionally distinct performance indices. This includes (1) compatibility effects, measured in a postlearning test procedure probing the encoding strength of outcome-response (O-R) associations, and (2) increasing response slowing across learning, putatively indicating active usage of O-R associations for the online control of goal-directed action. In the present fMRI study, we examined correlations between these behavioral indices and the dynamics of fronto-striatal couplings in order to mutually constrain and refine the interpretation of neural and behavioral measures in terms of separable subprocesses during outcome integration. We found that O-R encoding strength correlated with LPFC-putamen coupling, suggesting that the putamen is relevant for the formation of both S-R habits and habit-like O-R associations. By contrast, response slowing as a putative index of active usage of O-R associations correlated with LPFC-caudate coupling. This finding highlights the relevance of the caudate for the online control of goal-directed action also under instruction-based learning conditions, and in turn clarifies the functional relevance of the behavioral slowing effect.
Charging of the Van Allen Probes: Theory and Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.
2017-12-01
The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. 
This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately predict spacecraft charging.
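The orbital-motion-limited current balance at the heart of the theoretical model can be sketched numerically. The following is a minimal illustration in that spirit, not the paper's model or CPIC: it balances Maxwellian electron collection against OML ion collection on a negatively charged sphere, with hydrogen ions; all function names and parameter values are our own assumptions, not VAP data.

```python
import math

E = 1.602176634e-19    # elementary charge [C]
ME = 9.1093837015e-31  # electron mass [kg]
MP = 1.67262192e-27    # proton mass [kg]

def thermal_current(n, t_ev, mass, area):
    """One-sided thermal flux current to a surface: I0 = e*n*A*sqrt(e*T / (2*pi*m))."""
    return E * n * area * math.sqrt(E * t_ev / (2.0 * math.pi * mass))

def net_current(phi, n, te_ev, ti_ev, area):
    """Net current to a negatively charged sphere (phi in volts, phi <= 0):
    electrons are repelled (Boltzmann factor), ions are attracted (OML focusing)."""
    i_e = thermal_current(n, te_ev, ME, area) * math.exp(phi / te_ev)
    i_i = thermal_current(n, ti_ev, MP, area) * (1.0 - phi / ti_ev)
    return i_i - i_e

def floating_potential(n=1.0e6, te_ev=1000.0, ti_ev=1000.0, area=1.0):
    """Bisect for the potential where collected ion and electron currents balance.
    Defaults are illustrative hot-plasma values, not measured VAP parameters."""
    lo, hi = -50.0 * te_ev, 0.0  # f(lo) > 0 (ions dominate), f(hi) < 0 (electrons dominate)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if net_current(mid, n, te_ev, ti_ev, area) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For equal electron and ion temperatures this balance settles near phi ~ -2.5 Te, the familiar order of magnitude for a body floating in a hot plasma.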
Planning and Control for Microassembly of Structures Composed of Stress-Engineered MEMS Microrobots
Donald, Bruce R.; Levey, Christopher G.; Paprotny, Igor; Rus, Daniela
2013-01-01
We present control strategies that implement planar microassembly using groups of stress-engineered MEMS microrobots (MicroStressBots) controlled through a single global control signal. The global control signal couples the motion of the devices, causing the system to be highly underactuated. In order for the robots to assemble into arbitrary planar shapes despite the high degree of underactuation, it is desirable that each robot be independently maneuverable (independently controllable). To achieve independent control, we fabricated robots that behave (move) differently from one another in response to the same global control signal. We harnessed this differentiation to develop assembly control strategies, where the assembly goal is a desired geometric shape that can be obtained by connecting the chassis of individual robots. We derived and experimentally tested assembly plans that command some of the robots to make progress toward the goal, while other robots are constrained to remain in small circular trajectories (closed-loop orbits) until it is their turn to move into the goal shape. Our control strategies were tested on systems of fabricated MicroStressBots. The robots are 240–280 μm × 60 μm × 7–20 μm in size and move simultaneously within a single operating environment. We demonstrated the feasibility of our control scheme by accurately assembling five different types of planar microstructures. PMID:23580796
Jupiter Systems Data Analysis Program Galileo Multi-Spectral Analysis of the Galilean Satellites
NASA Technical Reports Server (NTRS)
Hendrix, Amanda; Carlson, Robert; Smythe, William
2002-01-01
Progress was made on this project at the University of Colorado, particularly concerning analysis of data of the Galilean moons Io and Europa. The goal of the Io portion of this study is to incorporate Near Infrared Mapping Spectrometer (NIMS) measured sulfur dioxide (SO2) frost amounts into models used with Ultraviolet Spectrometer (UVS) spectra, in order to better constrain SO2 gas amounts determined by the UVS. The overall goal of this portion of the study is to better understand the thickness and distribution of Io's SO2 atmosphere. The goal of the analysis of the Europa data is to better understand the source of the UV absorption feature centered near 280 nm, which has been noted in disk-integrated spectra primarily on the trailing hemisphere. The NIMS data indicate asymmetric water ice bands on Europa, particularly over the trailing hemisphere, and especially concentrated in the visibly dark regions associated with chaotic terrain and lineae. The UVS data, the first-ever disk-resolved UV spectra of Europa, show that the UV absorber is likely concentrated in regions where the NIMS data show asymmetric water ice bands. The material that produces both spectral features is likely the same, and we use data from both wavelength regions to better understand this material and whether it is endogenically or exogenically produced. This work is still in progress at JPL.
Terrestrial Sagnac delay constraining modified gravity models
NASA Astrophysics Data System (ADS)
Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.
2018-04-01
Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of the accretion disk around constant Ricci curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat-space value is corrected by the Ricci curvature, the mass, and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak-field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong-field accretion disk phenomenon.
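For scale, the flat-space value that the curvature, mass, and spin terms correct is the classical Sagnac delay. For a circular equatorial path co-rotating with Earth (the numbers below are ours, for orientation only, not from the paper):

```latex
\Delta t_{\text{flat}} = \frac{4A\Omega}{c^{2}} = \frac{4\pi R^{2}\,\Omega_{\oplus}}{c^{2}}
\approx \frac{4\pi\,(6.37\times 10^{6}\,\text{m})^{2}\,(7.29\times 10^{-5}\,\text{s}^{-1})}{(2.998\times 10^{8}\,\text{m\,s}^{-1})^{2}}
\approx 4.1\times 10^{-7}\,\text{s}.
```

The general relativistic corrections enter as small fractional shifts of this ~0.4 microsecond delay, which is why residual uncertainties in the delay measurement translate into allowed intervals for the Ricci curvature.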
Peering to the Heart of Massive Star Birth - II. A Survey of 8 Protostars
NASA Astrophysics Data System (ADS)
Tan, Jonathan
2012-10-01
We propose to follow up our SOFIA FORCAST Basic Science observation of G35.20-0.74 with similar observations of seven other massive protostars, with a total time request of about 5 hours. Our goal is to use mid-infrared (MIR) and far-infrared (FIR) imaging, especially at wavelengths of 31 and 37 microns that are unique to SOFIA, to constrain detailed radiative transfer models of massive star formation. In particular, we show that if massive stars are forming from high mass surface density cores, then the observed MIR and FIR morphologies are strongly influenced by the presence of protostellar outflow cavities. For typical surface densities of ~1 g cm^-2, the observed radiation at wavelengths less than about 30 microns escapes preferentially along the near-facing outflow cavity. At longer wavelengths we begin to see emission from the far-facing cavity, and thus the proposed SOFIA FORCAST observations are particularly powerful for constraining the properties of the star-forming core, such as the mass surface density in the immediate vicinity of the protostar. Our full analysis will involve comparing these SOFIA FORCAST data with images at other wavelengths, including Spitzer IRAC (3 to 8 microns), ground-based (10 & 20 microns), and Herschel (70 microns), to derive flux profiles and spectral energy distributions as a function of projected distance along the outflow axis. These observations have the potential to: (1) test basic scenarios of massive star formation; (2) begin to provide detailed measurements such as the mass surface density structure of massive star-forming cores and the line-of-sight orientation, opening angle, degree of symmetry, and dust content of their outflow cavities. With a sample of eight protostars in total, we will begin to be able to search for trends in these properties with core mass surface density and protostellar luminosity.
NASA Astrophysics Data System (ADS)
Darmenova, Kremena; Sokolik, Irina N.; Darmenov, Anton
2005-01-01
This study presents a detailed examination of east Asian dust events during March-April of 2001, combining satellite multisensor observations (Total Ozone Mapping Spectrometer (TOMS), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Sea-Viewing Wide Field-of-View Sensor (SeaWiFS)), meteorological data from weather stations in China and Mongolia, and the Pennsylvania State University/National Center for Atmospheric Research Mesoscale Modeling System (MM5) driven by the National Centers for Environmental Prediction Reanalysis data. The main goal is to determine the extent to which the routine surface meteorological observations (including visibility) and satellite data can be used to characterize the spatiotemporal distribution of dust plumes at a range of scales. We also examine the potential of meteorological time series for constraining the dust emission schemes used in aerosol transport models. Thirty-five dust events were identified in the source region during March and April of 2001 and characterized on a case-by-case basis. The midrange transport routes were reconstructed on the basis of visibility observations and observed and MM5-predicted winds, with further validation against satellite data. We demonstrate that the combination of visibility data, TOMS aerosol index, MODIS aerosol optical depth over the land, and a qualitative analysis of MODIS and SeaWiFS imagery enables us to constrain the regions of origin of dust outbreaks and midrange transport, though various limitations of individual data sets were revealed in detecting dust over the land. Only two long-range transport episodes were found. The transport routes and coverage of these dust episodes were reconstructed by using MODIS aerosol optical depth and TOMS aerosol index.
Our analysis reveals that over the oceans the presence of persistent clouds poses a main problem in identifying the regions affected by dust transport, so only partial reconstruction of dust transport routes reaching the west coast of the United States was possible.
The AMBRE project: Constraining the lithium evolution in the Milky Way
NASA Astrophysics Data System (ADS)
Guiglion, G.; de Laverny, P.; Recio-Blanco, A.; Worley, C. C.; De Pascale, M.; Masseron, T.; Prantzos, N.; Mikolaitis, Š.
2016-10-01
Context. The chemical evolution of lithium in the Milky Way represents a major problem in modern astrophysics. Indeed, lithium is, on the one hand, easily destroyed in stellar interiors, and, on the other hand, produced at some specific stellar evolutionary stages that are still not well constrained. Aims: The goal of this paper is to investigate the lithium stellar content of Milky Way stars in order to put constraints on the lithium chemical enrichment in our Galaxy, in particular in both the thin and thick discs. Methods: Thanks to high-resolution spectra from the ESO archive and high quality atmospheric parameters, we were able to build a massive and homogeneous catalogue of lithium abundances for 7300 stars, derived with an automatic method coupling a synthetic spectra grid and a Gauss-Newton algorithm. We validated these lithium abundances against literature values, including those of the Gaia benchmark stars. Results: In terms of lithium galactic evolution, we show that the interstellar lithium abundance increases with metallicity by 1 dex from [M/H] = -1 dex to 0.0 dex. Moreover, we find that this lithium ISM abundance decreases by about 0.5 dex at super-solar metallicity. Based on a chemical separation, we also observed that the stellar lithium content in the thick disc increases rather slightly with metallicity, while the thin disc shows a steeper increase. The lithium abundance distribution of α-rich, metal-rich stars has a peak at ALi ~ 3 dex. Conclusions: We conclude that the thick disc stars underwent a low lithium chemical enrichment, showing lithium abundances rather close to the Spite plateau, while the thin disc stars clearly show an increasing lithium chemical enrichment with metallicity, probably thanks to the contribution of low-mass stars. Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A18
Extragalactic science, cosmology, and Galactic archaeology with the Subaru Prime Focus Spectrograph
NASA Astrophysics Data System (ADS)
Takada, Masahiro; Ellis, Richard S.; Chiba, Masashi; Greene, Jenny E.; Aihara, Hiroaki; Arimoto, Nobuo; Bundy, Kevin; Cohen, Judith; Doré, Olivier; Graves, Genevieve; Gunn, James E.; Heckman, Timothy; Hirata, Christopher M.; Ho, Paul; Kneib, Jean-Paul; Le Fèvre, Olivier; Lin, Lihwai; More, Surhud; Murayama, Hitoshi; Nagao, Tohru; Ouchi, Masami; Seiffert, Michael; Silverman, John D.; Sodré, Laerte; Spergel, David N.; Strauss, Michael A.; Sugai, Hajime; Suto, Yasushi; Takami, Hideki; Wyse, Rosemary
2014-02-01
The Subaru Prime Focus Spectrograph (PFS) is a massively multiplexed fiber-fed optical and near-infrared three-arm spectrograph (Nfiber = 2400, 380 ≤ λ ≤ 1260 nm, 1.3° diameter field of view). Here, we summarize the science cases in terms of provisional plans for a 300-night Subaru survey. We describe plans to constrain the nature of dark energy via a survey of emission line galaxies spanning a comoving volume of 9.3 h^-3 Gpc^3 in the redshift range 0.8 < z < 2.4. In each of six redshift bins, the cosmological distances will be measured to 3% precision via the baryonic acoustic oscillation scale, and redshift-space distortion measures will constrain structure growth to 6% precision. In the near-field cosmology program, radial velocities and chemical abundances of stars in the Milky Way and M 31 will be used to infer the past assembly histories of spiral galaxies and the structure of their dark matter halos. Data will be secured for 10^6 stars in the Galactic thick-disk, halo, and tidal streams as faint as V ~ 22, including stars with V < 20 to complement the goals of the Gaia mission. A medium-resolution mode with R = 5000 to be implemented in the red arm will allow the measurement of multiple α-element abundances and more precise velocities for Galactic stars. For the galaxy evolution program, our simulations suggest the wide wavelength range of PFS will be powerful in probing the galaxy population and its clustering over a wide redshift range. We plan to conduct a color-selected survey of 1 < z < 2 galaxies and AGN over 16 deg^2 to J ≃ 23.4, yielding a fair sample of galaxies with stellar masses above ~10^10 M⊙ at z ≃ 2. A two-tiered survey of higher redshift Lyman break galaxies and Lyman alpha emitters will quantify the properties of early systems close to the reionization epoch.
Deep CFHT Y-band Imaging of VVDS-F22 Field. I. Data Products and Photometric Redshifts
NASA Astrophysics Data System (ADS)
Liu, Dezi; Yang, Jinyi; Yuan, Shuo; Wu, Xue-Bing; Fan, Zuhui; Shan, Huanyuan; Yan, Haojing; Zheng, Xianzhong
2017-02-01
We present our deep Y-band imaging data of a 2 square degree field within the F22 region of the VIMOS VLT Deep Survey. The observations were conducted using the WIRCam instrument mounted at the Canada-France-Hawaii Telescope (CFHT). The total on-sky time was 9 hr, distributed uniformly over 18 tiles. The scientific goals of the project are to select faint quasar candidates at redshift z > 2.2 and constrain the photometric redshifts for quasars and galaxies. In this paper, we present the observation and the image reduction, as well as the photometric redshifts that we derived by combining our Y-band data with the CFHTLenS u*g'r'i'z' optical data and UKIDSS DXS JHK near-infrared data. With the J-band image as a reference, a total of ~80,000 galaxies are detected in the final mosaic down to a Y-band 5σ point-source limiting depth of 22.86 mag. Compared with the ~3500 spectroscopic redshifts, our photometric redshifts for galaxies with z < 1.5 and i' ≲ 24.0 mag have a small systematic offset of |Δz| ≲ 0.2, a 1σ scatter of 0.03 < σ_Δz < 0.06, and less than 4.0% catastrophic failures. We also compare with the CFHTLenS photometric redshifts and find that ours are more reliable at z ≳ 0.6 because of the inclusion of the near-infrared bands. In particular, including the Y-band data can improve the accuracy at z ~ 1.0-2.0 because the location of the 4000 Å break is better constrained. The Y-band images, the multiband photometry catalog, and the photometric redshifts are released at http://astro.pku.edu.cn/astro/data/DYI.html.
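The quoted quality statistics (systematic offset, 1σ scatter, catastrophic-failure fraction) can be computed in a few lines. The definitions below are the conventional ones (median bias, 1.48×MAD scatter, |Δz|/(1+z) above a fixed cut counted as catastrophic); they are our assumption, since the paper's exact estimators are not given here.

```python
from statistics import median

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Conventional photo-z quality statistics (definitions assumed, not
    quoted from the paper): median bias, 1.48*MAD scatter, and the
    catastrophic-outlier fraction, all on dz = (z_phot - z_spec)/(1 + z_spec)."""
    dz = [(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    bias = median(dz)
    scatter = 1.48 * median([abs(d - bias) for d in dz])
    outliers = sum(1 for d in dz if abs(d) > outlier_cut) / len(dz)
    return bias, scatter, outliers
```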
SIRGAS: ITRF densification in Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Brunini, C.; Costa, S.; Mackern, V.; Martínez, W.; Sánchez, L.; Seemüller, W.; da Silva, A.
2009-04-01
The continental reference frame of SIRGAS (Sistema de Referencia Geocéntrico para las Américas) is at present realized by the SIRGAS Continuously Operating Network (SIRGAS-CON), composed of about 200 stations distributed over Latin America and the Caribbean. SIRGAS member countries are qualifying their national reference frames by installing continuously operating GNSS stations, which have to be consistently integrated into the continental network. As the number of these stations is rapidly increasing, the processing strategy for the SIRGAS-CON network was redefined during the SIRGAS 2008 General Meeting in May 2008. The new strategy relies upon the definition of two hierarchy levels: a) A core network (SIRGAS-CON-C) with homogeneous continental coverage and stable site locations ensures the long-term stability of the reference frame and provides the primary link to the ITRS. Stations belonging to this network have been selected so that each country contributes a number of stations defined according to its surface area, guaranteeing that the selected stations are the best in operability, continuity, reliability, and geographical coverage. b) Several densification sub-networks (SIRGAS-CON-D) improve the accessibility of the reference frame. The SIRGAS-CON-D sub-networks shall correspond to the national reference frames, i.e., as an optimum, there shall be as many sub-networks as countries in the region. The goal is that each country processes its own continuously operating stations following the SIRGAS processing guidelines, which are defined in accordance with the IERS and IGS standards and conventions. Since at present not all of the countries operate a processing centre, the existing stations are classified into three densification networks (a northern, a middle, and a southern one), which are processed by three local processing centres until new ones are installed.
As SIRGAS is defined as a densification of the ITRS, stations included in the core network, as well as in the densification sub-networks, match the requirements, characteristics, and processing performance of the ITRF. The SIRGAS-CON-C network is processed by DGFI (Deutsches Geodätisches Forschungsinstitut, Germany) as the IGS-RNAAC-SIR. The Local Processing Centres are IGAC (Instituto Geográfico Agustín Codazzi, Colombia) for the northern sub-network, IBGE (Instituto Brasileiro de Geografia e Estatística, Brazil) for the middle sub-network, and IGG-CIMA (Instituto de Geodesia y Geodinámica, Universidad Nacional de Cuyo, Argentina) for the southern sub-network. These four Processing Centres deliver loosely constrained weekly solutions for station coordinates (i.e., satellite orbits, satellite clock offsets, and Earth orientation parameters are fixed to the final weekly IGS solutions, and coordinates for all sites are constrained to 1 m). The individual contributions are integrated into a unified solution by the SIRGAS Combination Centres (DGFI and IBGE) according to the following strategy: 1) Individual solutions are reviewed/corrected for possible format problems, data inconsistencies, etc. 2) Constraints imposed in the delivered normal equations are removed. 3) Sub-networks are individually aligned to the IGS05 reference frame by applying the No Net Rotation (NNR) and No Net Translation (NNT) conditions. 4) Coordinates obtained in (3) for each sub-network are compared to IGS05 values and to each other in order to identify possible outliers. 5) Stations with large residuals (more than 10 mm in the N-E component, and more than 20 mm in the Up component) are reduced from the normal equations. Steps (3), (4), and (5) are done iteratively. 6) Since at present the four Analysis Centres process GPS observations only, and all of them use the Bernese Software for computing weekly solutions, relative weighting factors are not applied in the combination.
7) Individual normal equations are accumulated and solved to compute a loosely constrained weekly solution for station coordinates (i.e., coordinates for all stations are constrained to 1 m). This solution, in SINEX format, is submitted to the IGS for the global polyhedron. 8) The combination obtained in (7) is constrained by applying NNR+NNT conditions with respect to the IGS05 stations included in the SIRGAS region to provide constrained coordinates for all SIRGAS-CON (core + densification) stations. The applied IGS05 reference coordinates correspond to the weekly IGS solution for the global network, i.e., the coordinates included in the igsYYPwwww.snx files. This constrained solution provides the final weekly SIRGAS-CON coordinates for practical applications. The DGFI (i.e., IGS RNAAC SIR) weekly combinations are delivered to the IGS Data Centres for combination in the global polyhedron and are made available to users as the official SIRGAS products. The IBGE weekly combinations provide control and back-up. The analysis strategy described above has been applied since GPS week 1495. Before that (from June 1996 to August 2008), the SIRGAS-CON network was processed entirely by DGFI. So far, results show a very good agreement with previous computations; however, the present sub-network distribution has two main disadvantages: 1) not all SIRGAS-CON stations are included in the same number of individual solutions, i.e., they are unequally weighted in the weekly combinations, and 2) since there are not enough Local Processing Centres, the required redundancy (each station processed by at least three processing centres) is not fulfilled. Therefore, efforts are being made to install additional Local Processing Centres in Latin American countries such as Argentina, Ecuador, Mexico, Peru, Uruguay, and Venezuela.
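Step (5) of the combination strategy is a simple threshold screen. The sketch below illustrates that screening logic; only the 10 mm / 20 mm tolerances come from the text, while the data structure and function name are our own illustration, not SIRGAS software.

```python
def screen_stations(residuals, ne_tol=0.010, up_tol=0.020):
    """Flag stations whose coordinate residuals against IGS05 exceed the
    combination tolerances (10 mm in North/East, 20 mm in Up).
    `residuals` maps station IDs to (dN, dE, dUp) tuples in metres."""
    kept, rejected = {}, []
    for station, (dn, de, du) in residuals.items():
        if abs(dn) > ne_tol or abs(de) > ne_tol or abs(du) > up_tol:
            rejected.append(station)  # would be reduced from the normal equations
        else:
            kept[station] = (dn, de, du)
    return kept, rejected
```

In the real procedure this check is iterated with the NNR/NNT alignment (steps 3-5) until no further outliers appear.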
Flexure Based Linear and Rotary Bearings
NASA Technical Reports Server (NTRS)
Voellmer, George M. (Inventor)
2016-01-01
A flexure based linear bearing includes top and bottom parallel rigid plates; first and second flexures connecting the top and bottom plates and constraining exactly four degrees of freedom of relative motion of the plates, the four degrees of freedom being X and Y axis translation and rotation about the X and Y axes; and a strut connecting the top and bottom plates and further constraining exactly one degree of freedom of the plates, the one degree of freedom being one of Z axis translation and rotation about the Z axis.
Aero-Propulsion Technology (APT) Task V Low Noise ADP Engine Definition Study
NASA Technical Reports Server (NTRS)
Holcombe, V.
2003-01-01
A study was conducted to identify and evaluate noise reduction technologies for advanced ducted prop propulsion systems that would allow increased-capacity operation and result in an economically competitive commercial transport. The study investigated the aero/acoustic/structural advancements in fan and nacelle technology required to match or exceed the fuel-burn and economic benefits of a constrained-diameter large Advanced Ducted Propeller (ADP) compared to an unconstrained ADP propulsion system, with a noise goal of 5 to 10 EPNdB reduction relative to FAR 36 Stage 3 at each of the three measuring stations, namely takeoff (cutback), approach, and sideline. A second-generation ADP was selected to operate within the maximum nacelle diameter constraint of 160 in. to allow installation under the wing. The impact of the fan and nacelle technologies of the second-generation ADP on fuel burn and direct operating costs for a typical 3000 nm mission was evaluated through use of a large, twin-engine commercial airplane simulation model. The major emphasis of this study focused on fan blade aero/acoustic and structural technology evaluations and advanced nacelle designs. Results of this study have identified the testing required to verify the interactive performance of these components, along with noise characteristics, by wind tunnel testing utilizing an advanced interaction rig.
NASA Astrophysics Data System (ADS)
Venkatesan, Aparna; Rosenberg, Jessica L.; Salzer, John Joseph; Gronke, Max; Cannon, John M.; Miller, Christopher J.; Dijkstra, Mark
2018-06-01
Low-mass galaxies are thought to play a large role in reionizing the Universe at redshifts z > 6. However, due to limited UV data on low-mass galaxies, the models used to estimate the escape of radiation are poorly constrained. Using theoretical models of radiation transport in dusty galaxies with clumpy gas media, we translate measurements of the UV slopes of a sample of low-mass low-z KISSR galaxies to their escape fraction values in Ly-alpha radiation, fesc(LyA), and in the Ly-continuum, fesc(LyC). These low-mass star-forming systems have potentially steep UV slopes, and could provide a much-needed relation between easily measured spectral properties, such as UV slope or LyA line properties, and the escape of LyA/LyC radiation. Such a relation could advance studies of primordial star clusters and the underlying physical conditions characterizing early galaxies, one of the target observation goals of the soon-to-be-launched James Webb Space Telescope. This work was supported by the University of San Francisco Faculty Development Fund and NSF grant AST-1637339. We thank the Aspen Center for Physics, where some of this work was conducted, and which is supported by National Science Foundation grant PHY-1607611.
A pitfall of piecewise-polytropic equation of state inference
NASA Astrophysics Data System (ADS)
Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.
2018-05-01
The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining the gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.
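The sensitivity to where the prior is defined is, at heart, the change-of-variables Jacobian. A toy one-dimensional analogue (entirely ours, not the paper's piecewise-polytropic computation) makes the point: a prior flat in an "interior" parameter x is not flat in the "exterior" observable y = x^2, so inferences phrased in y inherit a hidden 1/(2*sqrt(y)) weighting.

```python
import math
import random

def induced_density(y):
    """If x ~ Uniform(1, 2) and y = x**2, the induced density on [1, 4] is
    p(y) = |dx/dy| = 1 / (2*sqrt(y)) -- not the flat 1/3 of Uniform(1, 4)."""
    return 1.0 / (2.0 * math.sqrt(y))

def mc_fraction_below(y_cut, n=200_000, seed=1):
    """Monte Carlo: fraction of y = x**2 samples below y_cut when the prior
    is flat in x (toy analogue of a prior set on interior matter parameters
    rather than on masses and radii)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.uniform(1.0, 2.0) ** 2 < y_cut)
    return hits / n
```

Analytically, P(y < 2) = sqrt(2) - 1 ≈ 0.414 under the flat-in-x prior, versus 1/3 under a prior flat in y: the same "data-free" statement carries different probability content depending on the space in which the prior was declared.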
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance diffusion-tensor imaging.
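To give the flavor of gradient-domain contrast mapping, here is a deliberately simplified per-pixel sketch (ours, not the paper's closed-form solution): it matches only the trace of the 2x2 structure tensor by a scalar gain on the RGB gradient, whereas the paper maps the full tensor exactly and adds color constraints before reintegration.

```python
def structure_tensor(grads):
    """grads: per-channel image gradients [(gx, gy), ...] at one pixel.
    Returns the 2x2 structure tensor Z = J^T J summed over channels,
    as the triple (Zxx, Zxy, Zyy)."""
    zxx = sum(gx * gx for gx, gy in grads)
    zxy = sum(gx * gy for gx, gy in grads)
    zyy = sum(gy * gy for gx, gy in grads)
    return zxx, zxy, zyy

def match_contrast(rgb_grads, highdim_grads, eps=1e-12):
    """Scale the 3-channel gradient so its structure-tensor trace matches
    the high-dimensional one -- a scalar simplification of the exact
    tensor mapping described in the paper (illustrative only)."""
    zh = structure_tensor(highdim_grads)
    zl = structure_tensor(rgb_grads)
    gain = ((zh[0] + zh[2]) / (zl[0] + zl[2] + eps)) ** 0.5
    return [(gain * gx, gain * gy) for gx, gy in rgb_grads]
```

After scaling, the modified RGB gradient field would be reintegrated into a displayable image; the closed-form tensor mapping in the paper replaces the scalar gain with an exact matrix transform.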
Agent Based Intelligence in a Tetrahedral Rover
NASA Technical Reports Server (NTRS)
Phelps, Peter; Truszkowski, Walt
2007-01-01
A tetrahedron is a 4-node, 6-strut pyramid structure which is being used by the NASA Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; the tetrahedron "moves" through a sequence of activities: strut extension, shifting the center of gravity, and falling. Currently, strut extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendable strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.
NASA Astrophysics Data System (ADS)
Khode, Urmi B.
High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, resulting in in-plane compressive wall stress that may cause duct buckling. An approach based upon finite element stability analysis, combined with a ply layup and foam thickness determination weight minimization search algorithm, is utilized. Its goal is to achieve an optimized configuration of the sandwich composite as a solution to a constrained minimum-weight design problem, in which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum-weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.
Constrained navigation for unmanned systems
NASA Astrophysics Data System (ADS)
Vasseur, Laurent; Gosset, Philippe; Carpentier, Luc; Marion, Vincent; Morillon, Joel G.; Ropars, Patrice
2005-05-01
The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value". The paper details the "constrained navigation" study (named TEL2), whose main goal is to identify and test a well-balanced task sharing between man and machine to accomplish a robotic task that cannot yet be performed autonomously because of technological limitations. The chosen function is "obstacle avoidance" on rough ground at fairly high speed (40 km/h). State-of-the-art algorithms have been implemented to perform autonomous obstacle avoidance and following of forest borders, using a scanning laser sensor and standard localization functions. Such an "obstacle avoidance" function works well most of the time, but sometimes fails. The study analyzed how the remote operator can manage such failures so that the system remains fully operationally reliable; the operator can act in two ways: a) finely adjust the vehicle's current heading; b) take control of the vehicle "on the fly" (without stopping) and return it to autonomous behavior once motion is secured again. The paper also presents the results obtained from the military acceptance tests performed on the French 4x4 DARDS ATD.
Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required
NASA Astrophysics Data System (ADS)
Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.
2017-12-01
A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements using only one measurement technique or phase type.
However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.
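The resampling logic described in this abstract can be illustrated with a deliberately simplified toy. The candidate set and functional forms below are invented (the real study uses elastic tensors and measured splitting parameters); the point is only that one observable type can leave several candidates indistinguishable, while mixing types singles out the true one:

```python
import numpy as np

# Toy version of the resampling experiment: candidate anisotropy
# "mechanisms" each predict two observable types as functions of azimuth.
# A random draw of one data type leaves an ambiguity that a mixed draw
# resolves. All functional forms here are invented for illustration.

CANDIDATES = {
    "A": (lambda az: np.sin(2 * az), lambda az: np.cos(2 * az)),
    "B": (lambda az: np.sin(2 * az), lambda az: np.cos(4 * az)),
    "C": (lambda az: np.sin(4 * az), lambda az: np.cos(2 * az)),
}

def consistent(observations, tol=1e-6):
    """Return candidate names matching every (obs_type, azimuth, value)."""
    hits = []
    for name, preds in CANDIDATES.items():
        if all(abs(preds[t](az) - v) < tol for t, az, v in observations):
            hits.append(name)
    return hits

rng = np.random.default_rng(2)
az = rng.uniform(0, np.pi, 8)
true = CANDIDATES["A"]
splitting_only = [(0, a, true[0](a)) for a in az]           # one data type
mixed = splitting_only + [(1, a, true[1](a)) for a in az[:4]]

print(consistent(splitting_only))   # ['A', 'B']: ambiguous
print(consistent(mixed))            # ['A']: uniquely constrained
```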
NASA Astrophysics Data System (ADS)
Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.
2017-12-01
Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. 
Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.
Constraining Modern and Historic Mercury Emissions From Gold Mining
NASA Astrophysics Data System (ADS)
Strode, S. A.; Jaeglé, L.; Selin, N. E.; Sunderland, E.
2007-12-01
Mercury emissions from both historic gold and silver mining and modern small-scale gold mining are highly uncertain. Historic mercury emissions can affect the modern atmosphere through reemission from land and ocean, and quantifying mercury emissions from historic gold and silver mining can help constrain modern mining sources. While estimates of mercury emissions during historic gold rushes exceed modern anthropogenic mercury emissions in North America, sediment records in many regions do not show a strong gold rush signal. We use the GEOS-Chem chemical transport model to determine the spatial footprint of mercury emissions from mining and compare model runs from gold rush periods to sediment and ice core records of historic mercury deposition. Based on records of gold and silver production, we include mercury emissions from North and South American mining of 1900 Mg/year in 1880, compared to modern global anthropogenic emissions of 3400 Mg/year. Including this large mining source in GEOS-Chem leads to an overestimate of the modeled 1880 to preindustrial enhancement ratio compared to the sediment core record. We conduct sensitivity studies to constrain the level of mercury emissions from modern and historic mining that is consistent with the deposition records for different regions.
Embleton, Lonnie; Mwangi, Ann; Vreeman, Rachel; Ayuku, David; Braitstein, Paula
2013-01-01
Aims To compile and analyze critically the literature published on street children and substance use in resource-constrained settings. Methods We searched the literature systematically and used meta-analytical procedures to synthesize literature that met the review’s inclusion criteria. Pooled-prevalence estimates and 95% confidence intervals (CI) were calculated using the random-effects model for life-time substance use by geographical region as well as by type of substance used. Results Fifty studies from 22 countries were included into the review. Meta-analysis of combined life-time substance use from 27 studies yielded an overall drug use pooled-prevalence estimate of 60% (95% CI = 51–69%). Studies from 14 countries contributed to an overall pooled prevalence for street children’s reported inhalant use of 47% (95% CI = 36–58%). This review reveals significant gaps in the literature, including a dearth of data on physical and mental health outcomes, HIV and mortality in association with street children’s substance use. Conclusions Street children from resource-constrained settings reported high life-time substance use. Inhalants are the predominant substances used, followed by tobacco, alcohol and marijuana. PMID:23844822
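Pooled-prevalence estimates with confidence intervals of the kind reported here (e.g. 60%, 95% CI 51-69%) typically come from a DerSimonian-Laird random-effects model. A self-contained sketch with invented study counts (not the review's data):

```python
import math

# Hedged sketch of a DerSimonian-Laird random-effects pooled proportion.
# The three studies below (events, totals) are invented for illustration.

def pooled_prevalence(events, totals):
    p = [e / n for e, n in zip(events, totals)]
    var = [pi * (1 - pi) / n for pi, n in zip(p, totals)]
    w = [1 / v for v in var]                      # fixed-effect weights
    p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    # DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)
    w_re = [1 / (v + tau2) for v in var]          # random-effects weights
    p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

est, (lo, hi) = pooled_prevalence([55, 120, 30], [100, 180, 90])
print(round(est, 2), round(lo, 2), round(hi, 2))
```

The between-study variance tau^2 widens the interval relative to a fixed-effect pooling, which is why heterogeneous study sets like this review's yield broad CIs.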
Life Goals Matter to Happiness: A Revision of Set-Point Theory
ERIC Educational Resources Information Center
Headey, Bruce
2008-01-01
Using data from the long-running German Socio-Economic Panel Survey (SOEP), this paper provides evidence that life goals matter substantially to subjective well-being (SWB). Non-zero sum goals, which include commitment to family, friends and social and political involvement, promote life satisfaction. Zero sum goals, including commitment to career…
Goal-setting in clinical medicine.
Bradley, E H; Bogardus, S T; Tinetti, M E; Inouye, S K
1999-07-01
The process of setting goals for medical care in the context of chronic disease has received little attention in the medical literature, despite the importance of goal-setting in the achievement of desired outcomes. Using qualitative research methods, this paper develops a theory of goal-setting in the care of patients with dementia. The theory posits several propositions. First, goals are generated from embedded values but are distinct from values. Goals vary based on specific circumstances and alternatives whereas values are person-specific and relatively stable in the face of changing circumstances. Second, goals are hierarchical in nature, with complex mappings between general and specific goals. Third, there are a number of factors that modify the goal-setting process, by affecting the generation of goals from values or the translation of general goals to specific goals. Modifying factors related to individuals include their degree of risk-taking, perceived self-efficacy, and acceptance of the disease. Disease factors that modify the goal-setting process include the urgency and irreversibility of the medical condition. Pertinent characteristics of the patient-family-clinician interaction include the level of participation, control, and trust among patients, family members, and clinicians. The research suggests that the goal-setting process in clinical medicine is complex, and the potential for disagreements regarding goals substantial. The nature of the goal-setting process suggests that explicit discussion of goals for care may be necessary to promote effective patient-family-clinician communication and adequate care planning.
Constrained space camera assembly
Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.
1999-05-11
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the utilization of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served order with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
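The core of Constrained Position Shifting is a search over landing orders in which no aircraft moves more than a fixed number of positions from its first-come, first-served slot. A brute-force sketch (the separation times below are illustrative, not the study's values):

```python
from itertools import permutations

# Toy Constrained Position Shifting (CPS): re-sequence arrivals so no
# aircraft shifts more than `max_shift` positions from its first-come,
# first-served (FCFS) order, minimizing total runway time. Separation
# values (seconds, leader -> follower by weight class) are illustrative.

SEP = {("heavy", "heavy"): 96, ("heavy", "light"): 181,
       ("light", "heavy"): 72, ("light", "light"): 82}

def schedule_length(order):
    return sum(SEP[(a, b)] for a, b in zip(order, order[1:]))

def cps(fcfs, max_shift):
    best, best_t = None, float("inf")
    for perm in permutations(range(len(fcfs))):
        # perm[new] is the FCFS index landing in position `new`
        if any(abs(new - old) > max_shift for new, old in enumerate(perm)):
            continue
        order = [fcfs[i] for i in perm]
        t = schedule_length(order)
        if t < best_t:
            best, best_t = order, t
    return best, best_t

fcfs = ["heavy", "light", "heavy", "light", "heavy", "light"]
order, t = cps(fcfs, max_shift=2)
print(t <= schedule_length(fcfs))   # CPS never does worse than FCFS
```

Because the identity permutation always satisfies the shift constraint, CPS can only match or improve on the FCFS schedule; real implementations replace the brute-force enumeration with dynamic programming.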
An Empirical Picture for the Evolution of Galaxies outside of Clusters
NASA Astrophysics Data System (ADS)
Saucedo-Morales, Julio; Bieging, John
The main goal of this work is to study the properties of isolated elliptical galaxies with the hope of learning about their formation and evolution. A sample containing ~25% of the galaxies classified as ellipticals in the Karachentseva Catalog of Isolated Galaxies is investigated. Approximately one half of these galaxies appear to be misclassified, a result which may imply a reduction of the percentage of ellipticals in the Karachentseva catalog to (6 ± 2)% of the total population of isolated galaxies. A significant number of merger candidates has also been found among the isolated galaxies. It is argued that the fraction of merger candidates to isolated ellipticals can be used to constrain models for the evolution of compact groups into isolated galaxies.
Missions to the sun and to the earth. [planning of NASA Solar Terrestrial Program
NASA Technical Reports Server (NTRS)
Timothy, A. F.
1978-01-01
The program outlined in the present paper represents an optimized plan of solar terrestrial physics. It is constrained only in the sense that it involves not more than one new major mission per year for the Solar Terrestrial Division during the 1980-1985 period. However, the flight activity proposed, if accepted by the Agency and by Congress, would involve a growth in the existing Solar Terrestrial budget by more than a factor of 2. Thus, the program may be considered as somewhat optimistic when viewed in the broader context of the NASA goals and budget. The Agency's integrated FY 1980 Five Year Plan will show how many missions proposed will survive this planning process.
Spectropolarimetric Observations of a Small Active Region with IBIS
NASA Astrophysics Data System (ADS)
Tarr, Lucas; Judge, Philip G.
2014-06-01
We have used the Interferometric BIdimensional Spectrograph (IBIS) instrument at the Dunn Solar Telescope to measure the polarimetric Stokes IQUV signals for the small active region NOAA 11304. We used three lines generally corresponding to three atmospheric heights ranging from the photosphere to the low corona: Fe I 6302 Å, Na I 5896 Å, and Ca II 8542 Å. Each set of profiles has been inverted using the NICOLE code to determine the vector magnetic field at the three heights throughout the field of view, or the line-of-sight field, as allowed by the level of polarization signal. Comparisons are made between the magnetic and thermal structures with the goal of constraining chromospheric models with the information obtained at multiple heights.
The Tobacco Use Management System: Analyzing Tobacco Control From a Systems Perspective
Young, David; Coghill, Ken; Zhang, Jian Ying
2010-01-01
We use systems thinking to develop a strategic framework for analyzing the tobacco problem and we suggest solutions. Humans are vulnerable to nicotine addiction, and the most marketable form of nicotine delivery is the most harmful. A tobacco use management system has evolved out of governments’ attempts to regulate tobacco marketing and use and to support services that provide information about tobacco's harms and discourage its use. Our analysis identified 5 systemic problems that constrain progress toward the elimination of tobacco-related harm. We argue that this goal would be more readily achieved if the regulatory subsystem had dynamic power to regulate tobacco products and the tobacco industry as well as a responsive process for resourcing tobacco use control activities. PMID:20466970
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
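The estimator described in this abstract (two-stage generalized empirical likelihood with a constrained weighted residual sum) is not reproduced here. As a hedged stand-in, the sketch below shows the shared underlying idea, down-weighting large residuals so outliers cannot dominate the fit, via a standard Huber-type iteratively reweighted least squares baseline:

```python
import numpy as np

# Not the authors' two-stage GEL estimator: a minimal robust-regression
# baseline (Huber-type iteratively reweighted least squares) illustrating
# the shared idea of down-weighting large residuals to resist outliers.

def huber_irls(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / s
        w = c / np.maximum(u, c)          # weight 1 if |r|/s <= c, else c*s/|r|
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
y = 2.0 + 0.5 * X[:, 1] + rng.normal(0, 0.1, 100)
y[:5] += 50.0                                         # gross outliers
print(huber_irls(X, y))                               # close to [2.0, 0.5]
```

With 5% gross outliers the OLS starting fit is badly biased, but the reweighting drives the outliers' influence toward zero; the paper's estimator additionally pursues a maximal breakdown point and full efficiency under normal errors, which this baseline does not.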
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chorover, Jon; Mueller, Karl; O'Day, Peggy
2016-04-02
Objectives of the project: 1. Determine the process coupling that occurs between mineral transformation and contaminant (U and Sr) speciation in acid-uranium waste weathered Hanford sediments. 2. Establish linkages between molecular-scale contaminant speciation and meso-scale contaminant lability, release and reactive transport. 3. Make conjunctive use of molecular- to bench-scale data to constrain the development of a mechanistic, reactive transport model that includes coupling of contaminant sorption-desorption and mineral transformation reactions. Hypotheses tested: - Uranium and strontium speciation in legacy sediments from the U-8 and U-12 Crib sites can be reproduced in bench-scale weathering experiments conducted on unimpacted Hanford sediments from the same formations. - Reactive transport modeling of future uranium and strontium releases from the vadose zone of acid-waste weathered sediments can be effectively constrained by combining molecular-scale information on contaminant bonding environment with grain-scale information on contaminant phase partitioning, and meso-scale kinetic data on contaminant release from the waste-weathered porous media. - Although field contamination and laboratory experiments differ in their diagenetic time scales (decades for field vs. months to years for lab), sediment dissolution, neophase nucleation, and crystal growth reactions that occur during the initial disequilibrium induced by waste-sediment interaction leave a strong imprint that persists over subsequent longer-term equilibration time scales and, therefore, give rise to long-term memory effects. Enabling capabilities developed: Our team developed an iterative measure-model approach that is broadly applicable to elucidate the mechanistic underpinnings of reactive contaminant transport in geomedia subject to active weathering.
Experimental design: Hypotheses were tested by comparing (with a similar set of techniques) the geochemical transformations and transport behaviors that occurred in bench-scale studies of waste-sediment interaction with parallel model-system studies of homogeneous nucleation and neo-phase dissolution. Initial plans were to compare results with core sample extractions from the acid uranium waste impacted U-8 and U-12 Cribs at Hanford (see original proposal and letter of collaboration from J. Zachara). However, this part of the project was impossible because funding for core extractions was eliminated from the DOE budget. Three distinct crib waste aqueous simulants (whose composition is based on the most up-to-date information from field site investigations) were reacted with Hanford sediments in batch and column systems. Coupling of contaminant uptake to mineral weathering was monitored using a suite of methods both during waste-sediment interaction and after, when waste-weathered sediments were subjected to infusion with circumneutral background pore water solutions. Our research was designed to adapt as needed to maintain a strong dialogue between laboratory and modeling investigations so that model development was increasingly constrained by emergent data and understanding. Potential impact of the project to DOE: Better prediction of contaminant uranium transport was achieved by employing multi-faceted lines of inquiry to build a strong bridge between molecular- and field-scale information. By focusing multiple lines and scales of observation on a common experimental design, our collaborative team revealed non-linear and emergent behavior in contaminated weathering systems. A goal of the current project was to expand our modeling capabilities, originally focused on hyperalkaline legacy waste streams, to include acidic weathering reactions that, as described above, were expected to result in profoundly different products.
We were able to achieve this goal, and showed that these products nonetheless undergo analogous silicate and non-silicate transformation, ripening and aging processes. Our prediction that these weathering reactions would vary with waste simulant chemistry resulted in data that were incorporated directly into a reactive transport model structure.
NASA Astrophysics Data System (ADS)
Johnson, M. V. V.; Behrman, K. D.; Atwood, J. D.; White, M. J.; Norfleet, M. L.
2017-12-01
There is substantial interest in understanding how conservation practices and agricultural management impact water quality, particularly phosphorus dynamics, in the Western Lake Erie Basin (WLEB). In 2016, the US and Canada accepted total phosphorus (TP) load targets recommended by the Great Lakes Water Quality Agreement Annex 4 Objectives and Targets Task Team; these were 6,000 MTA delivered to Lake Erie and 3,660 MTA delivered to WLEB. Outstanding challenges include development of metrics to determine achievement of these goals, establishment of sufficient monitoring capacity to assess progress, and identification of appropriate conservation practices to achieve the most cost-effective results. Process-based modeling can help inform decisions to address these challenges more quickly than can system observation. As part of the NRCS-led Conservation Effects Assessment Project (CEAP), the Soil Water Assessment Tool (SWAT) was used to predict impacts of conservation practice adoption reported by farmers on TP loss and load delivery dynamics in WLEB. SWAT results suggest that once the conservation practices in place in 2003-06 and 2012 are fully functional, TP loads delivered to WLEB will average 3,175 MTA and 3,084 MTA, respectively. In other words, SWAT predicts that currently adopted practices are sufficient to meet Annex 4 TP load targets. Yet, WLEB gauging stations show Annex 4 goals are unmet. There are several reasons the model predictions and current monitoring efforts are not in agreement: 1. SWAT assumes full functionality of simulated conservation practices; 2. SWAT does not simulate changing management over time, nor impacts of past management on legacy loads; 3. SWAT assumes WLEB hydrological system equilibrium under simulated management. The SWAT model runs used to construct the scenarios that informed the Annex 4 targets were similarly constrained by model assumptions. 
It takes time for a system to achieve equilibrium when management changes and it takes time for monitoring efforts to measure meaningful changes over time. Careful interpretation of model outputs is imperative for appropriate application of current scientific knowledge to inform decision making, especially when models are used to set spatial and temporal goals around conservation practice adoption and water quality.
Kerns, J William; Winter, Jonathan D; Winter, Katherine M; Boyd, Terry; Etz, Rebecca S
2018-01-01
Guidelines, policies, and warnings have been applied to reduce the use of medications for behavioral and psychological symptoms of dementia (BPSD). Because of rare dangerous side effects, antipsychotics have been singled out in these efforts. However, antipsychotics are still prescribed "off label" to hundreds of thousands of seniors residing in nursing homes and communities. Our objective was to evaluate how and why primary-care physicians (PCPs) employ nonpharmacologic strategies and drugs for BPSD. We conducted semi-structured interviews, analyzed via template analysis, immersion and crystallization, and thematic development, with 26 PCPs (16 family practice, 10 general internal medicine) in full-time primary-care practice for at least 3 years in northwestern Virginia. PCPs described 4 major themes regarding BPSD management: (1) nonpharmacologic methods have substantial barriers; (2) medication use is not constrained by those barriers and is perceived as easy, efficacious, reasonably safe, and appropriate; (3) pharmacologic policies decrease the use of targeted medications, including antipsychotics, but also have unintended consequences such as increased use of alternative risky medications; and (4) PCPs need practical evidence-based guidelines for all aspects of BPSD management. PCPs continue to prescribe medications because they meet patient-oriented goals and because PCPs perceive drugs, including antipsychotics and their alternatives, to be more effective and less dangerous than evidence suggests. To optimally treat BPSD, PCPs need supportive verified prescribing guidelines and access to nonpharmacologic modalities that are as affordable, available, and efficacious as drugs; these require and deserve significant additional research and payer support. Community PCPs should be included in BPSD policy and guideline development. © Copyright 2018 by the American Board of Family Medicine.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
NASA Astrophysics Data System (ADS)
Beachly, M. W.; Hooft, E. E.; Toomey, D. R.; Waite, G. P.
2011-12-01
Imaging magmatic systems improves our understanding of magma ascent and storage in the crust and contributes to hazard assessment. Seismic tomography reveals crustal magma bodies as regions of low velocity; however, the ability of delay-time tomography to detect small, low-velocity bodies is limited by wavefront healing. Alternatively, crustal magma chambers have been identified from secondary phases including P and S wave reflections and conversions. We use a combination of P-wave tomography and finite-difference waveform modeling to characterize a shallow crustal magma body at Newberry Volcano, central Oregon. Newberry's eruptions are silicic within the central caldera and mafic on its periphery, suggesting a central silicic magma storage system. The system may still be active, with a recent eruption ~1300 years ago and a drill hole temperature of 256 °C at only 932 m depth. A low-velocity anomaly previously imaged at 3-5 km beneath the caldera indicates either a magma body or a fractured pluton. With the goal of detecting secondary arrivals from a magma chamber beneath Newberry Volcano, we deployed a line of densely spaced (~300 m), three-component seismometers that recorded a shot of opportunity from the High Lava Plains Experiment in 2008. The data record a secondary P-wave arrival originating from beneath the caldera. In addition, we combine travel-time data from our 2008 experiment with data collected in the 1980s by the USGS for a P-wave tomography inversion to image velocity structure to 6 km depth. The inversion includes 16 active sources, 322 receivers, and 1007 P-wave first arrivals. The tomography results reveal a high-velocity, ring-like anomaly beneath the caldera ring faults to 2 km depth that surrounds a shallow low-velocity region. Beneath 2.5 km, high-velocity anomalies are concentrated east and west of the caldera. A central low-velocity body lies below 3 km depth.
Tomographic inversions of synthetic data suggest that the central low-velocity body beneath 3 km depth is not well resolved and that, for example, an unrealistically large low-velocity body with a volume up to 72 km³ at 40% velocity reduction (representing 30±7% partial melt) could be consistent with the observed travel-times. We use the tomographically derived velocity structure to construct 2D finite-difference models and include synthetic low-velocity bodies in these models to test various magma chamber geometries and melt contents. Waveform modeling identifies the observed secondary phase as a transmitted P-wave formed by delaying and focusing P-wave energy through the low-velocity region. We will further constrain the size and shape of the low-velocity region by comparing arrival times and amplitudes of observed and synthetic primary and secondary phases. Secondary arrivals provide compelling evidence for an active crustal magmatic system beneath Newberry Volcano and demonstrate the ability of waveform modeling to constrain the nature of magma bodies beyond the limits of seismic tomography.
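The detectability question above comes down to how large a travel-time delay a low-velocity body produces before wavefront healing erodes it. A straight-ray sketch with illustrative numbers (our own, not values from the study, which uses 2D finite-difference modeling) shows the scale of the effect:

```python
def ray_delay(path_km, v_bg_km_s, reduction):
    """Extra travel time for a straight ray crossing a body whose
    velocity is reduced by fraction `reduction` over `path_km` of path."""
    v_low = v_bg_km_s * (1.0 - reduction)
    return path_km / v_low - path_km / v_bg_km_s

# A 40% velocity reduction over a 4 km path in 6 km/s crust delays the
# arrival by roughly 0.44 s -- large on paper, but diffraction around a
# body only a few wavelengths wide heals much of this delay, which is
# why waveform modeling of secondary phases adds constraint.
print(round(ray_delay(4.0, 6.0, 0.40), 3))  # -> 0.444
```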
The On-line Waste Library (OWL): Usage and Inventory Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sassani, David; Jang, Je-Hun; Mariner, Paul
The Waste Form Disposal Options Evaluation Report (SNL 2014) evaluated disposal of both Commercial Spent Nuclear Fuel (CSNF) and DOE-managed HLW and Spent Nuclear Fuel (DHLW and DSNF) in the variety of disposal concepts being evaluated within the Used Fuel Disposition Campaign. That work covered a comprehensive inventory and a wide range of disposal concepts. The primary goal of this work is to evaluate the information needs for analyzing disposal solely of a subset of those wastes in a Defense Repository (DRep; i.e., those wastes that are either defense-related or managed by DOE but are not commercial in origin). A potential DRep also appears to be safe in the range of geologic mined repository concepts, but may have different concepts and features because of the very different inventory of waste that would be included. The focus of this status report is to cover the progress made in FY16 toward: (1) developing a preliminary DRep included inventory for engineering/design analyses; (2) assessing the major differences of this included inventory relative to that in other analyzed repository systems and the potential impacts to disposal concepts; (3) designing and developing an on-line waste library (OWL) to manage the information of all those wastes and their waste forms (including CSNF if needed); and (4) constraining post-closure waste form degradation performance for safety assessments of a DRep. In addition, some continuing work is reported on identifying potential candidate waste types/forms to be added to the full list from SNL (2014; see Table C-1) which also may be added to the OWL in the future. The status for each of these aspects is reported herein.
Constraining storm-scale forecasts of deep convective initiation with surface weather observations
NASA Astrophysics Data System (ADS)
Madaus, Luke
Successfully forecasting when and where individual convective storms will form remains an elusive goal for short-term numerical weather prediction. In this dissertation, the convective initiation (CI) challenge is considered as a problem of insufficiently resolved initial conditions and dense surface weather observations are explored as a possible solution. To better quantify convective-scale surface variability in numerical simulations of discrete convective initiation, idealized ensemble simulations of a variety of environments where CI occurs in response to boundary-layer processes are examined. Coherent features 1-2 hours prior to CI are found in all surface fields examined. While some features were broadly expected, such as positive temperature anomalies and convergent winds, negative temperature anomalies due to cloud shadowing are the largest surface anomaly seen prior to CI. Based on these simulations, several hypotheses about the required characteristics of a surface observing network to constrain CI forecasts are developed. Principally, these suggest that observation spacings of less than 4-5 km would be required, based on correlation length scales. Furthermore, it is anticipated that 2-m temperature and 10-m wind observations would likely be more relevant for effectively constraining variability than surface pressure or 2-m moisture observations based on the magnitudes of observed anomalies relative to observation error. These hypotheses are tested with a series of observing system simulation experiments (OSSEs) using a single CI-capable environment. The OSSE results largely confirm the hypotheses, and with 4-km and particularly 1-km surface observation spacing, skillful forecasts of CI are possible, but only within two hours of CI time. Several facets of convective-scale assimilation, including the need for properly calibrated localization and problems from non-Gaussian ensemble estimates of the cloud field are discussed.
Finally, the characteristics of one candidate dense surface observing network are examined: smartphone pressure observations. Available smartphone pressure observations (and 1-hr pressure tendency observations) are tested by assimilating them into convective-allowing ensemble forecasts for a three-day active convective period in the eastern United States. Although smartphone observations contain noise and internal disagreement, they are effective at reducing short-term forecast errors in surface pressure, wind and precipitation. The results suggest that smartphone pressure observations could become a viable mesoscale observation platform, but more work is needed to enhance their density and reduce error. This work concludes by reviewing and suggesting other novel candidate observation platforms with a potential to improve convective-scale forecasts of CI.
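The spacing argument above can be made concrete with a simple correlation model. A Gaussian correlation function is assumed here purely for illustration (the dissertation derives its length scales from the ensemble simulations): observations can only constrain structure they are correlated with, so spacing must stay within roughly the correlation length.

```python
import math

def gaussian_corr(r_km, length_km):
    """Correlation between two points r_km apart under an assumed
    Gaussian correlation model with the given length scale."""
    return math.exp(-0.5 * (r_km / length_km) ** 2)

# With a ~3 km correlation length (illustrative), neighboring stations
# at 4-5 km spacing still share useful information (~0.41 and ~0.25),
# while at 15 km they are effectively independent and cannot constrain
# convective-scale structure between them.
for spacing_km in (4.0, 5.0, 15.0):
    print(spacing_km, round(gaussian_corr(spacing_km, 3.0), 3))
```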
Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset
NASA Astrophysics Data System (ADS)
Eyre, T.; Van der Baan, M.
2016-12-01
Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data are usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms, including those with high tensile components. The ability of each geometry to constrain the true source mechanisms is therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well.
Source locations and mechanisms are calculated and the results discussed, including the biases caused by the array geometry. The majority of the events are located within the target reservoir; however, a small, seemingly disconnected cluster of events appears 100 m above the reservoir.
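The shear-versus-tensile distinction these inversions try to resolve is usually quantified by decomposing the moment tensor into isotropic (ISO), double-couple (DC), and compensated-linear-vector-dipole (CLVD) parts. A minimal sketch using one standard convention (a Vavryčuk-style decomposition; the paper's own convention may differ):

```python
import numpy as np

def decompose(M):
    """ISO, CLVD, and DC fractions of a 3x3 symmetric moment tensor,
    using eigenvalues m1 >= m2 >= m3 (one common convention)."""
    m1, m2, m3 = sorted(np.linalg.eigvalsh(M), reverse=True)
    iso = (m1 + m2 + m3) / 3.0
    clvd = (2.0 / 3.0) * (m1 + m3 - 2.0 * m2)
    dc = 0.5 * (m1 - m3 - abs(m1 + m3 - 2.0 * m2))
    norm = abs(iso) + abs(clvd) + dc
    return iso / norm, clvd / norm, dc / norm

# Pure shear (double-couple) source: eigenvalues (1, 0, -1)
print(decompose(np.diag([1.0, 0.0, -1.0])))   # -> (0.0, 0.0, 1.0)
# Tensile crack in a lambda = mu medium: eigenvalues (3, 1, 1)
print(decompose(np.diag([3.0, 1.0, 1.0])))    # ISO + CLVD, zero DC
```

With incomplete focal-sphere coverage (a single borehole array), these fractions trade off against one another, which is the bias the synthetic tests probe.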
NASA Technical Reports Server (NTRS)
Danielson, Lisa R.; Righter, K.; Sutton, S.; Newville, M.; Le, L.
2007-01-01
Tungsten is important in constraining core formation of the Earth because this element is a moderately siderophile element (depleted approximately 10-fold relative to chondrites) and, as a member of the Hf-W isotopic system, it is useful in constraining the timing of core formation. A number of previous experimental studies have been carried out to determine the silicate solubility and metal-silicate partitioning behavior of W, including its concomitant oxidation state. However, results of previous studies (figure 1) are inconsistent as to whether W occurs as W(4+) or W(6+).
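The valence question is commonly settled from the slope of dissolved-metal concentration against oxygen fugacity: for a dissolution reaction M + (n/4) O2 = MO(n/2), log solubility increases with log fO2 with slope n/4, so W(4+) gives slope 1 and W(6+) gives slope 3/2. A sketch with synthetic numbers (not data from the studies cited):

```python
import numpy as np

# Synthetic solubility data generated with slope 3/2, i.e. consistent
# with W dissolving as W6+ (illustrative values only).
log_fo2 = np.array([-12.0, -11.0, -10.0, -9.0])
log_w = 1.5 * log_fo2 + 10.0

slope = np.polyfit(log_fo2, log_w, 1)[0]  # fit log(solubility) vs log(fO2)
valence = round(4 * slope)                # n = 4 * slope
print(valence)  # -> 6 for these synthetic data
```

Scatter and a slope intermediate between 1 and 3/2 in real datasets is one way the W(4+)/W(6+) ambiguity arises.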
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith R.
2016-08-01
The overall goal is to downscale ocean conditions predicted by an existing global prediction system and evaluate the results using observations from the Gulf of Maine, Scotian Shelf and adjacent deep ocean. The first step is to develop a one-way nested regional model and evaluate its predictions using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that the regional model predicts more realistic fields than the global system on the shelf because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is not because the regional model's dynamics are flawed but rather is the result of internally generated variability in deep water that leads to decoupling of the regional model from the global system. To overcome this problem, the next step is to spectrally nudge the regional model to the large scales (length scales > 90 km) of the global system. It is shown this leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cutoff wavelength of the spectral nudging.
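Spectral nudging as described can be sketched in one dimension: transform the field, relax only wavenumbers whose wavelength exceeds the cutoff toward the driving model, and transform back. This is a periodic-domain toy (names and the 1D setup are ours; the actual implementation is multidimensional and applied within the model time stepping):

```python
import numpy as np

def spectral_nudge(field, target, dx_km, cutoff_km=90.0, alpha=0.1):
    """Relax wavelengths longer than cutoff_km toward `target`,
    leaving shorter scales free to evolve (1D periodic sketch)."""
    k = np.fft.fftfreq(field.size, d=dx_km)   # spatial frequency, cycles/km
    large = np.abs(k) < 1.0 / cutoff_km       # wavelength > cutoff
    fh, th = np.fft.fft(field), np.fft.fft(target)
    fh[large] += alpha * (th[large] - fh[large])
    return np.fft.ifft(fh).real

x = np.arange(256) * 4.0                      # 1024 km periodic domain
background = np.sin(2 * np.pi * x / 256.0)    # 256 km wave: gets nudged
eddies = np.sin(2 * np.pi * x / 32.0)         # 32 km wave: left alone
nudged = spectral_nudge(np.zeros_like(x), background + eddies, 4.0, alpha=0.5)
# `nudged` now holds half the 256 km wave and none of the 32 km wave.
```

Because the correction only touches wavenumbers below the cutoff, eddy-scale structure is free to develop, consistent with the finding that nudging does not suppress, and can even favour, small-scale variability.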
Magnetotelluric Data, Central Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Central Yucca Flat, Profile 1, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, North Central Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for north central Yucca Flat, Profile 7, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Northern Frenchman Flat, Nevada Test Site Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Frenchman Flat, Profile 3, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Across Quartzite Ridge, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT soundings across Quartzite Ridge, Profiles 5, 6a, and 6b, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Southern Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Southern Yucca Flat, Profile 4, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Northern Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.M.; Rodriguez, B.D.; Asch, T.H.
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Profile 2 (Figure 1), located in the northern Yucca Flat area. No interpretation of the data is included here.
Livestock vaccinations translate into increased human capital and school attendance by girls.
Marsh, Thomas L; Yoder, Jonathan; Deboch, Tesfaye; McElwain, Terry F; Palmer, Guy H
2016-12-01
To fulfill the United Nations' Sustainable Development Goals (SDGs), it is useful to understand whether and how specific agricultural interventions improve human health, educational opportunity, and food security. In sub-Saharan Africa, 75% of the population is engaged in small-scale farming, and 80% of these households keep livestock, which represent a critical asset and provide protection against economic shock. For the 50 million pastoralists, livestock play an even greater role. Livestock productivity for pastoralist households is constrained by multiple factors, including infectious disease. East Coast fever, a tick-borne protozoal disease, is the leading cause of calf mortality in large regions of eastern and southern Africa. We examined pastoralist decisions to adopt vaccination against East Coast fever and the economic outcomes of adoption. Our estimation strategy provides an integrated model of adoption and impact that includes direct effects of vaccination on livestock health and productivity outcomes, as well as indirect effects on household expenditures, such as child education, food, and health care. On the basis of a cross-sectional study of Kenyan pastoralist households, we found that vaccination provides significant net income benefits from reduction in livestock mortality, increased milk production, and savings by reducing antibiotic and acaricide treatments. Households directed the increased income resulting from East Coast fever vaccination into childhood education and food purchase. These indirect effects of livestock vaccination provide a positive impact on rural, livestock-dependent families, contributing to poverty alleviation at the household level and more broadly to achieving SDGs.
Plant-animal interactions in suburban environments: implications for floral evolution.
Irwin, Rebecca E; Warren, Paige S; Carper, Adrian L; Adler, Lynn S
2014-03-01
Plant interactions with mutualists and antagonists vary remarkably across space, and have played key roles in the ecology and evolution of flowering plants. One dominant form of spatial variation is human modification of the landscape, including urbanization and suburbanization. Our goal was to assess how suburbanization affected plant-animal interactions in Gelsemium sempervirens in the southeastern United States, including interactions with mutualists (pollination) and antagonists (nectar robbing and florivory). Based on differences in plant-animal interactions measured in multiple replicate sites, we then developed predictions for how these differences would affect patterns of natural selection, and we explored the patterns using measurements of floral and defensive traits in the field and in a common garden. We found that Gelsemium growing in suburban sites experienced more robbing and florivory as well as more heterospecific but not conspecific pollen transfer. Floral traits, particularly corolla length and width, influenced the susceptibility of plants to particular interactors. Observational data of floral traits measured in the field and in a common garden provided some supporting but also some conflicting evidence for the hypothesis that floral traits evolved in response to differences in species interactions in suburban vs. wild sites. However, the degree to which plants can respond to any one interactor may be constrained by correlations among floral morphological traits. Taken together, consideration of the broader geographic context in which organisms interact, in both suburban and wild areas, is fundamental to our understanding of the forces that shape contemporary plant-animal interactions and selection pressures in native species.
Austerity and the "sector-wide approach" to health: The Mozambique experience.
Pfeiffer, James; Gimbel, Sarah; Chilundo, Baltazar; Gloyd, Stephen; Chapman, Rachel; Sherr, Kenneth
2017-08-01
Fiscal austerity policies imposed by the IMF have reduced investments in social services, leaving post-independence nations like Mozambique struggling to recover from civil war and high disease burden. By 2000, a sector-wide approach (SWAp) was promoted to maximize aid effectiveness. 'Like-minded' bilateral donors from Europe and Canada promoted a unified approach to health sector support focusing on joint planning, common basket funding, and streamlined monitoring and evaluation to improve sector coordination, amplify country ownership, and build sustainable health systems. Notable donors, including the US government and the Global Fund, did not participate in the SWAp, and increased vertical funding weakened the SWAp in favor of non-governmental organizations (NGOs). In spite of some success in harmonizing aid to the health sector, the SWAp experience in Mozambique demonstrates how continued austerity regimes that severely constrain public spending will continue to undermine health system strengthening in Africa, even in the midst of high levels of foreign aid with the ostensible purpose of strengthening those systems. The SWAp story provides a poignant illustration of how continued austerity will impede progress toward Sustainable Development Goal 3 (SDG 3): "Achieve universal health coverage, including financial risk protection, access to quality essential health-care services and access to safe, effective, quality and affordable essential medicines and vaccines for all". However, the SWAp continues to offer an alternative model of health system support that can provide a foundation for resistance to renewed austerity measures. Copyright © 2017 Elsevier Ltd. All rights reserved.
Constant-Time Pattern Matching For Real-Time Production Systems
NASA Astrophysics Data System (ADS)
Parson, Dale E.; Blank, Glenn D.
1989-03-01
Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lack of persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend to more critical stimuli or requirements selectively. Our match algorithm uses priorities to focus matching. It compares priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.
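The two-partition idea can be sketched as a fixed hash dispatch for the automatic partition (one lookup per input, O(1), no search) feeding a priority queue consumed by the controlled partition. The names and data structures here are our illustration, not the paper's actual network implementation:

```python
import heapq

# Automatic partition: fixed at build time, so matching is a single
# dict lookup -- constant time, no persistent memory, purely reactive.
AUTOMATIC = {
    "overheat":  (0, "shutdown"),      # (priority, reflex response)
    "low_power": (1, "reduce_load"),
}

controlled_queue = []  # min-heap of (priority, message) for slow rules

def sense(condition):
    """React in O(1) if the condition is critical; either way, queue the
    message so the controlled partition can reason about it later,
    most-critical first (priority compared during matching, not at
    conflict resolution)."""
    hit = AUTOMATIC.get(condition)
    priority = hit[0] if hit else 99
    heapq.heappush(controlled_queue, (priority, condition))
    return hit[1] if hit else None

print(sense("overheat"))   # immediate reflex response
print(sense("anomaly"))    # no reflex; deferred to controlled partition
```

High-priority messages surface first from the heap, which mirrors how automatic-partition output can preempt the controlled partition's slower, search-based processing.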
NASA Astrophysics Data System (ADS)
Sisk-Hilton, Stephanie Lee
This study examines the two-way relationship between an inquiry-based professional development model and teacher enactors. The two-year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact.
The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.
A Linux Workstation for High Performance Graphics
NASA Technical Reports Server (NTRS)
Geist, Robert; Westall, James
2000-01-01
The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.
Reasoning and planning in dynamic domains: An experiment with a mobile robot
NASA Technical Reports Server (NTRS)
Georgeff, M. P.; Lansky, A. L.; Schoppers, M. J.
1987-01-01
Progress made toward having an autonomous mobile robot reason and plan complex tasks in real-world environments is described. To cope with the dynamic and uncertain nature of the world, researchers use a highly reactive system to which are attributed attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex real-world domains. The system was tested using SRI's autonomous robot (Flakey) in a space station scenario involving navigation and the performance of an emergency task.
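The belief-desire-intention control loop described above can be caricatured in a few lines. The sketch below is a minimal illustration, not SRI's actual Procedural Reasoning System: the one-dimensional world, the names, and the single-step elaboration are all invented for the example. The point is only that intentions are partially elaborated before acting and are dropped when beliefs change.

```python
# Minimal belief-desire-intention sketch: the agent commits to one step
# at a time (partial plan elaboration) and reconsiders its intentions
# whenever its beliefs about the world change.

def make_agent(goal):
    return {"beliefs": {}, "goal": goal, "intention": []}

def elaborate(agent):
    """Pick only the next step toward the goal, not a full route."""
    here, target = agent["beliefs"]["position"], agent["goal"]
    if here == target:
        return []
    step = 1 if target > here else -1
    return [("move", step)]

def perceive(agent, world):
    old = dict(agent["beliefs"])
    agent["beliefs"].update(world)
    if old != agent["beliefs"]:       # the world changed: drop intentions
        agent["intention"] = []

def act(agent):
    if not agent["intention"]:
        agent["intention"] = elaborate(agent)
    if agent["intention"]:
        op, step = agent["intention"].pop(0)
        agent["beliefs"]["position"] += step

agent = make_agent(goal=3)
perceive(agent, {"position": 0})
for _ in range(10):
    act(agent)
print(agent["beliefs"]["position"])   # reaches the goal: 3
```

Because each intention holds only the next step, a changed belief never invalidates a long, carefully elaborated plan, which is the over-commitment the abstract describes.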
Silbereisen, Rainer K.; Heckhausen, Jutta
2010-01-01
This paper investigates how individuals deal with demands of social and economic change in the domains of work and family when opportunities for their mastery are unfavorable. Theoretical considerations and empirical research suggest that with unattainable goals and unmanageable demands, motivational disengagement and self-protective cognitions bring about better outcomes than continued goal striving. Building on research on developmental deadlines, this paper introduces the concept of developmental barriers to address socioeconomic conditions of severely constrained opportunities in certain geographical regions. Mixed-effects methods were used to model cross-level interactions between individual-level compensatory secondary control and regional-level opportunity structures, in terms of social indicators of economic prosperity and family friendliness. Results showed that disengagement was positively associated with general life satisfaction in regions that were economically devastated and had below-average services for families. In regions that were economically well off and family-friendly, the association was negative. Similar results were found for self-protection concerning domain-specific satisfaction with life. These findings suggest that compensatory secondary control can be an adaptive way of mastering a demand when primary control is not possible. PMID:21170393
Successful ageing for psychiatrists.
Peisah, Carmelle
2016-04-01
This paper aims to explore the concept and determinants of successful ageing as they apply to psychiatrists as a group, and as they can be applied specifically to individuals. Successful ageing is a heterogeneous, inclusive concept that is subjectively defined. No longer constrained by the notion of "super-ageing", successful ageing can still be achieved in the face of physical and/or mental illness. Accordingly, it remains within the reach of most of us. It can, and should be, person-specific and individually defined, specific to one's bio-psycho-social and occupational circumstances and, importantly, reserves. Successful professional ageing is predicated upon insight into signature strengths, with realistic goal selection and the substitution of new goals, given the dynamic nature of these constructs as we age. Other essential elements are generativity and self-care. Given that insight is key, taking regular stock, or an inventory, of our reserves across bio-psycho-social domains might be helpful. Importantly, for successful ageing, this needs to be suitably matched to the professional task and load. This lends itself to a renewable personal ageing plan, which should be systemically adopted with routine expectations of self-care and professional responsibility. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Image Problems Deplete the Number of Women in Academic Applicant Pools
NASA Astrophysics Data System (ADS)
Sears, Anna L. W.
Despite near numeric parity in graduate schools, women and men in science and mathematics may not perceive the same opportunities for career success. Instead, female doctoral students' career ambitions may often be influenced by perceptions of irreconcilable conflicts between personal and academic goals. This article reports the results of a career goals survey of math and science doctoral students at the University of California, Davis. Fewer women than men began their doctoral programs seeking academic research careers. Of those who initially favored academic research, twice as many women as men downgraded these ambitions during graduate school. Women were more likely to feel geographically constrained by family ties and to express concern about balancing work and family, long work hours, and tenure clock inflexibility. These results partially explain why the percentage of women in academic applicant pools is often well below the number of Ph.D. recipients. The current barriers to gender equity thus cannot be completely ameliorated by increasing the number of women in the pipeline or by altered hiring practices, but changes must be undertaken to make academic research careers more flexible, family friendly, and attractive to women.
Limb versus speech motor control: a conceptual review.
Grimme, Britta; Fuchs, Susanne; Perrier, Pascal; Schöner, Gregor
2011-01-01
This paper presents a comparative conceptual review of speech and limb motor control. Speech is essentially cognitive in nature and constrained by the rules of language, while limb movement is often oriented to physical objects. We discuss the issue of intrinsic vs. extrinsic variables underlying the representations of motor goals, as well as whether motor goals specify terminal postures or entire trajectories. Timing and coordination are recognized as an area of strong interchange between the two domains. Although coordination among different motor acts within a sequence and coarticulation are central to speech motor control, they have received only limited attention in manipulatory movements. The biomechanics of speech production is characterized by the presence of soft tissue, a variable number of degrees of freedom, and the challenges of high rates of production, while limb movements deal more typically with inertial constraints from manipulated objects. This comparative review thus leads us to identify many strands of thinking that are shared across the two domains, but also points us to issues on which approaches in the two domains differ. We conclude that conceptual interchange between the fields of limb and speech motor control has been useful in the past and promises continued benefit.
Kite: Status of the External Metrology Testbed for SIM
NASA Technical Reports Server (NTRS)
Dekens, Frank G.; Alvarez-Salazar, Oscar; Azizi, Alireza; Moser, Steven; Nemati, Bijan; Negron, John; Neville, Timothy; Ryan, Daniel
2004-01-01
Kite is a system-level testbed for the External Metrology system of the Space Interferometry Mission (SIM). The External Metrology system tracks the fiducials located at the centers of the interferometer's siderostats. The relative changes in their positions need to be tracked to tens of picometers in order to correct for thermal deformations. To verify performance at these levels, the Kite testbed was built to test both the metrology gauges and our ability to optically model the system. Kite is an over-constrained system in which 6 lengths are measured but only 5 are needed to determine the system. The agreement in the over-constrained length needs to be on the order of 140 pm for the SIM Wide-Angle observing scenario and 8 pm for the Narrow-Angle observing scenario. We demonstrate that we have met the Wide-Angle goal with our current setup. For the Narrow-Angle case, we have so far reached the goal only for on-axis observations. We describe the testbed improvements made since our initial results, and outline future Kite changes that will add further effects that SIM faces, making the testbed more SIM-like.
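The over-constrained geometry lends itself to a simple closure check: for four coplanar fiducials, the six pairwise lengths must satisfy a consistency condition (the Cayley-Menger determinant vanishes), so a redundant gauge reading can be tested against the other five. The sketch below is illustrative only; the unit-square geometry and the determinant-based residual are our assumptions, not Kite's actual gauge layout or data reduction.

```python
import itertools
import math
import numpy as np

def cayley_menger(d):
    """5x5 Cayley-Menger determinant of the 6 pairwise distances d[(i, j)]
    among 4 points; it vanishes iff the points are coplanar, so its
    magnitude acts as a closure residual on the redundant length."""
    m = np.ones((5, 5))
    m[0, 0] = 0.0
    for i in range(4):
        m[i + 1, i + 1] = 0.0
        for j in range(i + 1, 4):
            m[i + 1, j + 1] = m[j + 1, i + 1] = d[(i, j)] ** 2
    return np.linalg.det(m)

# four coplanar fiducials (illustrative unit-square geometry, not SIM's)
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
d = {(i, j): math.dist(pts[i], pts[j])
     for i, j in itertools.combinations(range(4), 2)}
print(abs(cayley_menger(d)) < 1e-9)   # consistent lengths: True
d[(0, 3)] += 1e-3                     # simulate an error on one gauge
print(abs(cayley_menger(d)) < 1e-9)   # closure violated: False
```

In the real testbed the residual would be compared against the 140 pm (Wide-Angle) or 8 pm (Narrow-Angle) budgets rather than a fixed numerical tolerance.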
Beran, Michael J; Parrish, Audrey E; Futch, Sara E; Evans, Theodore A; Perdue, Bonnie M
2015-05-01
Human and nonhuman primates are not mentally constrained to the present. They can remember the past and, at least to an extent, anticipate the future. Anticipation of the future ranges from long-term prospection, such as planning for retirement, to more short-term future-oriented cognition, such as planning a route through a maze. Here we tested a great ape species (chimpanzees), an Old World monkey species (rhesus macaques), a New World monkey species (capuchin monkeys), and human children on a computerized maze task. All subjects had to move a cursor through a maze to reach a goal at the bottom of the screen. For best performance on the task, subjects had to "plan ahead" to the end of the maze to move the cursor in the correct direction, avoid traps, and reverse directions if necessary. Mazes varied in difficulty. Chimpanzees were better than both monkey species, and monkeys showed a particular deficit when moving away from the goal or changing directions was required. Children showed a similar pattern to monkeys regarding the effects of reversals and moves away from the goal, but their overall performance in terms of correct maze completion was similar to the chimpanzees. The results highlight similarities as well as differences in planning across species and the role that inhibitory control may play in future-oriented cognition in primates. (c) 2015 APA, all rights reserved.
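The planning demand of the computerized task, looking ahead to the goal so that traps and forced detours are handled before the first move, is exactly what an explicit shortest-path search does. The toy maze encoding below is our own illustration, not the study's software.

```python
from collections import deque

def plan(maze, start, goal):
    """Breadth-first search through an ASCII maze ('#' wall, 'T' trap);
    returns the cells of a shortest route, planned before the first move."""
    rows, cols = len(maze), len(maze[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []
            while (r, c) != start:
                path.append((r, c))
                r, c = prev[(r, c)]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] not in "#T" and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None   # no trap-free route exists

maze = ["....",
        ".##.",
        ".T#.",
        "...."]
print(plan(maze, (0, 0), (3, 3)))   # six moves, skirting the trap at (2, 1)
```

Because the whole route is computed up front, moves that temporarily increase the distance to the goal cost the planner nothing, whereas the abstract's behavioral results suggest exactly those moves are hard for the monkeys.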
Application of ELJ to create and maintain side channels in a dynamic gravel bed river
NASA Astrophysics Data System (ADS)
Crabbe, E.; Crowe Curran, J.; Ockelford, A.
2017-12-01
Braided and anastomosing rivers create and maintain a large amount of side channel habitat. Unfortunately, many rivers that were once multi-channel rivers have been constrained to single-thread channels as a consequence of land use changes that occurred in the 19th and 20th centuries or earlier. An increasingly common management goal today is the re-creation of self-maintaining side and tributary habitat through means as natural as possible. This work examines the geomorphic history of one such channel and the success of recent rehabilitation efforts. Our case study comes from the South Fork Nooksack River in the Cascade Range in Washington State. The Nooksack River is a gravel- and sand-bed channel with a snowmelt-dominated hydrograph. Engineered log jams (ELJs) have been employed to direct flow into side and chute channels with the larger goals of increasing overall channel complexity and salmon spawning opportunities. ELJs have been constructed on the channel since the 2000s, and the ELJs in the study reaches range in age up to 10 years. The size and design of individual jams within the reach vary, enabling a comparison between jam types. ELJs are evaluated for their ability to maintain gravel bar locations and open tributary channels through the snowmelt season over the reach scale. Additional goals of trapping wood onto the jams and existing bars, stabilizing channel banks, and allowing for the growth of bar vegetation are also examined.
Modeling Spectral Turnovers in Interplanetary Shocks Observed by ULYSSES
NASA Astrophysics Data System (ADS)
Summerlin, E. J.; Baring, M. G.
2009-12-01
Interplanetary shocks in the heliosphere provide excellent test cases for the simulation and theory of particle acceleration at shocks thanks to the presence of in-situ measurements and a relatively well understood initial particle distribution. The Monte-Carlo test particle simulation employed in this work has previously been used to study injection and acceleration from thermal energies into the high-energy power-law tail at co-rotating interaction regions (CIRs) in the heliosphere, presuming a steady-state planar shock (Summerlin & Baring, 2006; Baring & Summerlin, 2008). These simulated power spectra compare favorably with in-situ measurements from the ULYSSES spacecraft below 60 keV. However, to effectively model the high-energy exponential cutoff observed in these distributions at energies above 60 keV, simulations must apply spatial or temporal constraints to the acceleration process. This work studies the effects of a variety of temporal and spatial constraints on the high-energy cutoff, including spatial constraints on the turbulent region around the shock as determined by magnetometer data, spatial constraints related to the scale size of the shock, and constraints on the acceleration time based on the known limits of the shock's lifetime. Simulated particle spectra are compared to those observed by the ULYSSES HI-SCALE instrument in an effort to determine which constraint creates the cutoff, and to use that constraining parameter to infer additional information about the shock that cannot normally be determined from a single data point, such as the spatial extent of the shock or how long the shock has been propagating through the heliosphere before it encounters the spacecraft. Shocks observed by multiple spacecraft will be of particular interest, as their parameters will be better constrained than those of shocks observed by only one spacecraft. To achieve these goals, the simulation will be modified to include the retrodictive approach of Jones (1978) to accurately track time spent downstream while maintaining, to a large degree, the large dynamic range and short run times that make this type of simulation so attractive. This work is inspired by examinations of acceleration cutoffs in SEP events performed by various authors (see Li et al., 2009, and references therein), and it is hoped that this work will pave the way for a multi-species analysis similar to theirs that should greatly enhance the information one can derive about shocks from individual observations.
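The role of such constraints can be illustrated with a toy first-order Fermi model, not the Summerlin & Baring code itself: each shock-crossing cycle multiplies a particle's energy by a fixed factor, escape downstream with a fixed probability per cycle yields a power law, and capping the number of cycles (a crude stand-in for a finite shock lifetime or size) imposes a high-energy cutoff. All parameter values below are invented for illustration.

```python
import random

def spectrum(n_particles, gain=0.1, p_esc=0.1, max_cycles=None, seed=1):
    """Final energies (in units of injection energy) of test particles."""
    rng = random.Random(seed)
    energies = []
    for _ in range(n_particles):
        energy, cycles = 1.0, 0
        while rng.random() > p_esc:          # particle survives to recross
            if max_cycles is not None and cycles >= max_cycles:
                break                        # temporal/spatial constraint
            energy *= 1.0 + gain
            cycles += 1
        energies.append(energy)
    return energies

free = spectrum(50_000)                      # unconstrained: pure power law
capped = spectrum(50_000, max_cycles=30)     # constrained: cutoff appears
print(max(capped) <= 1.1 ** 30 + 1e-9)       # True: nothing beyond the cap
print(max(free) > max(capped))
```

In the unconstrained run the predicted power-law index follows from the ratio of the per-cycle escape probability to the per-cycle energy gain, while the capped run truncates the spectrum at (1 + gain) raised to the maximum cycle count, mimicking the observed cutoff above 60 keV.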
Dosimetric advantages of IMPT over IMRT for laser-accelerated proton beams
NASA Astrophysics Data System (ADS)
Luo, W.; Li, J.; Fourkal, E.; Fan, J.; Xu, X.; Chen, Z.; Jin, L.; Price, R.; Ma, C.-M.
2008-12-01
As a clinical application of an exciting scientific breakthrough, a compact and cost-efficient proton therapy unit using high-power laser acceleration is being developed at Fox Chase Cancer Center. The significance of this application depends on whether or not it can yield dosimetric superiority over intensity-modulated radiation therapy (IMRT). The goal of this study is to show how laser-accelerated proton beams with broad energy spreads can be optimally used for proton therapy, including intensity-modulated proton therapy (IMPT), and achieve dosimetric superiority over IMRT for prostate cancer. Desired energies and spreads with a varying δE/E were selected with the particle selection device and used to generate spread-out Bragg peaks (SOBPs). Proton plans were generated on an in-house Monte Carlo-based inverse-planning system. Fifteen prostate IMRT plans previously used for patient treatment were included for comparison. Identical dose prescriptions, beam arrangements and consistent dose constraints were used for IMRT and IMPT plans to show the dosimetric differences caused only by the different physical characteristics of proton and photon beams. Different optimization constraints and beam arrangements were also used to find optimal IMPT plans. The results show that conventional proton therapy (CPT) plans without intensity modulation were not superior to IMRT, but IMPT can generate better proton plans if appropriate beam setup and optimization are used. Compared to IMRT, IMPT can reduce the target dose heterogeneity ((D5-D95)/D95) by up to 56%. The volume receiving 65 Gy and higher (V65) for the bladder and the rectum can be reduced by up to 45% and 88%, respectively, while the volume receiving 40 Gy and higher (V40) for the bladder and the rectum can be reduced by up to 49% and 68%, respectively. IMPT can also reduce the whole-body non-target tissue dose by up to 61%, or a factor of 2.5.
This study has shown that the laser accelerator under development has a potential to generate high-quality proton beams for cancer treatment. Significant improvement in target dose uniformity and normal tissue sparing as well as in reduction of whole body dose can be achieved by IMPT with appropriate optimization and beam setup.
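The plan-comparison metrics quoted above are straightforward to compute from a voxel dose array. The sketch below uses a made-up six-voxel dose list, not the study's patient data: V65 is the fraction of the structure's volume receiving at least 65 Gy, and target heterogeneity is (D5 - D95)/D95, where D5 and D95 are the doses covering the hottest and coldest 5% of the volume.

```python
import numpy as np

def v_dose(dose, threshold_gy):
    """Fraction of voxels receiving >= threshold_gy (e.g. V65, V40)."""
    return float(np.mean(dose >= threshold_gy))

def heterogeneity(dose):
    """(D5 - D95) / D95; D5 is the 95th dose percentile (hottest 5%)."""
    d5, d95 = np.percentile(dose, [95, 5])
    return (d5 - d95) / d95

rectum = np.array([20.0, 30.0, 40.0, 55.0, 66.0, 70.0])   # Gy, hypothetical
target = np.array([74.0, 76.0, 78.0, 79.0, 80.0, 81.0])
print(round(v_dose(rectum, 65.0), 3))      # 0.333: two of six voxels
print(round(heterogeneity(target), 3))     # 0.084
```

A real comparison would compute these per structure from the full 3-D dose grids of the IMRT and IMPT plans; the relative reductions quoted in the abstract are ratios of such values.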
González-Ramírez, Laura R.; Ahmed, Omar J.; Cash, Sydney S.; Wayne, C. Eugene; Kramer, Mark A.
2015-01-01
Epilepsy—the condition of recurrent, unprovoked seizures—manifests in brain voltage activity with characteristic spatiotemporal patterns. These patterns include stereotyped semi-rhythmic activity produced by aggregate neuronal populations, and organized spatiotemporal phenomena, including waves. To assess these spatiotemporal patterns, we develop a mathematical model consistent with the observed neuronal population activity and determine analytically the parameter configurations that support traveling wave solutions. We then utilize high-density local field potential data recorded in vivo from human cortex preceding seizure termination from three patients to constrain the model parameters, and propose basic mechanisms that contribute to the observed traveling waves. We conclude that a relatively simple and abstract mathematical model consisting of localized interactions between excitatory cells with slow adaptation captures the quantitative features of wave propagation observed in the human local field potential preceding seizure termination. PMID:25689136
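A generic model of the class described, localized excitatory coupling with slow adaptation, can be simulated directly to exhibit a traveling wave. The discretization and every parameter value below are invented for illustration; they are not the model or values fit to the patient recordings.

```python
import numpy as np

# u_t = -u + g * (W f(u)) - q,   tau_q * q_t = beta * u - q
# f is a hard firing threshold; W is a normalized exponential kernel.
n, dx, dt, steps = 300, 0.5, 0.05, 4000
x = np.arange(n) * dx
W = np.exp(-np.abs(x[:, None] - x) / 2.0) * dx / 4.0   # kernel mass ~1
g, theta, beta, tau_q = 2.0, 0.3, 1.0, 15.0

u, q = np.zeros(n), np.zeros(n)
u[:10] = 1.0                          # localized activation at the left edge
reached = np.zeros(n, dtype=bool)     # sites that have ever crossed threshold
for _ in range(steps):
    f = (u > theta).astype(float)
    u += dt * (-u + g * (W @ f) - q)
    q += dt * (beta * u - q) / tau_q
    reached |= u > theta

print(reached[n // 2], reached[-5])   # the front sweeps across the domain
```

Constraining g, theta, and the kernel width against recorded wave speeds is the spirit of the parameter-fitting step the abstract describes.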
Gravitational-wave cosmology across 29 decades in frequency
Lasky, Paul D.; Mingarelli, Chiara M. F.; Smith, Tristan L.; ...
2016-03-31
Here, quantum fluctuations of the gravitational field in the early Universe, amplified by inflation, produce a primordial gravitational-wave background across a broad frequency band. We derive constraints on the spectrum of this gravitational radiation, and hence on theories of the early Universe, by combining experiments that cover 29 orders of magnitude in frequency. These include Planck observations of cosmic microwave background temperature and polarization power spectra and lensing, together with baryon acoustic oscillations and big bang nucleosynthesis measurements, as well as new pulsar timing array and ground-based interferometer limits. While individual experiments constrain the gravitational-wave energy density in specific frequency bands, the combination of experiments allows us to constrain cosmological parameters, including the inflationary spectral index n_t and the tensor-to-scalar ratio r. Results from individual experiments include the most stringent nanohertz limit of the primordial background to date from the Parkes Pulsar Timing Array, Ω_GW(f) < 2.3 × 10^-10. Observations of the cosmic microwave background alone limit the gravitational-wave spectral index at 95% confidence to n_t ≲ 5 for a tensor-to-scalar ratio of r = 0.11. However, the combination of all the above experiments limits n_t < 0.36. Future Advanced LIGO observations are expected to further constrain n_t < 0.34 by 2020. When cosmic microwave background experiments detect a nonzero r, our results will imply even more stringent constraints on n_t and, hence, theories of the early Universe.
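The interplay between bands can be sketched with the power-law form Ω_GW(f) = Ω_ref (f/f_ref)^n_t: anchoring the amplitude at CMB scales and evaluating at pulsar-timing frequencies shows why strongly blue-tilted spectra collide with the nanohertz bound. The anchor amplitude and pivot frequency below are rough illustrative placeholders, not the paper's fitted values; only the PPTA bound is taken from the abstract.

```python
def omega_gw(f_hz, omega_ref, f_ref_hz, n_t):
    """Power-law spectrum Omega_GW(f) = Omega_ref * (f / f_ref)**n_t."""
    return omega_ref * (f_hz / f_ref_hz) ** n_t

PPTA_BOUND = 2.3e-10                   # nanohertz Omega_GW limit (from above)
F_PTA = 2.8e-9                         # Hz, representative PTA frequency
OMEGA_CMB, F_CMB = 1.0e-15, 1.0e-17    # assumed anchor at CMB scales

for n_t in (0.0, 0.36, 0.8):
    allowed = omega_gw(F_PTA, OMEGA_CMB, F_CMB, n_t) < PPTA_BOUND
    print(n_t, allowed)
```

With this (illustrative) anchor, a scale-invariant or mildly blue spectrum clears the PTA bound, while a strongly blue tilt does not, which is the qualitative mechanism behind the combined n_t limits quoted above.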
NASA Astrophysics Data System (ADS)
Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.
2012-04-01
A key question in climate science is to quantify the sensitivity of the climate system to perturbation in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9˚C with 90% credible interval (C.I.) ranging from 1.2 to 2.9˚C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6˚C with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. Also the transient climate response (TCR) is estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1˚C, while the 90% C.I. 
is significantly broader, ranging from 1.1 to 3.4 ˚C, if only data up to and including year 2000 are used.
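The link between a forcing series, an energy-balance model, and the ECS can be sketched with a one-box model, C dT/dt = F(t) - λT, where ECS = F_2x/λ. This is a deliberately minimal caricature of the set-up described above (which couples an energy-balance model to a stochastic model in a Bayesian framework); the heat capacity and forcing ramp are invented for the example.

```python
F2X = 3.7     # W m^-2, forcing from doubled CO2
C_HEAT = 8.0  # W yr m^-2 K^-1, effective ocean heat capacity (assumed)

def simulate(lam, forcing_series, dt=1.0):
    """Explicit-Euler integration of C dT/dt = F - lam * T, annual steps."""
    temp, out = 0.0, []
    for f in forcing_series:
        temp += dt * (f - lam * temp) / C_HEAT
        out.append(temp)
    return out

lam = F2X / 1.9                                  # lambda implied by ECS = 1.9 K
forcing = [F2X * i / 150 for i in range(151)]    # linear ramp up to 2xCO2
temps = simulate(lam, forcing)
print(round(F2X / lam, 2))      # 1.9: the ECS implied by lambda
print(temps[-1] < F2X / lam)    # True: transient warming lags equilibrium
```

The lag between the transient response and equilibrium is why the ocean heat content record is so informative: it measures the energy still in transit, which is what narrows the posterior on λ and hence on the ECS.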
Spin vectors in the Koronis family: III. (832) Karin
NASA Astrophysics Data System (ADS)
Slivan, Stephen M.; Molnar, Lawrence A.
2012-08-01
Studies of asteroid families constrain models of asteroid collisions and evolution processes, and the Karin cluster within the Koronis family is among the youngest families known (Nesvorný, D., Bottke, Jr., W.F., Dones, L., Levison, H.F. [2002]. Nature 417, 720-722). (832) Karin itself is by far the largest member of the Karin cluster, thus knowledge of Karin's spin vector is important to constrain family formation and evolution models that include spin, and to test whether its spin properties are consistent with the Karin cluster being a very young family. We observed rotation lightcurves of Karin during its four consecutive apparitions in 2006-2009, and combined the new observations with previously published lightcurves to determine its spin vector orientation and preliminary model shape. Karin is a prograde rotator with a period of (18.352 ± 0.003) h, spin obliquity near (42 ± 5)°, and pole ecliptic longitude near either (52 ± 5)° or (230 ± 5)°. The spin vector and shape results for Karin will constrain models of family formation that include spin properties; in the meantime we briefly discuss Karin's own spin in the context of those of other members of the Karin cluster and the parent body's siblings in the Koronis family.
Hard x-ray optics: from HEFT to NuSTAR
NASA Astrophysics Data System (ADS)
Koglin, Jason E.; Chen, C. M. H.; Chonko, Jim C.; Christensen, Finn E.; Craig, William W.; Decker, Todd R.; Hailey, Charles J.; Harrison, Fiona A.; Jensen, Carsten P.; Madsen, Kristin K.; Pivovaroff, Michael J.; Stern, Marcela; Windt, David L.; Ziegler, Eric
2004-10-01
Focusing optics are now poised to dramatically improve the sensitivity and angular resolution at energies above 10 keV to levels that were previously unachievable by the past generation of background limited collimated and coded-aperture instruments. Active balloon programs (HEFT), possible Explorer-class satellites (NuSTAR - currently under Phase A study), and major X-ray observatories (Con-X HXT) using focusing optics will play a major role in future observations of a wide range of objects including young supernova remnants, active galactic nuclei, and galaxy clusters. These instruments call for low cost, grazing incidence optics coated with depth-graded multilayer films that can be nested to achieve large collecting areas. Our approach to building such instruments is to mount segmented mirror shells with our novel error-compensating, monolithic assembly and alignment (EMAAL) procedure. This process involves constraining the mirror segments to successive layers of graphite rods that are precisely machined to the required conic-approximation Wolter-I geometry. We present results of our continued development of thermally formed glass substrates that have been used to build three HEFT telescopes and are proposed for NuSTAR. We demonstrate how our experience in manufacturing complete HEFT telescopes, as well as our experience developing higher performance prototype optics, will lead to the successful production of telescopes that meet the NuSTAR design goals.
NASA Technical Reports Server (NTRS)
Duffy, J.; Crane, C.
1993-01-01
The Center for Intelligent Machines and Robotics (CIMAR) of the University of Florida, in conjunction with Rockwell International, is developing an electro-mechanical device called a Kinestatic Platform (KP) for aerospace applications. The goal of the current project is to develop a prototype KP which is capable of manipulating a 50 lb. payload. This prototype will demonstrate the feasibility of implementing a scaled-up version to perform high-precision manipulation of distributed systems and to control contact forces and allowable motions (rotations and translations) simultaneously and independently in a six-dimensional, partially constrained environment; this capability is defined here as Kinestatic Control. The objectives of the Phase 1 effort were as follows: (1) Identify specific NASA applications where the KP technology can be applied. (2) Select one application for development. (3) Develop a conceptual design of the KP specifically for the selected application. This includes the steps of developing a set of detailed performance criteria, establishing and making selection of the mechanism design parameters, and evaluating the expected system response. (4) Develop a computer graphics animation of the KP as it performs the selected application. This report will proceed by providing a technical description of the KP, followed by how each of these objectives was addressed.
Simulation of solution phase electron transfer in a compact donor-acceptor dyad.
Kowalczyk, Tim; Wang, Lee-Ping; Van Voorhis, Troy
2011-10-27
Charge separation (CS) and charge recombination (CR) rates in photosynthetic architectures are difficult to control, yet their ratio can make or break photon-to-current conversion efficiencies. A rational design approach to the enhancement of CS over CR requires a mechanistic understanding of the underlying electron-transfer (ET) process, including the role of the environment. Toward this goal, we introduce a QM/MM protocol for ET simulations and use it to characterize CR in the formanilide-anthraquinone dyad (FAAQ). Our simulations predict fast recombination of the charge-transfer excited state, in agreement with recent experiments. The computed electronic couplings show an electronic state dependence and are weaker in solution than in the gas phase. We explore the role of cis-trans isomerization on the CR kinetics, and we find strong correlation between the vertical energy gaps of the full simulations and a collective solvent polarization coordinate. Our approach relies on constrained density functional theory to obtain accurate diabatic electronic states on the fly for molecular dynamics simulations, while orientational and electronic polarization of the solvent is captured by a polarizable force field based on a Drude oscillator model. The method offers a unified approach to the characterization of driving forces, reorganization energies, electronic couplings, and nonlinear solvent effects in light-harvesting systems.
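The quantities the simulations characterize, driving force, reorganization energy, and electronic coupling, combine in the standard nonadiabatic Marcus rate expression. A quick sketch with invented values (not FAAQ's computed ones) shows the inverted-region behavior relevant to suppressing fast charge recombination.

```python
import math

HBAR = 6.582e-16      # eV s
KB_T = 0.0257         # eV at ~298 K

def marcus_rate(h_ab, lam, dg):
    """k_ET = (2 pi/hbar) |Hab|^2 (4 pi lam kT)^(-1/2)
              * exp(-(dG + lam)^2 / (4 lam kT)); energies in eV, k in s^-1."""
    prefac = (2 * math.pi / HBAR) * h_ab ** 2 \
             / math.sqrt(4 * math.pi * lam * KB_T)
    return prefac * math.exp(-(dg + lam) ** 2 / (4 * lam * KB_T))

# recombination is fastest at the top of the Marcus curve (-dG = lam)
k_activationless = marcus_rate(h_ab=0.01, lam=0.8, dg=-0.8)
k_inverted       = marcus_rate(h_ab=0.01, lam=0.8, dg=-1.6)
print(k_activationless > k_inverted)    # True: inverted-region slowdown
```

In the paper's framework, the constrained-DFT diabats and the polarizable solvent supply dG, lam, and Hab on the fly instead of these fixed numbers, but the same three quantities control the CS/CR competition.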
Puebla-Hellmann, Gabriel; Mayor, Marcel; Lörtscher, Emanuel
2016-01-01
On the road towards the long-term goal of the NCCR Molecular Systems Engineering to create artificial molecular factories, we aim at introducing a compartmentalization strategy based on solid-state silicon technology targeting zeptoliter reaction volumes and simultaneous electrical contact to ensembles of well-oriented molecules. This approach allows the probing of molecular building blocks under a controlled environment prior to their use in a complex molecular factory. Furthermore, these ultra-sensitive electrical conductance measurements allow molecular responses to a variety of external triggers to be used as sensing and feedback mechanisms. So far, we demonstrate the proof-of-concept by electrically contacting self-assembled monolayers of alkane-dithiols as an established test system. Here, the molecular films are laterally constrained by a circular dielectric confinement, forming a so-called 'nanopore'. Device yields above 85% are consistently achieved down to sub-50 nm nanopore diameters. This generic platform will be extended to create distributed, cascaded reactors with individually addressable reaction sites, including interconnecting micro-fluidic channels for electrochemical communication among nanopores and sensing sites for reaction control and feedback. In this scientific outlook, we will sketch how such a solid-state nanopore concept can be used to study various aspects of molecular compounds tailored for operation in a molecular factory.
The high price of "free" trade: U.S. trade agreements and access to medicines.
Lopert, Ruth; Gleeson, Deborah
2013-01-01
The United States' pursuit of increasingly TRIPS-Plus levels of intellectual property protection for medicines in bilateral and regional trade agreements is well recognized. Less so, however, are U.S. efforts through these agreements to influence and constrain the pharmaceutical coverage programs of its trading partners. Although arguably unsuccessful in the Australia-U.S. Free Trade Agreement (AUSFTA), the U.S. nevertheless succeeded in its bilateral FTA with South Korea (KORUS) in establishing prescriptive provisions pertaining to the operation of coverage and reimbursement programs for medicines and medical devices, which have the potential to adversely impact future access in that country. More recently, draft texts leaked from the current Trans Pacific Partnership Agreement (TPPA) negotiations show that U.S. objectives include not only AUSFTA-Plus and KORUS-Plus IP provisions but also ambitious inroads into the domestic health programs of its TPPA partners. This highlights the apparent conflict between trade goals, pursued through multilateral legal instruments to promote economic "health," and public health objectives, such as the development of treatments for neglected diseases, the pursuit of efficiency and equity in priority setting, and the procurement of medicines at prices that reflect their therapeutic value and facilitate affordable access. © 2013 American Society of Law, Medicine & Ethics, Inc.
BBN-Based Portfolio Risk Assessment for NASA Technology R&D Outcome
NASA Technical Reports Server (NTRS)
Geuther, Steven C.; Shih, Ann T.
2016-01-01
The NASA Aeronautics Research Mission Directorate (ARMD) vision falls into six strategic thrusts aimed at supporting the challenges of the Next Generation Air Transportation System (NextGen). In order to achieve the goals of the ARMD vision, the Airspace Operations and Safety Program (AOSP) is committed to developing and delivering new technologies. To meet the dual challenges of constrained resources and timely technology delivery, program portfolio risk assessment is critical for communication and decision-making. This paper describes how a Bayesian Belief Network (BBN) is applied to assess the probability of a technology meeting the expected outcome. The network takes into account the different risk factors of technology development and implementation phases. The use of BBNs allows for all technologies of projects in a program portfolio to be separately examined and compared. In addition, the technology interaction effects are modeled through the application of object-oriented BBNs. The paper discusses the development of simplified project risk BBNs and presents various risk results. The results presented include the probability of project risks not meeting success criteria, the risk drivers under uncertainty via sensitivity analysis, and what-if analysis. Finally, the paper shows how program portfolio risk can be assessed using risk results from BBNs of projects in the portfolio.
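The core BBN computation, marginalizing the outcome probability over its parent risk nodes, can be sketched in a few lines. The structure, node names, and all probabilities below are illustrative assumptions, not ARMD/AOSP values:

```python
# Minimal Bayesian Belief Network sketch: two independent parent risk nodes,
# Development (D) and Implementation (I), feeding an Outcome node O
# ("technology meets the expected outcome").
P_D = {"low": 0.7, "high": 0.3}            # P(development-phase risk)
P_I = {"low": 0.6, "high": 0.4}            # P(implementation-phase risk)
# Conditional probability table P(O = met | D, I)
P_O = {("low", "low"): 0.95, ("low", "high"): 0.70,
       ("high", "low"): 0.60, ("high", "high"): 0.25}

def prob_outcome_met():
    """Marginalize over the parents: P(O) = sum_{d,i} P(O|d,i) P(d) P(i)."""
    return sum(P_O[(d, i)] * P_D[d] * P_I[i] for d in P_D for i in P_I)
```

Sensitivity analysis of the kind described in the paper then amounts to perturbing individual CPT entries and observing the change in `prob_outcome_met()`.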
"Don't lock me out": life-story interviews of family business owners facing succession.
Solomon, Alexandra; Breunlin, Douglas; Panattoni, Katherine; Gustafson, Mara; Ransburg, David; Ryan, Carol; Hammerman, Thomas; Terrien, Jean
2011-06-01
This qualitative study used a grounded theory methodology to analyze life-story interviews obtained from 10 family business owners regarding their experiences in their businesses with the goal of understanding the complexities of family business succession. The grounded theory that emerged from this study is best understood as a potential web of constraints that can bear on the succession process. Coding of these interviews revealed four key influences, which seem to have the potential to facilitate or constrain the family business owner's approach to succession. Influence 1, "The business within," captures intrapsychic dynamics of differentiation and control. Influence 2, "The marriage," addresses how traditional gender roles shape succession. Influence 3, "The adult children," examines the role of having a natural (accidental, organic, passively groomed) successor. Influence 4, "The vision of retirement," captures the impact of owners' notions of life post-succession. Family therapists frequently encounter family systems in which the family business is facing succession. Even if succession is not the presenting problem, and even if the business owner is in the indirect (rather than direct) system, this research reminds clinicians of the importance of the family's story about the family business. Therefore, clinical implications and recommendations are included. 2011 © FPI, Inc.
NASA Astrophysics Data System (ADS)
Mendes, Rodolfo M.; de Andrade, Márcio Roberto M.; Tomasella, Javier; de Moraes, Márcio Augusto E.; Scofield, Graziela B.
2018-01-01
Located in a mountainous area of south-eastern Brazil, the municipality of Campos do Jordão has been hit by several landslides in recent history. Among those events, the landslides of early 2000 were significant in terms of the number of deaths (10), the population affected, and the destruction of infrastructure caused. The purpose of this study is to assess the relative contribution of natural and human factors to triggering the landslides of the 2000 event. To achieve this goal, a detailed geotechnical survey was conducted in three representative slopes of the area to obtain geotechnical parameters needed for slope stability analysis. Then, a set of numerical experiments with GEO-SLOPE software was designed, including separate natural and anthropic factors. Results showed that natural factors, that is, high-intensity rainfall and geotechnical conditions, were not severe enough to trigger landslides in the study area and that human disturbance was entirely responsible for the landslide events of 2000. Since the anthropic effects used in the simulations are typical of hazardous urban areas in Brazil, we concluded that the implementation of public policies that constrain the occupation of landslide-susceptible areas is urgently needed.
Finding Mass Constraints Through Third Neutrino Mass Eigenstate Decay
NASA Astrophysics Data System (ADS)
Gangolli, Nakul; de Gouvêa, André; Kelly, Kevin
2018-01-01
In this paper we aim to constrain the decay parameter for the third neutrino mass utilizing already accepted constraints on the other mixing parameters from the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The main purpose of this project is to determine the parameters that will allow the Jiangmen Underground Neutrino Observatory (JUNO) to observe a decay parameter with some statistical significance. Another goal is to determine the parameters that JUNO could detect in the case that the third neutrino mass is lighter than the first two neutrino species. We also replicate the results that were found in the JUNO Conceptual Design Report (CDR). By utilizing χ² analysis, constraints have been placed on the mixing angles, mass squared differences, and the third neutrino decay parameter. These statistical tests take into account background noise and normalization corrections, and thus the finalized bounds are a good approximation of the true bounds that JUNO can detect. If the decay parameter is not included in our models, the 99% confidence interval lies within the bounds 0 s to 2.80x10-12 s. However, if we account for a decay parameter of 3x10-5 eV2, then the 99% confidence interval lies within 8.73x10-12 s to 8.73x10-11 s.
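The χ² confidence-interval construction described above can be sketched as a one-parameter scan: for a single parameter, a 99% interval corresponds to Δχ² < 6.635 relative to the global minimum. The parabolic χ² profile and grid below are invented for illustration and are not JUNO sensitivities:

```python
# One-parameter chi-squared scan; for 1 degree of freedom, a 99% confidence
# interval corresponds to Delta-chi2 < 6.635 relative to the global minimum.
DELTA_CHI2_99 = 6.635

def confidence_interval(grid, chi2):
    """Endpoints of the grid region allowed at 99% confidence."""
    chi2_min = min(chi2)
    allowed = [g for g, c in zip(grid, chi2) if c - chi2_min < DELTA_CHI2_99]
    return min(allowed), max(allowed)

# Parabolic chi2 around a best-fit decay parameter of 5.0 (arbitrary units,
# 1-sigma width 1.0); the 99% interval then spans roughly +/- 2.58 sigma.
grid = [i * 0.1 for i in range(101)]
chi2 = [(g - 5.0) ** 2 for g in grid]
lo, hi = confidence_interval(grid, chi2)
```

In the full analysis the χ² would additionally include background and normalization nuisance terms, which widen the allowed region.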
Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin
2017-03-01
To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including space-varying signal to noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from original fluorescence microscopy images, eliminating the need for pre- and post-processing steps such as noise removal and segmentation refinement used with the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. This algorithm achieves the goal of segmentation via design of an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data prove the efficacy of this method in comparison to state-of-the-art enhancement and segmentation methods. The algorithmic simplicity, freedom from a priori probabilistic information about the noise, and structural definition give this algorithm a wide potential range of applications where, for example, structural complexity significantly complicates the segmentation problem.
NASA Astrophysics Data System (ADS)
Pan, Ming; Troy, Tara; Sahoo, Alok; Sheffield, Justin; Wood, Eric
2010-05-01
Documentation of the water cycle and its evolution over time is a primary scientific goal of the Global Energy and Water Cycle Experiment (GEWEX) and fundamental to assessing global change impacts. In developed countries, observation systems that include in-situ, remote sensing and modeled data can provide long-term, consistent and generally high quality datasets of water cycle variables. The export of these technologies to less developed regions has been rare, but it is these regions where information on water availability and change is probably most needed in the face of regional environmental change due to climate, land use and water management. In these data sparse regions, in situ data alone are insufficient to develop a comprehensive picture of how the water cycle is changing, and strategies that merge in-situ, model and satellite observations within a framework that results in consistent water cycle records is essential. Such an approach is envisaged by the Global Earth Observation System of Systems (GEOSS), but has yet to be applied. The goal of this study is to quantify the variation and changes in the global water cycle over the past 50 years. We evaluate the global water cycle using a variety of independent large-scale datasets of hydrologic variables that are used to bridge the gap between sparse in-situ observations, including remote-sensing based retrievals, observation-forced hydrologic modeling, and weather model reanalyses. A data assimilation framework that blends these disparate sources of information together in a consistent fashion with attention to budget closure is applied to make best estimates of the global water cycle and its variation. The framework consists of a constrained Kalman filter applied to the water budget equation.
With imperfect estimates of the water budget components, the equation additionally has an error residual term that is redistributed across the budget components using error statistics, which are estimated from the uncertainties among data products. The constrained Kalman filter treats the budget closure constraint as a perfect observation within the assimilation framework. Precipitation is estimated using gauge observations, reanalysis products, and remote sensing products for below 50°N. Evapotranspiration is estimated in a number of ways: from the VIC land surface hydrologic model forced with a hybrid reanalysis-observation global forcing dataset, from remote sensing retrievals based on a suite of energy balance and process based models, and from an atmospheric water budget approach using reanalysis products for the atmospheric convergence and storage terms and our best estimate for precipitation. Terrestrial water storage changes, including surface and subsurface changes, are estimated using estimates from both VIC and the GRACE remote sensing retrievals. From these components, discharge can then be calculated as a residual of the water budget and compared with gauge observations to evaluate the closure of the water budget. Through the use of these largely independent data products, we estimate both the mean seasonal cycle of the water budget components and their uncertainties for a set of 20 large river basins across the globe. We particularly focus on three regions of interest in global change studies: the Northern Eurasian region which is experiencing rapid change in terrestrial processes; the Amazon which is a central part of the global water, energy and carbon budgets; and Africa, which is predicted to face some of the most critical challenges for water and food security in the coming decades.
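The closure-enforcing step, redistributing the budget residual across components in proportion to their error variances (equivalent to assimilating the closure constraint P - ET - dS - Q = 0 as a perfect, noiseless observation), can be sketched as follows. The component values and variances are illustrative, not estimates from the study:

```python
# Budget-closure adjustment sketch: the Kalman gain for a perfect observation
# of the constraint a @ x = 0 redistributes the residual by error variance.

def close_budget(x, var, a=(1.0, -1.0, -1.0, -1.0)):
    """x: [P, ET, dS, Q]; var: error variances; a: closure coefficients.
    Returns adjusted components satisfying sum(a_i * x_i) = 0."""
    residual = sum(ai * xi for ai, xi in zip(a, x))
    s = sum(ai * ai * vi for ai, vi in zip(a, var))    # a @ Sigma @ a.T
    gain = [vi * ai / s for ai, vi in zip(a, var)]     # Sigma @ a.T / s
    return [xi - gi * residual for xi, gi in zip(x, gain)]

# Illustrative monthly budget (mm) with a 5 mm closure error; the most
# uncertain component (P) absorbs the largest share of the correction.
x_adj = close_budget([100.0, 60.0, 5.0, 30.0], [16.0, 9.0, 4.0, 9.0])
closure = x_adj[0] - x_adj[1] - x_adj[2] - x_adj[3]   # ~0 after adjustment
```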
Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2
NASA Astrophysics Data System (ADS)
Ni, Dongdong
2018-05-01
Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. 
In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.
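The link between core properties and the NMOI can be illustrated with an even cruder model than the paper's radial density profiles: a non-rotating sphere with two constant-density layers. The densities and core radius fraction below are arbitrary:

```python
import math

# Two-layer sphere sketch: I = (8*pi/15) * sum(rho_i * dR^5) over layers,
# M = (4*pi/3) * sum(rho_i * dR^3), and NMOI = I / (M R^2).

def nmoi_two_layer(rc, rho_c, rho_e, R=1.0):
    """Normalized moment of inertia of a two-layer constant-density sphere.
    rc: core radius fraction; rho_c, rho_e: core/envelope densities."""
    Rc = rc * R
    I = (8.0 * math.pi / 15.0) * (rho_c * Rc**5 + rho_e * (R**5 - Rc**5))
    M = (4.0 * math.pi / 3.0) * (rho_c * Rc**3 + rho_e * (R**3 - Rc**3))
    return I / (M * R**2)

# A uniform sphere recovers the classical 2/5; concentrating mass in the
# core pulls the NMOI below 0.4, as for Jupiter's ~0.275.
uniform = nmoi_two_layer(0.2, 1.0, 1.0)
centrally_condensed = nmoi_two_layer(0.2, 20.0, 1.0)
```

The paper's values (~0.275) additionally reflect the continuous density profile and rotational figure, which this sketch ignores.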
Antunes, J; Debut, V
2017-02-01
Most musical instruments consist of dynamical subsystems connected at a number of constraining points through which energy flows. For physical sound synthesis, one important difficulty deals with enforcing these coupling constraints. While standard techniques include the use of Lagrange multipliers or penalty methods, in this paper, a different approach is explored, the Udwadia-Kalaba (U-K) formulation, which is rooted on analytical dynamics but avoids the use of Lagrange multipliers. This general and elegant formulation has been nearly exclusively used for conceptual systems of discrete masses or articulated rigid bodies, namely, in robotics. However its natural extension to deal with continuous flexible systems is surprisingly absent from the literature. Here, such a modeling strategy is developed and the potential of combining the U-K equation for constrained systems with the modal description is shown, in particular, to simulate musical instruments. Objectives are twofold: (1) Develop the U-K equation for constrained flexible systems with subsystems modelled through unconstrained modes; and (2) apply this framework to compute string/body coupled dynamics. This example complements previous work [Debut, Antunes, Marques, and Carvalho, Appl. Acoust. 108, 3-18 (2016)] on guitar modeling using penalty methods. Simulations show that the proposed technique provides similar results with a significant improvement in computational efficiency.
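The U-K equation gives the constrained accelerations in closed form, q̈_c = a + M^(-1/2) (A M^(-1/2))^+ (b - A a), where a = M^(-1) F are the unconstrained accelerations and A q̈ = b encodes the constraints. A minimal numerical sketch on a toy system (two rigidly coupled unit masses, our assumption, not the paper's string/body model):

```python
import numpy as np

def inv_sqrt(M):
    """Symmetric inverse square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def uk_acceleration(M, F, A, b):
    """Udwadia-Kalaba constrained accelerations (no Lagrange multipliers)."""
    a = np.linalg.solve(M, F)                    # unconstrained accelerations
    Mis = inv_sqrt(M)
    return a + Mis @ np.linalg.pinv(A @ Mis) @ (b - A @ a)

M = np.eye(2)                    # two unit masses
F = np.array([1.0, 0.0])         # external force on mass 1 only
A = np.array([[1.0, -1.0]])      # rigid coupling: q1'' - q2'' = 0
b = np.array([0.0])
qdd = uk_acceleration(M, F, A, b)   # both masses share the force equally
```

For the modal formulation in the paper, M would be the (diagonal) modal mass matrix of the unconstrained subsystems and A would express the coupling-point compatibility in modal coordinates.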
Thompson, R J; Godiksen, L; Hansen, G; Gustafson, D J; Brinkerhoff, D W; Ingle, M D; Rounds, T; Wing, H
1990-01-01
In recent years, sustainability has become one of the most critical concepts in international development and is having a dramatic impact on the way development is conceptualized and carried out. The US Agency for International Development (USAID) is incorporating this concept into its programs and projects. Factors encouraging sustainability of projects and programs include host government policies that support or constrain program objectives, national and/or local commitment to project goals, managerial leadership that helps shape improved policies, collaboration at all staff levels in program management, financial resources that cover program operational costs, appropriate program technology, integration of the program with the social and cultural setting of the country, community involvement in the program, sound environmental management, technical assistance oriented to transferring skills and increasing institutional capacity, perception by the host country that the project is "effective," training provided by the project to transfer skills needed for capacity-building, integration of the program into existing institutional framework, and external political, economic and environmental factors. Impediments to sustainability are often inherent in the donor agency's programming process. This includes the implicit assumption that program objectives can be accomplished in a relatively short time frame, when in fact capacity-building requires a lengthy commitment. USAID professionals are pressured to show near-term results which emphasize outputs rather than purpose and goal-level accomplishments achievable only after extensive effort. The emphasis on obligating money and on the project paper as a sales document leads project designers to talk with a great deal more certainty about project results than is warranted by the complex development situation.
Uncertainty and flexibility should be designed into projects so activities and objectives can change as more information and on-site experience is gained. Instead of outputs, success should be measured in processes that will continue to produce long-term results. Emphasis should be placed on establishing policymaking processes and decision-making procedures in the recipient country that will lead to sound economic policymaking on a continuing basis. Sustainable efforts in agriculture, health, rural development and their evaluation are examined for several USAID projects.
Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.
This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses.
The experimental design was completed by a center-point glass, a Vitreous State Laboratory glass, and replicates of the center point and Vitreous State Laboratory glasses.
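Screening candidate compositions against SCCs, the mixture constraint, and a nonlinear solubility MCC can be sketched as below. The four components, their bounds, and the SO3 solubility model are invented placeholders, not the LAW glass data or the fitted partial quadratic mixture model:

```python
# Feasibility check for a constrained mixture design candidate (toy example).
LOWER = {"SiO2": 0.30, "B2O3": 0.05, "Na2O": 0.05, "SO3": 0.0}
UPPER = {"SiO2": 0.60, "B2O3": 0.20, "Na2O": 0.30, "SO3": 0.02}

def so3_limit(x):
    """Hypothetical solubility limit expressed in the proportions of the
    balance of the glass (mimicking the role of the quadratic mixture model)."""
    rest = 1.0 - x["SO3"]
    return 0.005 + 0.02 * (x["Na2O"] / rest)

def feasible(x):
    in_bounds = all(LOWER[k] <= x[k] <= UPPER[k] for k in x)   # SCCs
    mixture = abs(sum(x.values()) - 1.0) < 1e-9                # sums to one
    return in_bounds and mixture and x["SO3"] <= so3_limit(x)  # nonlinear MCC

ok = feasible({"SiO2": 0.60, "B2O3": 0.20, "Na2O": 0.195, "SO3": 0.005})
too_much_so3 = feasible({"SiO2": 0.60, "B2O3": 0.20, "Na2O": 0.18, "SO3": 0.02})
```

A candidate generator for the layered design would sample the simplex and keep only points passing `feasible` before handing them to the optimal-design software.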
Levay, Adrienne V; Mumtaz, Zubia; Faiz Rashid, Sabina; Willows, Noreen
2013-09-26
Maternal malnutrition in Bangladesh is a persistent health issue and is the product of a number of complex factors, including adherence to food 'taboos' and a patriarchal gender order that limits women's mobility and decision-making. The recent global food price crisis is also negatively impacting poor pregnant women's access to food. It is believed that those who are most acutely affected by rising food prices are the urban poor. While there is an abundance of useful quantitative research centered on maternal nutrition and food insecurity measurements in Bangladesh, missing is an understanding of how food insecurity is experienced by people who are most vulnerable, the urban ultra-poor. In particular, little is known of the lived experience of food insecurity among pregnant women in this context. This research investigated these lived experiences by exploring food provisioning strategies of urban, ultra-poor, pregnant women. This knowledge is important as discussions surrounding the creation of new development goals are currently underway. Using a focused-ethnographic approach, household food provisioning experiences were explored. Data from participant observation, a focus group discussion and semi-structured interviews were collected in an urban slum in Dhaka, Bangladesh. Interviews were undertaken with 28 participants including 12 pregnant women and new mothers, two husbands, nine non-pregnant women, and five health care workers. The key findings are: 1) women were aware of the importance of good nutrition and demonstrated accurate, biomedically-based knowledge of healthy eating practices during pregnancy; 2) the normative gender rules that have traditionally constrained women's access to nutritional resources are relaxing in the urban setting; however 3) women are challenged in accessing adequate quality and quantities of food due to the increase in food prices at the market. 
Rising food prices and resultant food insecurity due to insufficient incomes are negating the recent efforts that have increased women's knowledge of healthy eating during pregnancy and their gendered empowerment. In order to maintain the gains in nutritional knowledge and women's increased mobility and decision-making capacity, policy must also consider the global political economy of food in the creation of the new development goals.
AIDA DART asteroid deflection test: Planetary defense and science objectives
NASA Astrophysics Data System (ADS)
Cheng, Andrew F.; Rivkin, Andrew S.; Michel, Patrick; Atchison, Justin; Barnouin, Olivier; Benner, Lance; Chabot, Nancy L.; Ernst, Carolyn; Fahnestock, Eugene G.; Kueppers, Michael; Pravec, Petr; Rainey, Emma; Richardson, Derek C.; Stickle, Angela M.; Thomas, Cristina
2018-08-01
The Asteroid Impact & Deflection Assessment (AIDA) mission is an international cooperation between NASA and ESA. NASA plans to provide the Double Asteroid Redirection Test (DART) mission which will perform a kinetic impactor experiment to demonstrate asteroid impact hazard mitigation. ESA proposes to provide the Hera mission which will rendezvous with the target to monitor the deflection, perform detailed characterizations, and measure the DART impact outcomes and momentum transfer efficiency. The primary goals of AIDA are (i) to demonstrate the kinetic impact technique on a potentially hazardous near-Earth asteroid and (ii) to measure and characterize the deflection caused by the impact. The AIDA target will be the binary asteroid (65803) Didymos, which is of spectral type Sq, with the deflection experiment to occur in October, 2022. The DART impact on the secondary member of the binary at ∼6 km/s changes the orbital speed and the binary orbit period, which can be measured by Earth-based observatories with telescope apertures as small as 1 m. The DART impact will in addition alter the orbital and rotational states of the Didymos binary, leading to excitation of eccentricity and libration that, if measured by Hera, can constrain internal structure of the target asteroid. Measurements of the DART crater diameter and morphology can constrain target properties like cohesion and porosity based on numerical simulations of the DART impact.
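The first-order arithmetic behind the deflection measurement can be sketched as follows: the target's speed change is Δv = βmU/M (β is the momentum enhancement factor), and for a near-circular orbit an along-track Δv changes the period by ΔT/T ≈ -3Δv/v. All numbers below (β, masses, binary orbital speed) are illustrative assumptions, not mission values:

```python
# Kinetic-impactor deflection sketch with invented parameters.

def delta_v(beta, m_impactor, speed, m_target):
    """Along-track speed change of the target: beta * m * U / M (m/s)."""
    return beta * m_impactor * speed / m_target

def period_change_fraction(dv, orbital_speed):
    """First-order period change of a near-circular orbit: dT/T ~ -3 dv/v."""
    return -3.0 * dv / orbital_speed

# ~500 kg impactor at 6 km/s into a ~5e9 kg secondary, assumed beta = 2
dv = delta_v(beta=2.0, m_impactor=500.0, speed=6000.0, m_target=5e9)
frac = period_change_fraction(dv, orbital_speed=0.17)   # slow binary orbit
```

Even a mm/s-scale Δv produces a percent-level period change in a slow binary orbit, which is why ground-based light-curve timing of the mutual orbit is such a sensitive probe of the deflection.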
Robust synergetic control design under inputs and states constraints
NASA Astrophysics Data System (ADS)
Rastegar, Saeid; Araújo, Rui; Sadati, Jalil
2018-03-01
In this paper, a novel robust-constrained control methodology for discrete-time linear parameter-varying (DT-LPV) systems is proposed based on a synergetic control theory (SCT) approach. It is shown that in DT-LPV systems without uncertainty, and for any unmeasured bounded additive disturbance, the proposed controller accomplishes the goal of stabilising the system by asymptotically driving the error of the controlled variable to a bounded set containing the origin and then maintaining it there. Moreover, given an uncertain DT-LPV system jointly subject to unmeasured and constrained additive disturbances, and constraints in states, input commands and reference signals (set points), invariant set theory is used to find an appropriate polyhedral robust invariant region in which the proposed control framework is guaranteed to robustly stabilise the closed-loop system. Furthermore, this is achieved even for the case of varying non-zero control set points in such uncertain DT-LPV systems. The controller has a simple structure leading to an easy implementation and a non-complex design process. The effectiveness of the proposed method and the implications of the controller design on feasibility and closed-loop performance are demonstrated through application examples on the temperature control of a continuous stirred-tank reactor plant, on the control of a real coupled DC motor plant, and on an open-loop unstable system example.
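The flavor of the synergetic approach can be sketched for a scalar discrete-time system: pick a macro-variable ψ (here the tracking error) and choose the control so ψ decays geometrically. The toy plant below is our assumption and far simpler than the paper's constrained DT-LPV design:

```python
# Scalar plant x[k+1] = a*x[k] + b*u[k]; macro-variable psi = x - r.
# The control is solved from the desired manifold dynamics
# psi[k+1] = rho * psi[k], with 0 <= rho < 1.

def synergetic_control(x, r, a, b, rho=0.5):
    """Control enforcing psi[k+1] = rho * psi[k] for psi = x - r."""
    return (rho * (x - r) + r - a * x) / b

a, b, r = 0.9, 1.0, 2.0     # plant gains and set point (arbitrary)
x = 10.0
for _ in range(30):
    x = a * x + b * synergetic_control(x, r, a, b)
# the tracking error has shrunk by a factor rho at every step
```

The paper's contribution is the robust, constrained version of this idea: bounding the residual error set under disturbances and using invariant sets to certify that states and inputs stay within their limits.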
NASA Technical Reports Server (NTRS)
Malavergne, V.; Brunet, F.; Righter, K.; Zanda, B.; Avril, C.; Borensztajn, S.; Berthet, S.
2012-01-01
Enstatite meteorites are the most reduced naturally occurring materials of the solar system. The cubic monosulfide series with the general formula (Mg,Mn,Ca,Fe)S are common phases in these meteorite groups. The importance of such minerals, their formation, composition and textural relationships for understanding the genesis of enstatite chondrites (EC) and aubrites, has long been recognized (e.g. [1]). However, the mechanisms of formation of these sulfides are still not well constrained, likely because of the multiple possible ways to produce them. We propose to simulate different models of formation in order to check their mineralogical, chemical, and textural relevance. The solubility of sulfur in silicate melts is of primary interest for planetary mantles, particularly for the Earth and Mercury. Indeed, these two planets could have formed, at least partly, from EC materials (e.g. [2, 3, 4]). The sulfur content in silicate melts depends on the melt composition but also on pressure (P), temperature (T) and oxygen fugacity fO2. Unfortunately, there is no model of general validity in a wide range of P-T-fO2-composition which describes precisely the evolution of sulfur content in silicate melts, even if the main trends are now known. The second goal of this study is to constrain the sulfur content in silicate melts under reducing conditions and different temperatures.
Bayesian Chance-Constrained Hydraulic Barrier Design under Geological Structure Uncertainty.
Chitsazan, Nima; Pham, Hai V; Tsai, Frank T-C
2015-01-01
The groundwater community has widely recognized geological structure uncertainty as a major source of model structure uncertainty. Previous studies in aquifer remediation design, however, rarely discuss the impact of geological structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) as a BMA-CC framework to assess the impact of geological structure uncertainty in remediation design. To pursue this goal, the BMA-CC method is compared with traditional CC programming that only considers model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect public supply wells of the Government St. pump station from salt water intrusion in the "1500-foot" sand and the "1700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address geological structure uncertainty, three groundwater models based on three different hydrostratigraphic architectures are developed. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve more than 90% design reliability level. The total amount of injected water from the connector wells is higher than the total pumpage of the protected public supply wells. While reducing the injection rate can be achieved by reducing the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station may not be economically attractive. © 2014, National Ground Water Association.
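The BMA-CC reliability computation, weighting each structure model's chance of satisfying the constraint by its posterior model probability, can be sketched with Monte Carlo sampling. The three models, their weights, and the Gaussian head responses below are illustrative assumptions, not the Baton Rouge models:

```python
import random

# BMA chance-constraint sketch: design reliability is the model-probability-
# weighted chance that the hydraulic-head constraint holds.
random.seed(0)

def reliability(models, weights, threshold, n=100_000):
    """P(head >= threshold) averaged over models with BMA weights.
    models: list of (mean, std) head responses, one per hydrostratigraphy."""
    total = 0.0
    for (mu, sigma), w in zip(models, weights):
        hits = sum(random.gauss(mu, sigma) >= threshold for _ in range(n))
        total += w * hits / n
    return total

models = [(2.0, 0.5), (1.8, 0.7), (2.3, 0.4)]   # three structure models
weights = [0.5, 0.3, 0.2]                        # posterior model probabilities
r = reliability(models, weights, threshold=1.5)
```

Traditional CC programming corresponds to keeping only the single best model (weight 1.0), which is how geological structure uncertainty ends up understated.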
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so simulation results obtained with the optimal parameters may fail to satisfy conditions the model must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this optimization problem with constraints. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric based on global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are set to be approximately equal to 240 Wm-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, clearly better than the annual-average FLUT obtained with the default parameters. FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
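A minimal version of the constrained tuning problem can be written as an equality-constrained quadratic program solved via Lagrange multipliers through its KKT system. The toy objective and constraint below stand in for the paper's synthesized metric and radiation-balance condition:

```python
import numpy as np

# Lagrange-multiplier sketch:
#   minimize (1/2) p^T H p - g^T p   subject to  C p = d
# Stationarity of L(p, lam) = objective + lam^T (C p - d) gives the
# KKT system  [H C^T; C 0] [p; lam] = [g; d].

def constrained_min(H, g, C, d):
    n, m = H.shape[0], C.shape[0]
    kkt = np.block([[H, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([g, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # optimal parameters, multipliers

# Toy problem: pull (p1, p2) toward (3, 2) while enforcing p1 + p2 = 4.
H = np.eye(2)
g = np.array([3.0, 2.0])
C = np.array([[1.0, 1.0]])
d = np.array([4.0])
p, lam = constrained_min(H, g, C, d)
```

In the actual tuning problem the objective and constraint are expensive model evaluations rather than quadratics, so the same Lagrangian condition is attacked iteratively instead of in one linear solve.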
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This field is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the "system perspective" and "user perspective" is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while simultaneously achieving higher system throughput in stringently resource-constrained WVSNs.
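The LR idea, folding the delay constraint into the path cost with a multiplier and adjusting the multiplier by subgradient steps, can be sketched on a toy graph (our example, not the paper's WVSN model or metric):

```python
import heapq

GRAPH = {                       # node: [(neighbor, cost, delay), ...]
    "s": [("a", 1.0, 5.0), ("b", 3.0, 1.0)],
    "a": [("t", 1.0, 5.0)],
    "b": [("t", 3.0, 1.0)],
    "t": [],
}

def shortest(weight):
    """Dijkstra from "s" to "t" on the scalar edge weight(cost, delay)."""
    dist, prev = {"s": 0.0}, {}
    pq = [(0.0, "s")]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c, dl in GRAPH[u]:
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = ["t"], "t"
    while node != "s":
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_metrics(path):
    cost = delay = 0.0
    for u, nxt in zip(path, path[1:]):
        for v, c, dl in GRAPH[u]:
            if v == nxt:
                cost, delay = cost + c, delay + dl
    return cost, delay

def lr_route(delay_bound, steps=20):
    """Relax the delay bound with multiplier lam; keep best feasible path."""
    lam, best = 0.0, None
    for k in range(1, steps + 1):
        path = shortest(lambda c, dl: c + lam * dl)
        cost, delay = path_metrics(path)
        if delay <= delay_bound and (best is None or cost < best[0]):
            best = (cost, path)
        lam = max(0.0, lam + (delay - delay_bound) / k)   # subgradient step
    return best

# The cheapest path s-a-t (cost 2) violates a delay bound of 4 (delay 10);
# the multiplier steers the route to s-b-t (cost 6, delay 2).
cost, route = lr_route(delay_bound=4.0)
```

In the distributed setting of the paper, each node would run an equivalent relaxed-weight computation locally rather than a centralized Dijkstra.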
de Vaal, M H; Gee, M W; Stock, U A; Wall, W A
2016-12-01
Because aortic occlusion is arguably one of the most dangerous aortic manipulation maneuvers during cardiac surgery in terms of perioperative ischemic neurological injury, the purpose of this investigation is to assess the structural mechanical impact resulting from the use of existing and newly proposed occluders. Existing (clinically used) occluders considered include different cross-clamps (CCs) and endo-aortic balloon occlusion (EABO). A novel occluder is also introduced, namely constrained EABO (CEABO), which consists of applying a constrainer externally around the aorta when performing EABO. Computational solid mechanics is employed to investigate each occluder according to a comprehensive list of functional requirements. The potential of a state of occlusion is also considered for the first time. Three different constrainer designs are evaluated for CEABO. Although the CCs were responsible for the highest strains, largest deformation, and most inefficient increase of the occlusion potential, they remain the most stable, simplest, and cheapest occluders. The different CC hinge geometries resulted in poorer performance of CCs used for minimally invasive procedures than conventional ones. CEABO with a profiled constrainer successfully addresses the EABO shortcomings of safety, stability, and positioning accuracy, while maintaining its complexities of operation (disadvantage) and yielding additional functionalities (advantage). Moreover, CEABO is able to achieve the previously unattainable potential to provide a clinically determinable state of occlusion. CEABO offers an attractive alternative to the shortcomings of existing occluders, with its design rooted in achieving the highest patient safety. Copyright © 2016 John Wiley & Sons, Ltd.
Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models
NASA Astrophysics Data System (ADS)
Styron, R. H.; Hetland, E. A.
2014-12-01
Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. 
Work in progress on the Wasatch fault suggests that maximum tectonic stress may also be constrained there, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions for this work include regions where high relief influences fault kinematics (such as Tibet).
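The Mohr-Coulomb criterion assumed in the inversion above can be stated compactly: slip requires the resolved shear stress to reach the frictional resistance μ(σn − p), where p is the pore-fluid pressure. The sketch below encodes that check with illustrative numbers (the 80 MPa normal stress echoes the magnitudes quoted above, but these are not the paper's inversion results):

```python
# Cohesionless Mohr-Coulomb failure on a fault plane:
# slip requires |tau| >= mu * (sigma_n - p), with sigma_n the normal
# stress, p the pore-fluid pressure, and mu the static friction.

def effective_normal_stress(sigma_n, pore_pressure):
    return sigma_n - pore_pressure

def slips(tau, sigma_n, pore_pressure, mu):
    """True if the resolved shear stress reaches frictional resistance."""
    return abs(tau) >= mu * effective_normal_stress(sigma_n, pore_pressure)

# 80 MPa normal stress, friction 0.2, fluid pressure 0.3 * total (illustrative):
sigma_n, p, mu = 80.0, 0.3 * 80.0, 0.2
required = mu * effective_normal_stress(sigma_n, p)   # shear stress needed to slip
```

Raising the fluid pressure or lowering the friction coefficient reduces the required shear stress, which is why these two parameters trade off against tectonic stress in the Bayesian inversion.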
Brevard District Plan for Career Education Development.
ERIC Educational Resources Information Center
Thomas, Olive W.
The Brevard County Plan was written to include goals and objectives for the years 1974-77. Goals for 1974-75 include promoting the career education concept in all district schools (emphasizing the various career education elements at appropriate grade levels), setting up placement services, coordinating county and district goals, program…
NASA Astrophysics Data System (ADS)
Cull, S. C.; Arvidson, R. E.; Seelos, F.; Wolff, M. J.
2010-03-01
Using data from CRISM's Emission Phase Function observations, we attempt to constrain Phoenix soil scattering properties, including soil grain size, single-scattering albedo, and surface phase function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, Sean Campbell; Ao, Tommy; Davis, Jean-Paul
The CHEDS researchers are engaged in a collaborative research project to study the properties of iron and iron alloys under Earth’s core conditions. The Earth’s core, inner and outer, is composed primarily of iron; thus, studying iron and iron alloys at high pressure and temperature conditions will give the best estimate of its properties. Also, comparing studies of iron alloys with known properties of the core can constrain the potential light-element compositions found within the core, such as by fitting sound speeds and densities of iron alloys to established inner-Earth models. One of the less established properties of the core is the thermal conductivity, where current estimates vary by a factor of three. Therefore, one of the primary goals of this collaboration is to make relevant measurements to elucidate this conductivity.
Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
2016-09-01
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with a predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
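The Chebyshev-based reformulation mentioned above has a standard one-sided (Cantelli) form: for any distribution with known mean and standard deviation, P(v > v_max) ≤ ε is guaranteed whenever mean + sqrt((1−ε)/ε)·std ≤ v_max. The sketch below applies that back-off with invented per-unit numbers (not the paper's case study):

```python
import math

def cantelli_margin(std, eps):
    """One-sided Chebyshev (Cantelli) back-off guaranteeing
    P(v > v_max) <= eps for ANY distribution with this std."""
    return math.sqrt((1.0 - eps) / eps) * std

def robust_voltage_limit(mean, std, eps, v_max):
    """Deterministic surrogate for the chance constraint P(v <= v_max) >= 1 - eps."""
    return mean + cantelli_margin(std, eps) <= v_max

# Illustrative per-unit values: online-updated forecast-error statistics
# would feed mean/std here.
mean, std, eps, v_max = 1.03, 0.004, 0.05, 1.05
ok = robust_voltage_limit(mean, std, eps, v_max)
```

Because the bound holds for every distribution with the given moments, the constraint is distributionally robust, at the price of conservatism relative to, say, a Gaussian assumption.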
Phobos-Grunt: Russian Sample Return Mission
NASA Astrophysics Data System (ADS)
Marov, M.
As an important milestone in Mars exploration, the new-generation space vehicle "Phobos-Grunt" is planned to be launched by the Russian Aviation and Space Agency. The project is optimized around a Phobos sample return mission and follow-up missions targeted to study some main asteroid belt bodies, NEOs, and short-period comets. The principal constraint is the utilization of the "Soyuz-Fregat" rather than the "Proton" launcher to accomplish these challenging goals. The vehicle design incorporates innovative SEP technology involving electrojet engines, which allowed us to increase significantly the mission's energetic capabilities, as well as highly autonomous on-board systems. Basic criteria underlying the "Phobos-Grunt" mission scenario, its scientific objectives and rationale, involving Mars observations during the vehicle's insertion into Mars orbit and Phobos approach manoeuvres, are discussed, and an opportunity for international cooperation is suggested.
NASA Technical Reports Server (NTRS)
Wells, James; Scherrer, John; Van Noord, Jonathan; Law, Richard
2015-01-01
In response to the recommendations made in the National Research Council's Earth Science and Applications 2007 Decadal Survey, NASA has initiated the Earth Venture line of mission opportunities. The first orbital mission chosen for this competitively selected, cost and schedule constrained, Principal Investigator-led opportunity is the CYclone Global Navigation Satellite System (CYGNSS). The goal of CYGNSS is to understand the coupling between ocean surface properties, moist atmospheric thermodynamics, radiation, and convective dynamics in the inner core of a tropical cyclone. The CYGNSS mission is comprised of eight Low Earth Observing (LEO) microsatellites that use GPS bi-static scatterometry to measure ocean surface winds.
Estimation of the size of drug-like chemical space based on GDB-17 data.
Polishchuk, P G; Madzhidov, T I; Varnek, A
2013-08-01
The goal of this paper is to estimate the number of realistic drug-like molecules which could ever be synthesized. Unlike previous studies based on exhaustive enumeration of molecular graphs or on combinatorial enumeration of preselected fragments, we used results of constrained graph enumeration by Reymond to establish a correlation between the number of generated structures (M) and the number of heavy atoms (N): logM = 0.584 × N × logN + 0.356. The number of atoms limiting the drug-like chemical space of molecules which follow Lipinski's rules (N = 36) has been obtained from the analysis of the PubChem database. This results in M ≈ 10³³, which lies between the numbers estimated by Ertl (10²³) and by Bohacek (10⁶⁰).
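The fitted correlation can be evaluated directly; the sketch below reproduces the quoted M ≈ 10³³ estimate for N = 36 heavy atoms (interpreting log as log₁₀, as the quoted result implies):

```python
import math

def log10_structures(n_heavy):
    """Fitted correlation from the abstract: log10(M) = 0.584 * N * log10(N) + 0.356."""
    return 0.584 * n_heavy * math.log10(n_heavy) + 0.356

# N = 36 is the drug-like heavy-atom limit derived from PubChem.
log_m = log10_structures(36)
```

Evaluating at N = 36 gives log₁₀(M) ≈ 33.1, i.e. M ≈ 10³³, squarely between Ertl's 10²³ and Bohacek's 10⁶⁰ estimates.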
Tetlock, Philip E; Mitchell, Gregory
2010-03-01
Conceptual distinctions that loom large to philosophers-such as the distinction between utilitarian and deontic decision norms-may be far less salient to most other mortals. Building on an intuitive-politician model of judgment and choice and on the empirical work reported by Bennis, Medin, and Bartels (2010, this issue), we argue that the overriding goal of most decision makers in the paradigms under scrutiny is to offer judgments that are readily defensible and that reinforce their social identities as both cognitively flexible (responsive to evidence and cost-benefit considerations) and morally principled (prepared to defend sacred values and censure those who do not). People are best classified neither as utilitarians nor Kantians but rather as pragmatic social beings embedded in complex cultural-political systems. © The Author(s) 2010.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvartatskyi, O. I., E-mail: alex.chvartatskyy@gmail.com; Sydorenko, Yu. M., E-mail: y-sydorenko@franko.lviv.ua
We introduce a new bidirectional generalization of the (2+1)-dimensional k-constrained Kadomtsev-Petviashvili (KP) hierarchy ((2+1)-BDk-cKPH). This new hierarchy generalizes the (2+1)-dimensional k-cKP hierarchy and the (t_A, τ_B) and (γ_A, σ_B) matrix hierarchies. (2+1)-BDk-cKPH contains a new matrix (1+1)-k-constrained KP hierarchy. Some members of (2+1)-BDk-cKPH are also listed. In particular, it contains matrix generalizations of Davey-Stewartson (DS) systems, the (2+1)-dimensional modified Korteweg-de Vries equation and the Nizhnik equation. (2+1)-BDk-cKPH also includes new matrix (2+1)-dimensional generalizations of the Yajima-Oikawa and Melnikov systems. A Binary Darboux Transformation Dressing Method is also proposed for construction of exact solutions for equations from (2+1)-BDk-cKPH. As an example, the exact form of multi-soliton solutions for a vector generalization of the DS system is given.
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
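The log-normal chance constraints used above admit a standard deterministic equivalent: if ln X ~ N(μ, σ²), then P(X ≤ R) ≥ α holds exactly when R ≥ exp(μ + z_α σ), with z_α the standard normal quantile. The sketch below applies this with invented numbers (not the Erhai Lake parameters):

```python
import math
from statistics import NormalDist

def lognormal_chance_bound(mu, sigma, alpha):
    """Deterministic equivalent of P(X <= R) >= alpha when ln X ~ N(mu, sigma^2):
    the smallest feasible R is exp(mu + z_alpha * sigma)."""
    z_alpha = NormalDist().inv_cdf(alpha)
    return math.exp(mu + z_alpha * sigma)

# Illustrative: size a capacity to cover log-normally distributed runoff
# with 95% probability (constraint-satisfaction level alpha = 0.95).
mu, sigma, alpha = 2.0, 0.5, 0.95
capacity = lognormal_chance_bound(mu, sigma, alpha)
```

Raising the constraint-satisfaction level α increases the required capacity, which is the economy-versus-reliability trade-off the interval solutions above quantify.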
Quantifying How Observations Inform a Numerical Reanalysis of Hawaii
NASA Astrophysics Data System (ADS)
Powell, B. S.
2017-11-01
When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.
In Situ Missions For Investigation of the Climate, Geology and Evolution of Venus
NASA Astrophysics Data System (ADS)
Grinspoon, David
2017-10-01
In situ Exploration of Venus has been recommended by the Decadal Study of the National Research Council. Many high-priority measurements, addressing outstanding first-order, fundamental questions about current processes and evolution of Venus, can only be made from in situ platforms such as entry probes, balloons or landers. These include: measuring noble gases and their isotopes to constrain origin and evolution; measuring stable isotopes to constrain the history of water and other volatiles; measuring trace gas profiles and sulfur compounds for chemical cycles and surface-atmosphere interactions; constraining the coupling of radiation, dynamics and chemistry; making visible and infrared descent images; and measuring surface and sub-surface composition. Such measurements will allow us to deepen our understanding of the origin and evolution of Venus in the context of the terrestrial planets and extrasolar planets, to determine the level and style of current geological activity, and to characterize the divergent climate evolution of Venus and Earth and extend our knowledge of the limits of habitability on hot terrestrial planets.
Spira, Thomas; Lindegren, Mary Lou; Ferris, Robert; Habiyambere, Vincent; Ellerbrock, Tedd
2009-06-01
The expansion of HIV/AIDS care and treatment in resource-constrained countries, especially in sub-Saharan Africa, has generally developed in a top-down manner. Further expansion will involve primary health centers where human and other resources are limited. This article describes the World Health Organization/President's Emergency Plan for AIDS Relief collaboration formed to help scale up HIV services in primary health centers in high-prevalence, resource-constrained settings. It reviews the contents of the Operations Manual developed, with emphasis on the Laboratory Services chapter, which discusses essential laboratory services, both at the center and the district hospital level, laboratory safety, laboratory testing, specimen transport, how to set up a laboratory, human resources, equipment maintenance, training materials, and references. The chapter provides specific information on essential tests and generic job aids for them. It also includes annexes containing a list of laboratory supplies for the health center and sample forms.
NASA Astrophysics Data System (ADS)
Druken, Bridget Kinsella
Lesson study, a teacher-led vehicle for inquiring into teacher practice through creating, enacting, and reflecting on collaboratively designed research lessons, has been shown to improve mathematics teacher practice in the United States, such as improving knowledge about mathematics, changing teacher practice, and developing communities of teachers. Though it has been described as a sustainable form of professional development, little research exists on what might support teachers in continuing to engage in lesson study after a grant ends. This qualitative and multi-case study investigates the sustainability of lesson study as mathematics teachers engage in a district scale-up lesson study professional experience after participating in a three-year California Mathematics Science Partnership (CaMSP) grant to improve algebraic instruction. To do so, I first provide a description of material (e.g. curricular materials and time), human (attending district trainings and interacting with mathematics coaches), and social (qualities like trust, shared values, common goals, and expectations developed through relationships with others) resources present in the context of two school districts as reported by participants. I then describe practices of lesson study reported to have continued. I also report on teachers' conceptions of what it means to engage in lesson study. I conclude by describing how these results suggest factors that supported and constrained teachers' in continuing lesson study. To accomplish this work, I used qualitative methods of grounded theory informed by a modified sustainability framework on interview, survey, and case study data about teachers, principals, and Teachers on Special Assignment (TOSAs). Four cases were selected to show the varying levels of lesson study practices that continued past the conclusion of the grant. 
Analyses reveal varying levels of integration, linkage, and synergy among both formally and informally arranged groups of teachers. High levels of integration and linkage among groups of teachers supported them in sustaining lesson study practices. Groups of teachers with low levels of integration but with linked individuals sustained some level of practice, whereas low levels of both integration and linkage constrained teachers from continuing lesson study at their site. Additionally, teachers' visions of lesson study and its uses shaped the types of activities teachers engaged in, with well-developed conceptions of lesson study supporting, and limited visions constraining, the ability to attract or align resources to continue lesson study practices. Principals' support, teacher autonomy, and cultures of collaboration or isolation were also factors that either supported or constrained teachers' ability to continue lesson study. These analyses provide practical implications on how to support mathematics teachers in continuing lesson study, and theoretical contributions toward developing the construct of sustainability within mathematics education research.
Pitcher, T J; Lam, M E; Ainsworth, C; Martindale, A; Nakamura, K; Perry, R I; Ward, T
2013-10-01
This paper reports recent developments in Rapfish, a normative, scalable and flexible rapid appraisal technique that integrates both ecological and human dimensions to evaluate the status of fisheries in reference to a norm or goal. Appraisal status targets may be sustainability, compliance with a standard (such as the UN code of conduct for responsible fisheries) or the degree of progress in meeting some other goal or target. The method combines semi-quantitative (e.g. ecological) and qualitative (e.g. social) data via multiple evaluation fields, each of which is assessed through scores assigned to six to 12 attributes or indicators: the scoring method allows user flexibility to adopt a wide range of utility relationships. For assessing sustainability, six evaluation fields have been developed: ecological, technological, economic, social, ethical and institutional. Each field can be assessed directly with a set of scored attributes, or several of the fields can be dealt with in greater detail using nested subfields that themselves comprise multidimensional Rapfish assessments (e.g. the hierarchical institutional field encompasses both governance and management, including a detailed analysis of legality). The user has the choice of including all or only some of the available sustainability fields. For the attributes themselves, there will rarely be quantitative data, but scoring allows these items to be estimated. Indeed, within a normative framework, one important advantage with Rapfish is transparency of the rigour, quality and replicability of the scores. The Rapfish technique employs a constrained multidimensional ordination that is scaled to situate data points within evaluation space. Within each evaluation field, results may be presented as a two-dimensional plot or in a one-dimensional rank order. Uncertainty is expressed through the probability distribution of Monte-Carlo simulations that use the C.L. on each original observation. 
Overall results of the multidisciplinary analysis may be shown using kite diagrams that compare different locations, time periods (including future projections) and management scenarios, which make policy trade-offs explicit. These enhancements are now available in the R programming language and on an open website, where users can run Rapfish analyses by downloading the software or uploading their data to a user interface. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
NASA Astrophysics Data System (ADS)
Sherman, Sydney; Jogee, Shardha; Florez, Jonathan; Stevans, Matthew L.; Kawinwanichakij, Lalitwadee; Finkelstein, Steven L.; Papovich, Casey J.; Ciardullo, Robin; Gronwall, Caryl
2017-06-01
We are currently conducting an unprecedented study of how nearly 0.6 million massive galaxies (Mstar > 1010 M⊙) grow their stars and dark matter halos over an enormous comoving volume (0.45 Gpc3) of the 1.9 < z < 3.5 universe, when cosmic star formation and black hole activity peak, and proto-clusters begin to collapse. This 24 deg2 study of the SDSS Stripe 82 field utilizes the powerful combination of five photometric surveys (DECam ugriz, NEWFIRM K-band, Spitzer-IRAC, Herschel-SPIRE, and Stripe 82X X-ray), along with future blind optical spectroscopy from the HETDEX project. Central to this study, and other large-area surveys like it, is the dependence on photometric redshifts and spectral energy distribution (SED) fitting to constrain the lookback time and properties of observed galaxies. Unfortunately, these methods are primarily based on galaxies in the local universe and often introduce large uncertainties when applied to high redshift systems. In this poster, we perform systematic tests of the photometric redshift code EAZY (Brammer et al. 2008), and SED fitting codes FAST (Kriek et al. 2009) and MAGPHYS (Da Cunha et al. 2008). We fine-tune input model choices to SED fitting codes (such as SSP, magnitude prior, SFH, IMF, and dust law) using 2 < z < 4 galaxies from theoretical cosmological simulations, with the goal of better constraining the uncertainty based on model choices. The results of this test are then used to inform the choice of input models used when constraining the properties of galaxies observed in our multi-wavelength study. In the era of large-area photometric surveys with little to no spectroscopic coverage, this work has broad implications for the characterization of galaxies at early cosmic times. We gratefully acknowledge support from NSF grants AST-1614798 and AST-1413652.
Hydrogen and Ferric Iron in Mars Materials
NASA Technical Reports Server (NTRS)
Dyar, Melinda D.
2004-01-01
Knowledge of oxygen and hydrogen fugacity is of paramount importance in constraining phase equilibria and crystallization processes of melts, as well as understanding the partitioning of elements between the core and silicate portions of terrestrial planets. H and Fe(3+) must both be analyzed in order to reconstruct hydrogen and oxygen fugacities on Mars. To date, SIMS data have elucidated D/H and H contents of hydrous phases in SNC meteorites, but until now anhydrous martian minerals have not been systematically examined for trace hydrogen. Ferric iron has been quantified using XANES in many martian phases, but integrated studies of both Fe(3+) and H on the same spots are really needed to address the H budget. Finally, the effects of shock on both Fe(3+) and H in hydrous and anhydrous phases must be quantified. Thus, the overall goal of this research was to understand the oxygen and hydrogen fugacities under which martian samples crystallized. In this one-year research project, we approached this problem by 1) characterizing Fe(3+) and H contents of SNC meteorites using both bulk (Mossbauer spectroscopy and uranium extraction, respectively) and microscale (synchrotron micro-XANES and SIMS) methods; 2) relating Fe(3+) and H contents of martian minerals to their oxygen and hydrogen fugacities through analysis of experimentally equilibrated phases (for pyroxene) and through study of volcanic rocks in which the oxygen and hydrogen fugacities can be independently constrained (for feldspar); and 3) studying the effects of shock processes on Fe(3+) and H contents of the phases of interest. Results have been used to assess quantitatively the distribution of H and Fe(3+) among phases in the martian interior, which will better constrain the geodynamic processes of the interior, as well as the overall hydrogen and water budgets on Mars. There were no inventions funded by this research.
A Proof of Concept for In-Situ Lunar Dating
NASA Astrophysics Data System (ADS)
Anderson, F. S.; Whitaker, T.; Levine, J.; Draper, D. S.; Harris, W.; Olansen, J.; Devolites, J.
2015-12-01
We have obtained improved 87Rb-87Sr isochrons for the Duluth Gabbro, an analog for lunar KREEP rocks, using a prototype spaceflight laser ablation resonance ionization mass spectrometer (LARIMS). The near side of the Moon comprises previously unsampled, KREEP-rich, young lunar basalts critical for calibrating the <3.5 Ga history of the Moon, and hence of the solar system. Measurement of the Duluth Gabbro is a proof of concept of lunar in-situ dating to constrain lunar history. Using a novel normalization approach, and by correcting for matrix-dependent isotope effects, we have been able to obtain a date of 1100 ± 200 Ma (Figure 1), compared to the previously established thermal ionization mass spectrometry measurement of 1096 ± 14 Ma. The precision of LARIMS is sufficient to constrain the current 1 Ga uncertainty of the lunar flux curve, allowing us to reassess the timing of peak lunar volcanism and constrain lunar thermal evolution. Furthermore, an updated lunar flux curve has implications throughout the solar system. For example, Mars could have undergone a longer epoch of voluminous, shield-forming volcanism and associated mantle evolution, as well as a longer era of abundant volatiles and hence potential habitability. These alternative chronologies could even affect our understanding of the evolution of life on Earth: under the classic chronology, life is thought to have originated after the dwindling of bombardment, but under the alternative chronology, it might have appeared during heavy bombardment. In order to resolve the science questions regarding the history of the Moon, and in light of the Duluth Gabbro results, we recently proposed a Discovery mission called MARE: The Moon Age and Regolith Explorer.
MARE would accomplish these goals by landing on a young, nearside lunar basalt flow southwest of Aristarchus that has a crater density corresponding to a highly uncertain absolute age, collecting >10 rock samples, and assessing their radioisotopic age, geochemistry, and mineralogy.
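The Rb-Sr isochron method underlying these dates reduces to a line fit: cogenetic samples plot on a line in (87Rb/86Sr, 87Sr/86Sr) space whose slope gives the age via t = ln(1 + slope)/λ. The sketch below recovers a 1.1 Ga age from noise-free synthetic data; the decay constant and sample ratios are conventional/illustrative values, not the LARIMS measurements:

```python
import math

LAMBDA_RB87 = 1.42e-11        # 87Rb decay constant in 1/yr (conventional value)

def isochron_age(rb_sr, sr_sr):
    """Least-squares isochron slope -> age t = ln(1 + slope) / lambda."""
    n = len(rb_sr)
    mx = sum(rb_sr) / n
    my = sum(sr_sr) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(rb_sr, sr_sr))
             / sum((x - mx) ** 2 for x in rb_sr))
    return math.log1p(slope) / LAMBDA_RB87

# Synthetic cogenetic samples for a 1.1 Ga rock (initial 87Sr/86Sr = 0.710):
t_true = 1.1e9
slope = math.expm1(LAMBDA_RB87 * t_true)
rb_sr = [0.5, 1.0, 2.0, 3.5, 5.0]
sr_sr = [0.710 + slope * x for x in rb_sr]
age_ma = isochron_age(rb_sr, sr_sr) / 1e6
```

With real isotope-ratio measurements the scatter about the fitted line, rather than perfect collinearity, sets the quoted ± uncertainty on the date.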
The potential for iron reduction in upland soils in Calhoun Critical Zone Observatory
NASA Astrophysics Data System (ADS)
Thompson, A.; Chen, C.; Noor, N.; Hodges, C. A.; Barcellos, D.; Richter, D. D., Jr.
2017-12-01
Fe redox cycling plays an important role in organic matter preservation and degradation, and in the fate of nutrients and contaminants. Despite its importance, Fe redox cycling in non-flooded upland soils has been underappreciated, although many upland terrestrial ecosystems have episodes of low-redox events and an abundance of anoxic microsites. Soil Fe reduction is generally constrained by C availability, the reactivity of Fe(III) oxyhydroxides, and the abundance of Fe-reducing bacteria. The goal of this study was to determine the potential for Fe reduction in upland soils under varying land uses (hardwood, pine, and cultivated soils) from the Calhoun Critical Zone Observatory. Fresh field soils from multiple depths were incubated in the lab without amendments under anoxic conditions for 3 weeks to determine the native potential for soil Fe reduction. To assess the limiting factors, the soils were amended with factorial mixtures of the following: (1) organic substrates (glucose and alanine); (2) bioavailable Fe (ferrihydrite); and (3) Fe-reducing bacteria (Shewanella oneidensis strain MR-1). Results showed that Fe reduction potential generally decreased with soil depth and is minimal below 1 m of the soil profile. The availability of Fe(III) minerals did not constrain Fe reduction potential in the pine and hardwood soils; Fe(III) availability only slightly limited the potential for Fe reduction in the cultivated soils, which have the lowest extractable Fe by ascorbate-citrate. Labile C constrained Fe reduction in the hardwood and cultivated soils, but not in the pine soils, which had the highest extractable C by K2SO4. In addition, we found that the more energetic C source (glucose) facilitated more Fe reduction in the subsurface soil than did alanine. Finally, the abundance of Fe-reducing bacteria limited Fe reduction potential in almost all of these soils, particularly the pine soils.
A Photometric Survey of Centaur and Trans-Neptunian Objects
NASA Astrophysics Data System (ADS)
Tegler, S. C.; Romanishin, W.; Weintraub, D. A.; Fink, U.; Fevig, R.
1997-07-01
We present a progress report on our program at the Steward Observatory 1.5-m and 2.3-m telescopes, the NASA Infrared Telescope Facility, and the Kitt Peak National Observatory 4.0-m telescope to carry out a B, V, R, J, H, and K band photometric survey of Centaur and Trans-Neptunian Objects (TNOs). The goals of our program are to: (1) constrain the sizes, shapes, and spin-axis orientations of these objects, and (2) determine whether these objects exhibit a diversity of colors and therefore a diversity of surface compositions. We have constrained the sizes, shapes, and spin-axis orientations of the TNOs 1993 SC and 1994 TB and the Centaur 1995 GO. If we assume the axes of rotation are orthogonal to our line of sight, then 1993 SC is spherical (semi-major to semi-minor axis ratio, a/b <= 1.12) and 1994 TB and 1995 GO are elongated (a/b ~ 1.4). We find that the TNOs 1993 SC and 1994 TB and Centaurs 1993 HA2 and 5145 Pholus have extraordinarily red B-V and V-R colors, among the reddest in the Solar System. Extraordinarily red colors are consistent with surfaces rich in complex carbon-bearing molecules. The B-V, V-R, V-J, and J-K colors of 1995 GO are much less red (similar to redder Trojan asteroids). In the future, once we have a statistically significant number of observations, we will look for a Centaur and TNO color trend with perihelion distance. In addition, we will examine individual Centaurs and TNOs for a color trend with longitude. From such an analysis we will constrain the importance of cosmic ray bombardment, collisions, and coma formation on the surface evolution of Centaurs and TNOs. This research is supported by the NASA Origins of Solar Systems Program.
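The shape constraints quoted above follow from the standard lower bound on a triaxial body's axis ratio given its peak-to-peak rotational lightcurve amplitude, assuming the spin axis is perpendicular to the line of sight. A small sketch (the amplitudes used are illustrative back-calculations, not the survey's measured values):

```python
# Lower bound on the semi-major/semi-minor axis ratio a/b from the
# peak-to-peak lightcurve amplitude dm (magnitudes), for a spin axis
# perpendicular to the line of sight: a/b = 10**(0.4 * dm).
def axis_ratio(delta_mag):
    return 10 ** (0.4 * delta_mag)

# A ~0.12 mag amplitude implies a nearly spherical body (a/b ~ 1.12),
# while a ~0.37 mag amplitude implies an elongated one (a/b ~ 1.4).
print(round(axis_ratio(0.12), 2))  # -> 1.12
print(round(axis_ratio(0.37), 2))  # -> 1.41
```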
Curtis, J. Randall; Tulsky, James A.
2018-01-01
Abstract Background: High-quality care for seriously ill patients aligns treatment with their goals and values. Failure to achieve “goal-concordant” care is a medical error that can harm patients and families. Because communication between clinicians and patients enables goal concordance and also affects the illness experience in its own right, healthcare systems should endeavor to measure communication and its outcomes as a quality assessment. Yet, little consensus exists on what should be measured and by which methods. Objectives: To propose measurement priorities for serious illness communication and its anticipated outcomes, including goal-concordant care. Methods: We completed a narrative review of the literature to identify links between serious illness communication, goal-concordant care, and other outcomes. We used this review to identify gaps and opportunities for quality measurement in serious illness communication. Results: Our conceptual model describes the relationship between communication, goal-concordant care, and other relevant outcomes. Implementation-ready measures to assess the quality of serious illness communication and care include (1) the timing and setting of serious illness communication, (2) patient experience of communication and care, and (3) caregiver bereavement surveys that include assessment of perceived goal concordance of care. Future measurement priorities include direct assessment of communication quality, prospective patient or family assessment of care concordance with goals, and assessment of the bereaved caregiver experience. Conclusion: Improving serious illness care necessitates ensuring that high-quality communication has occurred and measuring its impact. Measuring patient experience and receipt of goal-concordant care should be our highest priority. We have the tools to measure both. PMID:29091522
Sanders, Justin J; Curtis, J Randall; Tulsky, James A
2018-03-01
High-quality care for seriously ill patients aligns treatment with their goals and values. Failure to achieve "goal-concordant" care is a medical error that can harm patients and families. Because communication between clinicians and patients enables goal concordance and also affects the illness experience in its own right, healthcare systems should endeavor to measure communication and its outcomes as a quality assessment. Yet, little consensus exists on what should be measured and by which methods. To propose measurement priorities for serious illness communication and its anticipated outcomes, including goal-concordant care. We completed a narrative review of the literature to identify links between serious illness communication, goal-concordant care, and other outcomes. We used this review to identify gaps and opportunities for quality measurement in serious illness communication. Our conceptual model describes the relationship between communication, goal-concordant care, and other relevant outcomes. Implementation-ready measures to assess the quality of serious illness communication and care include (1) the timing and setting of serious illness communication, (2) patient experience of communication and care, and (3) caregiver bereavement surveys that include assessment of perceived goal concordance of care. Future measurement priorities include direct assessment of communication quality, prospective patient or family assessment of care concordance with goals, and assessment of the bereaved caregiver experience. Improving serious illness care necessitates ensuring that high-quality communication has occurred and measuring its impact. Measuring patient experience and receipt of goal-concordant care should be our highest priority. We have the tools to measure both.
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
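The transformation idea can be sketched concretely: a mapping such as x = a + (b - a)·sin²(y) carries all of R onto the interval [a, b], so a bound-constrained problem in x becomes an unconstrained problem in y. In this sketch SciPy's BFGS stands in for the Davidon-Fletcher-Powell algorithm used in the paper (both are quasi-Newton methods applied to the transformed problem):

```python
import numpy as np
from scipy.optimize import minimize

# Map all of R onto [a, b]: any unconstrained y yields a feasible x.
def to_constrained(y, a, b):
    return a + (b - a) * np.sin(y) ** 2

# Example: minimize (x - 3)^2 subject to 0 <= x <= 2.
# The constrained minimizer lies on the boundary, at x = 2.
a, b = 0.0, 2.0
objective = lambda y: (to_constrained(y, a, b) - 3.0) ** 2

result = minimize(objective, x0=np.array([0.5]), method="BFGS")
x_opt = to_constrained(result.x[0], a, b)
print(x_opt)  # -> approximately 2.0
```

Note the practical appeal described in the abstract: the unconstrained solver never sees the constraints at all, so any starting y is feasible.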
Foster, J.S. Jr.
1960-04-19
A compact electronic device capable of providing short-time, high-density outputs of neutrons is described. The device of the invention includes an evacuated vacuum housing adapted to be supplied with a deuterium, tritium, or other atmosphere and means for establishing an electrical discharge along a path through the gas. An energized solenoid is arranged to constrain the ionized gas (plasma) along the path. An anode bearing adsorbed or adherent target material is arranged to enclose the constrained plasma. To produce neutrons, a high voltage is applied from appropriate supply means between the plasma and anode to accelerate ions from the plasma to impinge upon the target material, e.g., comprising deuterium.
NASA Astrophysics Data System (ADS)
Sullivan, Christopher James
Weak interactions involving atomic nuclei are critical components in a broad range of astrophysical phenomena. As allowed Gamow-Teller transitions are the primary path through which weak interactions in nuclei operate in astrophysical contexts, the constraint of these nuclear transitions is an important goal of nuclear astrophysics. In this work, the charged-current nuclear weak interaction known as electron capture is studied in the context of stellar core-collapse supernovae (CCSNe). Specifically, the sensitivity of the core-collapse and early post-bounce phases of CCSNe to nuclear electron capture rates is examined. Electron capture rates are adjusted by factors consistent with uncertainties indicated by comparing theoretical rates to those deduced from charge-exchange and beta-decay measurements. With the aid of such sensitivity studies, the diverse role of electron capture on thousands of nuclear species is constrained to a few tens of nuclei near N ≈ 50 and A ≈ 80 which dictate the primary response of CCSNe to nuclear electron capture. As electron capture is shown to be a leading-order uncertainty during the core-collapse phase of CCSNe, future experimental and theoretical efforts should seek to constrain the rates of nuclei in this region. Furthermore, neutral-current neutrino-nucleus interactions in the tens-of-MeV energy range are important in a variety of astrophysical environments, including core-collapse supernovae, as well as in the synthesis of some of the solar system's rarest elements. Estimates for inelastic neutrino scattering on nuclei are also important for the construction of neutrino detectors aimed at the detection of astrophysical neutrinos. Due to the small cross sections involved, direct measurements are rare and have only been performed on a few nuclei. For this reason, indirect measurements provide a unique opportunity to constrain the nuclear transition strength needed to infer inelastic neutrino-nucleus cross sections.
Herein the (6Li, 6Li‧) inelastic scattering reaction at 100 MeV/u is shown to indirectly select the relevant transitions for inelastic neutrino-nucleus scattering. Specifically, the probe's unique selectivity for isovector spin-transfer excitations (ΔS = 1, ΔT = 1, ΔTz = 0) is demonstrated, thereby allowing the extraction of Gamow-Teller transition strength in the inelastic channel. Finally, the development and performance of a newly established technique for the subfield of artificial intelligence known as neuroevolution is described. While separate from the physics that is discussed, these algorithmic advancements seek to improve the adoption of machine learning in the scientific domain by enabling neuroevolution to take advantage of modern heterogeneous compute architectures. Because the evolution of neural network populations offloads the choice of specific details about the neural networks to an evolutionary search algorithm, neuroevolution can increase the accessibility of machine learning. However, the evolution of neural networks through parameter and structural space presents a novel divergence problem when mapping the evaluation of these networks to many-core architectures. The principal focus of the algorithm optimizations described herein is on improving the feed-forward evaluation time when tens-to-hundreds of thousands of heterogeneous neural networks are evaluated concurrently.
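The population-evaluation problem above can be illustrated for the easy, homogeneous case: a population of fixed-topology feed-forward networks evaluated on a shared input batch in one vectorized pass. Heterogeneous evolved topologies, the thesis's actual concern, break exactly this uniformity; the layer sizes and population count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Batched feed-forward evaluation of a population of fixed-topology
# two-layer networks. Each network has its own weights; all networks
# see the same evaluation batch.
pop, n_in, n_hidden, n_out, batch = 1000, 4, 8, 2, 32

W1 = rng.normal(size=(pop, n_hidden, n_in))   # per-network first layer
W2 = rng.normal(size=(pop, n_out, n_hidden))  # per-network second layer
x = rng.normal(size=(batch, n_in))            # shared evaluation inputs

h = np.tanh(np.einsum("phi,bi->pbh", W1, x))  # hidden activations (pop, batch, hidden)
y = np.einsum("poh,pbh->pbo", W2, h)          # outputs (pop, batch, out)
print(y.shape)  # -> (1000, 32, 2)
```

Once topologies differ across the population, the weight tensors no longer stack into rectangular arrays, which is the divergence problem the evaluated work addresses.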
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Ying -Qi; Segall, Paul; Bradley, Andrew
Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
NASA Astrophysics Data System (ADS)
Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle
2017-10-01
Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
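The kind of Bayesian inversion described above can be illustrated with a minimal Metropolis sampler. The one-parameter forward model, synthetic data, flat prior bounds, and step size below are stand-ins for illustration only, not the study's conduit model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Infer one parameter theta (say, log10 of a permeability scale) from
# noisy observations, assuming a Gaussian likelihood and a flat prior.
theta_true, sigma = -11.4, 0.3
data = theta_true + sigma * rng.normal(size=50)  # synthetic observations

def log_post(theta):
    if not -15.0 < theta < -8.0:      # flat prior on a wide interval
        return -np.inf
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2

samples, theta = [], -10.0            # deliberately poor starting point
for _ in range(5000):
    prop = theta + 0.2 * rng.normal() # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                  # Metropolis accept/reject
    samples.append(theta)

est = np.mean(samples[1000:])         # posterior mean after burn-in
```

Real inversions of this type replace the trivial likelihood with an expensive forward model run per sample, which is why parameter counts and data choices matter so much.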
21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in commercial...
21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in commercial...
Goal importance within planned behaviour theory as 'the' predictor of study behaviour in college.
Sideridis, G D; Kaissidis-Rodafinos, A
2001-12-01
The theory of planned behaviour has rarely been used to explain student study behaviour and achievement. Although successful, the theory has been criticised for not including important cognitions, so goal importance was added in the present study. Goal importance refers to the weight, or importance, an individual assigns to achieving a specific goal (Hollenbeck & Williams, 1987). The purpose of Study 1 was to explain the study behaviour habits of first-year college students, using (a) Ajzen and Madden's (1986) theory of planned behaviour, and (b) planned behaviour with the addition of goal importance. The purpose of Study 2 was to replicate the findings of Study 1. The sample of Study 1 included 149 first-year students of an American college located in northern Greece; Study 2 included 85 first-year students of the same institution. The students in Study 1 were given a questionnaire, including all elements of the theory of planned behaviour and goal importance, four weeks prior to the end of the spring 1998 semester, and those in Study 2 in the autumn of 1998. The data were modelled using Covariance Structural Modelling (CSM) and EQS 5.7b (Bentler, 1998). The planned behaviour model was not well supported in Study 1, providing a Comparative Fit Index (CFI) of .83. However, when goal importance was included in the equation, the resulting structural model produced a CFI of .94. The final structural model of Study 1 was re-tested with the sample of Study 2 and produced a CFI = .95. Findings suggest that goal importance is the causal agent in directing all elements necessary to achieve high levels of study behaviour. Future studies should examine the role of goal importance with other behaviours as well.
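The CFI values reported above are computed from the chi-square statistics and degrees of freedom of the fitted model and a baseline (null) model. A minimal sketch of the standard formula (the chi-square and df values below are illustrative, not taken from the study):

```python
# Comparative Fit Index:
#   CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_m - df_m, chi2_b - df_b, 0)
def cfi(chi2_model, df_model, chi2_null, df_null):
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model, 0.0)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

# A model whose excess chi-square is 6% of the null model's excess
# yields CFI = 0.94 (values chosen to illustrate, not the study's).
print(round(cfi(chi2_model=66.0, df_model=60.0, chi2_null=160.0, df_null=60.0), 2))  # -> 0.94
```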
An Investigation of the Goals for an Environmental Science Course: Teacher and Student Perspectives
ERIC Educational Resources Information Center
Blatt, Erica N.
2015-01-01
This investigation uses an ethnographic case study approach to explore the benefits and challenges of including a variety of goals within a high school Environmental Science curriculum. The study focuses on environmental education (EE) goals established by the Belgrade Charter (1975), including developing students' environmental awareness and…
ERIC Educational Resources Information Center
Vogler, Jane S.; Bakken, Linda
2007-01-01
This study explored a theory for motivation which included aspects of both attribution theory and goal theory. Motivational variables included beliefs about intelligence (entity or incremental), goal orientation (mastery/learning, performance-approach, performance-avoidance) and avoidant behaviours. Grades 4 and 5 students from a large,…
21 CFR 888.3300 - Hip joint metal constrained cemented or uncemented prosthesis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Hip joint metal constrained cemented or uncemented... metal constrained cemented or uncemented prosthesis. (a) Identification. A hip joint metal constrained... Administration on or before December 26, 1996 for any hip joint metal constrained cemented or uncemented...
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, Dheepak
This paper is an overview of the Power System Simulation Toolbox (psst). psst is an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC), and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs), and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that will make it useful for researchers to customize, expand, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for deployment on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks it against standard IEEE test cases to show the advanced features and performance of the toolbox.
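A DCOPF of the kind such toolboxes solve is, at its core, a linear program. A two-bus economic dispatch sketch using SciPy (illustrative only; this is not psst's API, and the costs, load, and line limit are invented):

```python
from scipy.optimize import linprog

# Two buses: a cheap generator at bus 1, an expensive one at bus 2,
# a 100 MW load at bus 2, and an 80 MW limit on the connecting line
# (under the DC approximation, the flow bus 1 -> bus 2 equals g1).
cost = [10.0, 30.0]                    # $/MWh for g1, g2
A_eq, b_eq = [[1.0, 1.0]], [100.0]     # power balance: g1 + g2 = load
A_ub, b_ub = [[1.0, 0.0]], [80.0]      # line limit: g1 <= 80
bounds = [(0, None), (0, None)]        # generator output limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)    # -> [80. 20.]: the line congests, so the costly unit must run
print(res.fun)  # -> 1400.0 total dispatch cost
```

SCUC and SCED extend this same pattern with binary commitment variables and contingency (security) constraints, respectively.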
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
(abstract) Science-Project Interaction in the Low-Cost Mission
NASA Technical Reports Server (NTRS)
Wall, Stephen D.
1994-01-01
Large, complex, and highly optimized missions have performed most of the preliminary reconnaissance of the solar system. As a result we have now mapped significant fractions of its total surface (or surface-equivalent) area. Now, however, scientific exploration of the solar system is undergoing a major change in scale, and existing missions find it necessary to limit costs while fulfilling existing goals. In the future, NASA's Discovery program will continue the reconnaissance, exploration, and diagnostic phases of planetary research using lower cost missions, which will include lower cost mission operations systems (MOS). Historically, one of the more expensive functions of MOS has been its interaction with the science community. Traditional MOS elements embraced by this interaction include mission planning, science (and engineering) event conflict resolution, sequence optimization and integration, data production (e.g., assembly, enhancement, quality assurance, documentation, archive), and other science support services. In the past, the payoff from these efforts has been that use of mission resources has been highly optimized, constraining resources have generally been completely consumed, and data products have been accurate and well documented. But because these functions are expensive we are now challenged to reduce their cost while preserving the benefits. In this paper, we will consider ways of revising the traditional MOS approach that might save project resources while retaining a high degree of service to the Projects' customers. Pre-launch, science interaction can be made simpler by limiting the number of instruments and by providing greater redundancy in mission plans. Post-launch, possibilities include prioritizing data collection into a few categories, easing requirements on real-time or quick-look data delivery, and closer integration of scientists into the mission operation.
Mantle viscosity structure constrained by joint inversions of seismic velocities and density
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Moulik, P.; Lekic, V.
2017-12-01
The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.
The Urban Environmental Monitoring/100 Cities Project: Legacy of the First Phase and Next Steps
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Wentz, Elizabeth A.; Brazel, Anthony; Netzband, Maik; Moeller, Matthias
2009-01-01
The Urban Environmental Monitoring (UEM) project, now known as the 100 Cities Project, at Arizona State University (ASU) is a baseline effort to collect and analyze remotely sensed data for 100 urban centers worldwide. Our overarching goal is to use remote sensing technology to better understand the consequences of rapid urbanization through advanced biophysical measurements, classification methods, and modeling, which can then be used to inform public policy and planning. Urbanization represents one of the most significant alterations that humankind has made to the surface of the earth. In the early 20th century, there were fewer than 20 cities in the world with populations exceeding 1 million; today, there are more than 400. The consequences of urbanization include the transformation of land surfaces from undisturbed natural environments to land that supports different forms of human activity, including agriculture, residential, commercial, industrial, and infrastructure such as roads and other types of transportation. Each of these land transformations has impacted, to varying degrees, the local climatology, hydrology, geology, and biota that predate human settlement. It is essential that we document, to the best of our ability, the nature of land transformations and the consequences to the existing environment. The focus in the UEM project since its inception has been on rapid urbanization. Rapid urbanization is occurring in hundreds of cities worldwide as population increases and people migrate from rural communities to urban centers in search of employment and a better quality of life. The unintended consequences of rapid urbanization have the potential to cause serious harm to the environment, to human life, and to the resulting built environment because rapid development constrains and rushes decision making. Such rapid decision making can result in poor planning, ineffective policies, and decisions that harm the environment and the quality of human life.
Slower, more deliberate decision making could result in more favorable outcomes. The harm to the environment includes poor air quality, soil erosion, polluted rivers and aquifers, and loss of wildlife habitat. Human life is then threatened by increased potential for the spread of disease, human conflict, environmental hazards, and a diminished quality of life. The built environment is potentially threatened when cities are built in areas that can be impacted by events such as hurricanes, tsunamis, earthquakes, fires, and landslides. Our goals include assessing the threat of such events to cities and the people living there.
Preston, Charles; Chahal, Harinder S; Porrás, Analia; Cargill, Lucette; Hinds, Maryam; Olowokure, Babatunde; Cummings, Rudolph; Hospedales, James
2016-05-01
Improving basic capacities for regulation of medicines and health technologies through regulatory systems strengthening is particularly challenging in resource-constrained settings. "Regionalization"-an approach in which countries with common histories, cultural values, languages, and economic conditions work together to establish more efficient systems-may be one answer. This report describes the Caribbean Regulatory System (CRS), a regionalization initiative being implemented in the mostly small countries of the Caribbean Community and Common Market (CARICOM). This initiative is an innovative effort to strengthen regulatory systems in the Caribbean, where capacity is limited compared to other subregions of the Americas. The initiative's concept and design include a number of features and steps intended to enhance sustainability in resource-constrained contexts. These include 1) leveraging existing platforms for centralized cooperation, governance, and infrastructure; 2) strengthening regulatory capacities with the largest potential public health impact; 3) incorporating policies that promote reliance on reference authorities; 4) changing the system to encourage industry to market their products in CARICOM (e.g., using a centralized portal of entry to reduce regulatory burdens); and 5) building human resource capacity. If implemented properly, the CRS will be self-sustaining through user fees. The experience and lessons learned thus far in implementing this initiative, described in this report, can serve as a case study for the development of similar regulatory strengthening initiatives in resource-constrained environments.
NASA Astrophysics Data System (ADS)
Liu, Ruo-Yu; Taylor, Andrew; Wang, Xiang-Yu; Aharonian, Felix
2017-01-01
By interacting with the cosmic background photons during their propagation through intergalactic space, ultrahigh energy cosmic rays (UHECRs) produce energetic electron/positron pairs and photons which will initiate electromagnetic cascades, contributing to the isotropic gamma-ray background (IGRB). The generated gamma-ray flux level depends strongly on the redshift evolution of the UHECR sources. Recently, the Fermi-LAT collaboration reported that 86^{+16}_{-14}% of the total extragalactic gamma-ray flux comes from extragalactic point sources, including unresolved ones. This leaves limited room for the diffuse gamma rays generated via UHECR propagation, and subsequently constrains their source distribution in the Universe. Normalizing the total cosmic ray energy budget with the observed UHECR flux in the energy band of (1-4)×10^18 eV, we calculate the diffuse gamma-ray flux generated through UHECR propagation. We find that, in order not to overshoot the new IGRB limit, these sub-ankle UHECRs should be produced mainly by nearby sources, with a possible non-negligible contribution from our Galaxy. The distance to the majority of UHECR sources can be further constrained if a given fraction of the observed IGRB at 820 GeV originates from UHECRs. We note that our result should be conservative, since there may be various other contributions to the IGRB that are not included here.
Morgan, Jake R; Kim, Arthur Y; Naggie, Susanna; Linas, Benjamin P
2018-01-01
Direct acting antiviral hepatitis C virus (HCV) therapies are highly effective but costly. Wider adoption of an 8-week ledipasvir/sofosbuvir treatment regimen could result in significant savings, but may be less efficacious compared with a 12-week regimen. We evaluated outcomes under a constrained budget and cost-effectiveness of 8 vs 12 weeks of therapy in treatment-naïve, noncirrhotic, genotype 1 HCV-infected black and nonblack individuals and considered scenarios of IL28B and NS5A resistance testing to determine treatment duration in sensitivity analyses. We developed a decision tree to use in conjunction with Monte Carlo simulation to investigate the cost-effectiveness of recommended treatment durations and the population health effect of these strategies given a constrained budget. Outcomes included the total number of individuals treated and attaining sustained virologic response (SVR) given a constrained budget and incremental cost-effectiveness ratios. We found that treating eligible (treatment-naïve, noncirrhotic, HCV-RNA <6 million copies) individuals with 8 weeks rather than 12 weeks of therapy was cost-effective and allowed for 50% more individuals to attain SVR given a constrained budget among both black and nonblack individuals, and our results suggested that NS5A resistance testing is cost-effective. Eight-week therapy provides good value, and wider adoption of shorter treatment could allow more individuals to attain SVR on the population level given a constrained budget. This analysis provides an evidence base to justify movement of the 8-week regimen to the preferred regimen list for appropriate patients in the HCV treatment guidelines and suggests expanding that recommendation to black patients in settings where cost and relapse trade-offs are considered.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm's performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
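The three mixing models above differ only in their constraints. As an illustration (not the authors' implementation), a fully constrained unmixing of a single low-resolution pixel can be sketched with SciPy's nonnegative least squares, enforcing the sum-to-one constraint through a heavily weighted row of ones; the endmember spectra below are toy values:

```python
import numpy as np
from scipy.optimize import nnls


def fully_constrained_unmix(E, pixel, weight=1e3):
    """Fully constrained linear unmixing: fractions are nonnegative and
    sum to one. Sum-to-one is enforced by appending a heavily weighted
    row of ones to the least-squares system; nnls enforces f >= 0."""
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions


# Toy endmember spectra: 3 bands, 2 materials (columns)
E = np.array([[0.2, 0.8],
              [0.6, 0.3],
              [0.9, 0.1]])
# A pixel that is an exact 25/75 mixture of the two endmembers
pixel = 0.25 * E[:, 0] + 0.75 * E[:, 1]
f = fully_constrained_unmix(E, pixel)  # ≈ [0.25, 0.75]
```

For the unconstrained model one would instead use an ordinary least-squares solve, and for the partially constrained model keep only the sum-to-one row while dropping the nonnegativity requirement.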
Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models
NASA Astrophysics Data System (ADS)
Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.
2017-06-01
The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, the Hydrogen Epoch of Reionization Array, and the Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
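The MCMC constraint step can be illustrated with a minimal random-walk Metropolis sketch. The mapping below from an ionizing-efficiency parameter zeta to an optical depth is a hypothetical stand-in for the semi-numerical simulator, and all numbers (the "observed" value, its uncertainty, the prior range) are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observation": an optical-depth-like constraint (illustrative values)
tau_obs, sigma = 0.055, 0.009


def model_tau(zeta):
    # Hypothetical stand-in for the simulator: maps an ionizing-efficiency
    # parameter zeta to a predicted optical depth.
    return 0.03 + 0.005 * zeta


def log_post(zeta):
    if not 0.0 < zeta < 20.0:   # flat prior on (0, 20)
        return -np.inf
    return -0.5 * ((model_tau(zeta) - tau_obs) / sigma) ** 2


# Random-walk Metropolis sampler
zeta, lp = 5.0, log_post(5.0)
chain = []
for _ in range(20000):
    prop = zeta + rng.normal(0.0, 2.0)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        zeta, lp = prop, lp_prop
    chain.append(zeta)
posterior = np.array(chain[4000:])            # discard burn-in
```

With this toy likelihood the posterior on zeta concentrates where the modeled optical depth matches the constraint; in the paper's setting each likelihood evaluation would instead run the semi-numerical model against all three observational data sets.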
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.
Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients).
Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses, including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population-derived objectives.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
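A perturbed-parameter ensemble of the kind described can be generated with SciPy's Latin hypercube sampler. The two parameters, their ranges, and the skill score below are assumptions for illustration, not LPX-Bern's actual configuration:

```python
import numpy as np
from scipy.stats import qmc

# Two hypothetical uncertain DGVM parameters with assumed prior ranges
low, high = [0.1, 200.0], [0.9, 400.0]

sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=1000)        # 1000 members, stratified in [0, 1)^2
params = qmc.scale(unit, low, high)  # map to the physical ranges


# Each member would drive one model run; a skill score against
# observations (a stand-in function here) then weights the ensemble.
def skill(p):
    return np.exp(-((p[:, 0] - 0.5) ** 2) / 0.02)


weights = skill(params)
weights /= weights.sum()
```

LHS guarantees that each parameter's range is covered with exactly one sample per stratum, which is why a 1000-member ensemble can probe a high-dimensional parameter space far more evenly than simple random sampling.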
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
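The proposed scheme, weighting each model's projection by the likelihood of its response under an observationally constrained Transient Climate Response (TCR) estimate rather than assuming a linear past/future relationship, can be sketched as follows; the ensemble values and the Gaussian constraint below are invented for illustration:

```python
import numpy as np

# Hypothetical ensemble: each model's TCR (K) and its projected
# 2081-2100 warming (K); all values are illustrative only.
tcr = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6])
warming = np.array([1.0, 1.3, 1.6, 1.9, 2.2, 2.5])

# Observationally constrained TCR estimate, assumed Gaussian
tcr_obs, tcr_sigma = 1.6, 0.3

# Weight each model by the likelihood of its TCR under the constraint
weights = np.exp(-0.5 * ((tcr - tcr_obs) / tcr_sigma) ** 2)
weights /= weights.sum()

weighted_mean = np.sum(weights * warming)
unweighted_mean = warming.mean()  # weighted_mean falls below this here
```

Because the assumed observational TCR lies below the ensemble mean, the weighted projection is pulled downward, mirroring the paper's finding that the observationally constrained warming range sits somewhat below the unweighted one.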
Effects of constrained arm swing on vertical center of mass displacement during walking.
Yang, Hyung Suk; Atkins, Lee T; Jensen, Daniel B; James, C Roger
2015-10-01
The purpose of this study was to determine the effects of constraining arm swing on the vertical displacement of the body's center of mass (COM) during treadmill walking and examine several common gait variables that may account for or mask differences in the body's COM motion with and without arm swing. Participants included 20 healthy individuals (10 male, 10 female; age: 27.8 ± 6.8 years). The body's COM displacement, first and second peak vertical ground reaction forces (VGRFs), and lowest VGRF during mid-stance, peak summed bilateral VGRF, lower extremity sagittal joint angles, stride length, and foot contact time were measured with and without arm swing during walking at 1.34 m/s. The body's COM displacement was greater with the arms constrained (arm swing: 4.1 ± 1.2 cm, arm constrained: 4.9 ± 1.2 cm, p < 0.001). Ground reaction force data indicated that the COM displacement increased in both double limb and single limb stance. However, kinematic patterns visually appeared similar between conditions. Shortened stride length and foot contact time also were observed, although these do not seem to account for the increased COM displacement. However, a change in arm COM acceleration might have contributed to the difference. These findings indicate that a change in arm swing causes differences in vertical COM displacement, which could increase energy expenditure. Copyright © 2015 Elsevier B.V. All rights reserved.
Bending it like Beckham: how to visually fool the goalkeeper.
Dessing, Joost C; Craig, Cathy M
2010-10-06
As bending free-kicks become the norm in modern-day soccer, implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors result in less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.
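The heading-direction bias can be reproduced with a toy version of such an online-control model, assuming a constant spin-induced lateral acceleration and a keeper who linearly extrapolates the ball's current heading (all parameter values below are arbitrary, not taken from the study):

```python
import numpy as np

# Toy 2-D free-kick: spin adds a constant lateral acceleration a (m/s^2)
# over a flight time T (s); values are assumed for illustration.
a, T = 8.0, 1.0
t = np.linspace(0.0, T, 101)

y_ball = 0.5 * a * t ** 2     # true lateral ball position over time
vy = a * t                    # current lateral velocity
y_final = 0.5 * a * T ** 2    # where the ball actually crosses the line

# A keeper guided by the ball's *current* heading extrapolates linearly,
# so the predicted crossing point lags behind the true one early on.
y_pred = y_ball + vy * (T - t)
bias = y_final - y_pred       # = 0.5 * a * (T - t)**2, shrinking to 0
```

The prediction error is largest at kick-off (here 0.5·a·T² = 4 m) and vanishes only as the ball arrives, which is consistent with the reported biases toward the initial heading and the reduced time left to reach the true crossing point.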
Goal orientation and self-efficacy in relation to memory in adulthood
Hastings, Erin C.; West, Robin L.
2011-01-01
The achievement goal framework (Dweck, 1986) has been well established in children and college students, but has rarely been examined empirically with older adults. The current study, including younger and older adults, examined the effects of memory self-efficacy, learning goals (focusing on skill mastery over time) and performance goals (focusing on performance outcome evaluations) on memory performance. Questionnaires measured memory self-efficacy and general orientation toward learning and performance goals; free and cued recall were assessed in a subsequent telephone interview. As expected, age was negatively related and education was positively related to memory self-efficacy, and memory self-efficacy was positively related to memory, in a structural equation model. Age was also negatively related to memory performance. Results supported the positive impact of learning goals and the negative impact of performance goals on memory self-efficacy. There was no significant direct effect of learning or performance goals on memory performance; their impact occurred via their effect on memory self-efficacy. The present study supports past research suggesting that learning goals are beneficial, and performance goals are maladaptive, for self-efficacy and learning, and validates the achievement goal framework in a sample including older adults. PMID:21728891
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaulme, P.; McKeever, J.; Rawls, M. L.
2013-04-10
Red giant stars are proving to be an incredible source of information for testing models of stellar evolution, as asteroseismology has opened up a window into their interiors. Such insights are a direct result of the unprecedented data from space missions CoRoT and Kepler as well as recent theoretical advances. Eclipsing binaries are also fundamental astrophysical objects, and when coupled with asteroseismology, binaries provide two independent methods to obtain masses and radii and exciting opportunities to develop highly constrained stellar models. The possibility of discovering pulsating red giants in eclipsing binary systems is therefore an important goal that could potentially offer very robust characterization of these systems. Until recently, only one case had been discovered with Kepler. We cross-correlate the detected red giant and eclipsing-binary catalogs from Kepler data to find possible candidate systems. Light-curve modeling and mean properties measured from asteroseismology are combined to yield specific measurements of periods, masses, radii, temperatures, eclipse timing variations, core rotation rates, and red giant evolutionary state. After using three different techniques to eliminate false positives, out of the 70 systems common to the red giant and eclipsing-binary catalogs we find 13 strong candidates (12 previously unknown) to be eclipsing binaries, one to be a non-eclipsing binary with tidally induced oscillations, and 10 more to be hierarchical triple systems, all of which include a pulsating red giant. The systems span a range of orbital eccentricities, periods, and spectral types F, G, K, and M for the companion of the red giant. One case even suggests an eclipsing binary composed of two red giant stars and another of a red giant with a δ Scuti star.
The discovery of multiple pulsating red giants in eclipsing binaries provides an exciting test bed for precise astrophysical modeling, and follow-up spectroscopic observations of many of the candidate systems are encouraged. The resulting highly constrained stellar parameters will allow, for example, the exploration of how binary tidal interactions affect pulsations when compared to the single-star case.
Goal Translation: How To Create a Results-Focused Organizational Culture.
ERIC Educational Resources Information Center
Mourier, Pierre
2000-01-01
Presents a model for changing human and organizational behavior. Highlights include behavioral dynamics; expectations; alignment; organizational structure; organizational culture; individual skills and training; leadership; management systems; developing corporate-level goals; communicating goals to the organization; and developing employee goals.…
Method of assembling an electric power apparatus
Rinehart, Lawrence E [Lake Oswego, OR]; Romero, Guillermo L [Phoenix, AZ]
2007-05-03
A method of assembling and providing an electric power apparatus. The method uses a heat resistant housing having a structure adapted to accommodate and retain a power circuit card and also including a bracket adapted to accommodate and constrain a rigid conductive member. A power circuit card having an electrical terminal is placed into the housing and a rigid conductive member into the bracket. The rigid conductive member is flow soldered to the electrical terminal, thereby exposing the heat resistant housing to heat and creating a solder bond. Finally, the rigid conductive member is affirmatively connected to the housing. The bracket constrains the rigid conductive member so that the act of affirmatively connecting does not weaken the solder bond.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs between economy and reliability, a primary and a secondary one, in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
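The LOLP quantity whose evaluation makes the scheduling problem intractable can be illustrated on a tiny system, where exact enumeration of unit outage states is still feasible; the unit capacities, forced outage rates, and load below are invented:

```python
import itertools

# Toy system: (capacity in MW, forced outage rate) for three units
units = [(100, 0.02), (80, 0.04), (50, 0.05)]
load = 150.0  # MW, illustrative

# LOLP = probability that available capacity falls short of the load,
# computed here by exact enumeration of all unit up/down states.
lolp = 0.0
for state in itertools.product([True, False], repeat=len(units)):
    prob, capacity = 1.0, 0.0
    for (cap, q), up in zip(units, state):
        prob *= (1.0 - q) if up else q   # availability vs outage probability
        capacity += cap if up else 0.0
    if capacity < load:
        lolp += prob
# lolp ≈ 0.02196 for this system
```

Enumeration scales as 2^N in the number of units, which is exactly why realistic UC models resort to the simplified LOLP formulations the paper analyzes.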
On the formation of granulites
Bohlen, S.R.
1991-01-01
The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author
Characterizing Uranus with an Ice giant Planetary Origins Probe (Ice-POP)
NASA Technical Reports Server (NTRS)
Marley, Mark S.; Fortney, Jonathan; Nettelmann, Nadine; Zahnle, Kevin J.
2013-01-01
We now know from studies of planetary transits and microlensing that Neptune-mass planets are ubiquitous and may be the most common class of planets in the Galaxy. As such it is crucial that we understand the formation and evolution of the ice giant planets in our own solar system so that we can better understand planet formation throughout the Galaxy. An entry probe mission to Uranus would help accomplish this goal. In fact the Planetary Decadal Survey recommended a Uranus orbiter with entry probe but did not explore in detail the specifications for the entry probe. NASA Ames is currently studying thermal protection system requirements for such a mission and this has led to questions regarding the minimum interesting science payload of such an entry probe. The single most important in-situ measurement for an ice giant entry probe is a measurement of atmospheric composition. For Uranus this would specifically include the methane and noble gas abundances. An in situ measurement of the methane abundance, from below the methane cloud, would constrain the atmospheric carbon abundance, which is believed to be roughly 30 to 50 times solar. There are hints from the transiting planets that extrasolar ice giants show comparable or even greater enhancements of heavy elements compared to their primary stars. However the origin of this carbon enhancement is controversial. Is Uranus a "failed core" of a larger gas giant or was the atmosphere enhanced by accretion of icy planetesimals? Constraining atmospheric abundances of C and perhaps S or even N from below 5 bars would provide badly needed data to address such issues. A measurement of the N abundance would provide clues on the origin of the planetesimals that formed Uranus. Low N-abundance indicates planetesimals from 'warmer' regions where N was mainly in the form of NH3, whereas a strong enrichment could indicate planetesimals or cometary material from the colder outer regions of the nebula.
Furthermore CO and HCN have been detected in Neptune but not in Uranus. A measurement of the abundance of either would constrain the source mechanisms for these molecules (exogenic or internal). A major surprise from the Galileo Entry Probe was that the heavier noble gases Ar, Kr, and Xe are enhanced in Jupiter's atmosphere at a level comparable to what was seen for the chemically active volatiles N, C, and S. It had been generally expected that Ar, Kr, and Xe would be present in solar abundances, as all were expected to accrete with hydrogen during the gravitational capture of nebular gases. Enhanced abundances of Ar, Kr, and Xe are equivalent to saying that these noble gases have been separated from hydrogen. There are several mechanisms that could accomplish this but these hypotheses require further testing. Measurement of noble gas abundances in an ice giant would constrain the planetary formation and nebular mechanisms responsible for this enhancement. Standard three-layer models of Uranus find that the outer, predominantly H/He layer of Uranus does not reach pressures high enough (approximately 1 Mbar) for H2 to transition to liquid metallic hydrogen. However, valid models can also be constructed with a smaller intermediate water-rich layer, with hydrogen then reaching the metallic hydrogen phase. If this occurs, He should phase separate from the hydrogen and "rain out," taking along a substantial abundance of Ne, as suggested for Jupiter (and likely also for Saturn). Hence He and Ne depletions could be probes of the planet's structure in the much deeper interior. A determination of Uranus' atmospheric abundances, particularly of the noble gasses, is thus critical to understanding the formation of Uranus, and giant planets in general. These measurements can only be performed with an entry probe.
The second key measurement would be a temperature-pressure sounding to provide ground truth for remote measurements of atmospheric temperature and composition and to constrain the internal heat flow. This would also establish that the methane abundance measurements have indeed been made below any possible methane cloud. Finally an ultra stable oscillator would measure wind speeds and constrain atmospheric dynamics. In our presentation we will discuss the importance of all of these measurements and argue that an entry probe is a crucial component of any ice giant mission.
Louisiana's 2017 Master Plan for a Sustainable Coast
NASA Astrophysics Data System (ADS)
Haase, B.
2017-12-01
The Coastal Protection and Restoration Authority is charged with coordinating restoration and protection investments through the development and implementation of Louisiana's Comprehensive Master Plan for a Sustainable Coast. The first master plan was submitted to the Louisiana Legislature in 2007 and is mandated to be updated every five years. The plan's objectives are to reduce economic losses from flooding, promote sustainability by harnessing natural processes, provide habitats for commercial and recreational activities, sustain cultural heritage, and promote a viable working coast. Two goals drive decision making about the appropriate suite of restoration and protection projects to include in the Plan: restore and maintain Louisiana's wetlands and provide flood protection for coastal Louisiana's citizens. As part of the decision-making process, a wide range of additional metrics are used to evaluate the complex, competing needs of communities, industries, navigation, and fisheries. The master plan decision-making process includes the identification of individual protection and restoration projects that are evaluated with landscape, storm surge, and risk assessment models and then ranked by how well they perform over time across the set of decision drivers and metrics. High-performing projects are assembled into alternatives constrained by available funding and river resources. The planning process is grounded not only in extensive scientific analysis but also in interdisciplinary collaboration among scientists, engineers, planners, community advocates, and coastal stakeholders, which creates the long-term dialogue needed for complex environmental planning decisions. It is through this collaboration that recommended alternatives are reviewed and modified to develop the final Plan. Keywords: alternative formulation, comprehensive planning, ecosystem restoration, flood risk reduction, stakeholder engagement
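Assembling ranked projects into funding-constrained alternatives is, at its core, a constrained selection problem. A minimal sketch, using a single score per project and a simple greedy rule (the actual master plan process uses multi-metric model outputs and river-resource constraints, not this simplification; all names and numbers below are hypothetical):

```python
def select_projects(projects, budget):
    """Greedy selection by score-per-dollar under a budget cap.

    `projects` is a list of (name, score, cost) tuples. Projects are
    considered in descending order of score/cost and added while the
    cumulative cost stays within the budget.
    """
    chosen, spent = [], 0.0
    for name, score, cost in sorted(projects, key=lambda p: p[1] / p[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical candidates: (name, performance score, cost in $B)
candidates = [("Marsh creation A", 9.0, 3.0),
              ("Levee upgrade B", 7.0, 4.0),
              ("Diversion C", 8.0, 2.0)]
print(select_projects(candidates, budget=5.0))  # → ['Diversion C', 'Marsh creation A']
```

A greedy heuristic is only a rough stand-in; exact alternatives formulation would be closer to a knapsack or portfolio optimization with multiple objectives and constraints.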
Solomon, Keith R; Wilks, Martin F; Bachman, Ammie; Boobis, Alan; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard; Embry, Michelle R
2016-11-01
When the human health risk assessment/risk management paradigm was developed, it did not explicitly include a "problem formulation" phase. The concept of problem formulation was first introduced in the context of ecological risk assessment (ERA) for the pragmatic reason of constraining and focusing ERAs on the key questions. However, this need also exists for human health risk assessment, particularly for cumulative risk assessment (CRA), because of its complexity. CRA encompasses the combined threats to health from exposure via all relevant routes to multiple stressors, including biological, chemical, physical, and psychosocial stressors. As part of the HESI Risk Assessment in the 21st Century (RISK21) Project, a framework for CRA was developed in which problem formulation plays a critical role. The focus of this effort is primarily on chemical CRA (i.e., two or more chemicals), with subsequent consideration of non-chemical stressors, defined as "modulating factors" (ModFs). Problem formulation is a systematic approach that identifies all factors critical to a specific risk assessment and considers the purpose of the assessment, the scope and depth of the necessary analysis, the analytical approach, available resources and outcomes, and the overall risk management goal. There are numerous considerations that are specific to multiple stressors, and proper problem formulation can help to focus a CRA on the key factors in order to optimize resources. As part of the problem formulation, conceptual models for exposures and responses can be developed that address these factors, such as temporal relationships between stressors and consideration of the appropriate ModFs.
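The elements that problem formulation fixes up front — purpose, scope, stressors, ModFs, and the risk management goal — can be thought of as a structured record that the rest of the assessment then works from. A minimal sketch (field names and example values are illustrative, not RISK21 terminology):

```python
from dataclasses import dataclass, field

@dataclass
class ProblemFormulation:
    """Minimal record of the elements a CRA problem formulation
    establishes before analysis begins (hypothetical schema)."""
    purpose: str
    scope: str
    chemical_stressors: list = field(default_factory=list)
    modulating_factors: list = field(default_factory=list)  # non-chemical "ModFs"
    risk_management_goal: str = ""

# Hypothetical example: a two-chemical CRA with one ModF considered.
pf = ProblemFormulation(
    purpose="Screening-level cumulative assessment",
    scope="Two co-occurring chemicals, dietary exposure route",
    chemical_stressors=["chemical A", "chemical B"],
    modulating_factors=["nutritional status"],
    risk_management_goal="Prioritize for refined assessment")
print(len(pf.chemical_stressors))  # → 2
```

Writing the formulation down as an explicit structure reflects the paper's point: scoping decisions made here (which stressors, which ModFs, what depth) determine where analytical resources are spent.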