Key Parameters for Operator Diagnosis of BWR Plant Condition during a Severe Accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Dwight A.; Poore, III, Willis P.
2015-01-01
The objective of this research is to examine the key information needed from nuclear power plant instrumentation to guide severe accident management and mitigation for boiling water reactor (BWR) designs (specifically, a BWR/4-Mark I), estimate environmental conditions that the instrumentation will experience during a severe accident, and identify potential gaps in existing instrumentation that may require further research and development. This report notes the key parameters that instrumentation needs to measure to help operators respond to severe accidents. A follow-up report will assess severe accident environmental conditions as estimated by severe accident simulation model analysis for a specific US BWR/4-Mark I plant for those instrumentation systems considered most important for accident management purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca
2011-11-28
This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.
FIELD MEASUREMENT OF DISSOLVED OXYGEN: A COMPARISON OF TECHNIQUES
The measurement and interpretation of geochemical redox parameters are key components of ground water remedial investigations. Dissolved oxygen (DO) is perhaps the most robust geochemical parameter in redox characterization; however, recent work has indicated a need for proper da...
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181
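The paper's exact construction (Hilbert-curve scrambling of the initial keys, modulo arithmetic and diffusion) is not reproduced here; the sketch below only illustrates the underlying idea of turning Julia-set parameters into a keystream and combining it with image bytes. The function names, the constants c and z0, and the XOR step are illustrative assumptions, not the authors' algorithm.

    # Minimal sketch: derive a byte keystream from the chaotic orbit of z -> z^2 + c
    # and XOR it with the image. Parameters c and z0 act as the secret key.
    import numpy as np

    def julia_keystream(c: complex, z0: complex, length: int) -> np.ndarray:
        """Generate a pseudo-random byte sequence from Julia-set iteration."""
        keys = np.empty(length, dtype=np.uint8)
        z = z0
        for i in range(length):
            z = z * z + c
            if abs(z) > 2.0:
                z = z / abs(z)  # renormalise to keep the orbit bounded (assumption of this sketch)
            keys[i] = int(abs(z.real) * 1e6) % 256  # map the real part to a byte
        return keys

    def encrypt(image: np.ndarray, c: complex, z0: complex) -> np.ndarray:
        flat = image.astype(np.uint8).ravel()
        ks = julia_keystream(c, z0, flat.size)
        return (flat ^ ks).reshape(image.shape)  # XOR is its own inverse, so decrypt = encrypt

    # Toy 4x4 "image" round trip
    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    enc = encrypt(img, c=-0.70176 - 0.3842j, z0=0.1 + 0.1j)
    dec = encrypt(enc, c=-0.70176 - 0.3842j, z0=0.1 + 0.1j)
    assert np.array_equal(img, dec)

Because the keystream depends sensitively on c and z0, a tiny change in either parameter yields a completely different cipher image, which is the property the abstract attributes to the chaotic Julia-set dynamics.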
Management of physical health in patients with schizophrenia: practical recommendations.
Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U
2010-06-01
Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang
To realise Bluetooth functionality and allow information to be tracked effectively during production, vehicle Bluetooth hands-free devices need to be loaded with key parameters such as the Bluetooth address, CVC license and base plate number. The aim is therefore to find simple and effective methods for downloading these parameters to each vehicle Bluetooth hands-free device, and to control and record their use. In this paper, a Bluetooth Serial Peripheral Interface (SPI) programmer device is used to convert the parallel port to SPI. The first step in downloading parameters is to simulate SPI with the parallel port, operating the port in accordance with the SPI timing; the next step is to implement the SPI data transceiver functions according to the selected programming parameters. With this new method, parameter downloading is fast and accurate, fully meeting the production requirements for vehicle Bluetooth hands-free devices, and it has played a significant role on the production line.
Measuring Two Key Parameters of H3 Color Centers in Diamond
NASA Technical Reports Server (NTRS)
Roberts, W. Thomas
2005-01-01
A method of measuring two key parameters of H3 color centers in diamond has been created as part of a continuing effort to develop tunable, continuous-wave, visible lasers that would utilize diamond as the lasing medium. (An H3 color center in a diamond crystal lattice comprises two nitrogen atoms substituted for two carbon atoms bonded to a third carbon atom. H3 color centers can be induced artificially; they also occur naturally. If present in sufficient density, they impart a yellow hue.) The method may also be applicable to the corresponding parameters of other candidate lasing media. One of the parameters is the number density of color centers, which is needed for designing an efficient laser. The other parameter is an optical-absorption cross section, which, as explained below, is needed for determining the number density. The present method represents an improvement over prior methods in which optical-absorption measurements have been used to determine absorption cross sections or number densities. Heretofore, in order to determine a number density from such measurements, it has been necessary to know the applicable absorption cross section; alternatively, to determine the absorption cross section from such measurements, it has been necessary to know the number density. If, as in this case, both the number density and the absorption cross section are initially unknown, then it is impossible to determine either parameter in the absence of additional information.
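As a hedged illustration of the dilemma described above (standard absorption physics, not an equation quoted from the report), a transmission measurement through a sample of thickness $L$ obeys

$\dfrac{I}{I_0} = e^{-\sigma N L} \quad\Longrightarrow\quad \ln\!\left(\dfrac{I_0}{I}\right) = \sigma N L,$

so the measured absorbance fixes only the product $\sigma N$; without independent knowledge of either the absorption cross section $\sigma$ or the number density $N$, the two cannot be separated from a single absorption measurement, which is why additional information is required.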
Building Blocks for Transport-Class Hybrid and Turboelectric Vehicles
NASA Technical Reports Server (NTRS)
Jankovsky, Amy; Bowman, Cheryl; Jansen, Ralph
2016-01-01
NASA has been investing in research efforts to define potential vehicles that use hybrid and turboelectric propulsion to enable savings in fuel burn and carbon usage. This paper overviews the fundamental building blocks that have been derived from those studies and details what key performance parameters have been defined, what key ground and flight tests need to occur, and highlights progress toward each.
Overview of Characterization Techniques for High Speed Crystal Growth
NASA Technical Reports Server (NTRS)
Ravi, K. V.
1984-01-01
Features of characterization requirements for crystals, devices and completed products are discussed. Key parameters of interest in semiconductor processing are presented. Characterization as it applies to process control, diagnostics and research needs is discussed with appropriate examples.
Application of Unmanned Aircraft System Instrumentation to Study Coastal Geochemistry
NASA Astrophysics Data System (ADS)
Coffin, R. B.; Osburn, C. L.; Smith, J. P.
2016-02-01
Coastal evaluation of key geochemical cycles requires spatially thorough data to address diverse topics. In many field studies, fixed-station data from ship operations do not provide a complete understanding of key research questions. In complicated systems where physical, chemical and biological parameters must be integrated, data taken from research vessels need to be interpreted across large spatial areas. New Unmanned Aircraft System (UAS) instrumentation, coupled with shipboard data, can provide the spatial coverage needed for a thorough evaluation of coastal processes. This presentation will provide field data related to UAS application in two diverse environments. One study focuses on the flux of carbon dioxide and methane to the atmosphere from Alaskan Arctic tundra and the shallow Beaufort Sea coastal region; gas chemistry from samples is used to predict the relative fluxes. A second study applies bio-optical analyses to differentiate between dissolved organic carbon (DOC) and lignin in the Gulf of Mexico coastal water column. This range of parameters in diverse ecosystems was selected to show the current capability of UAS and their potential for addressing large-scale questions about climate change and carbon cycling in coastal waters.
ESR paper on structured reporting in radiology.
2018-02-01
Structured reporting is emerging as a key element of optimising radiology's contribution to patient outcomes and ensuring the value of radiologists' work. It is being developed and supported by many national and international radiology societies, based on the recognised need to use uniform language and structure to accurately describe radiology findings. Standardisation of report structures ensures that all relevant areas are addressed. Standardisation of terminology prevents ambiguity in reports and facilitates comparability of reports. The use of key data elements and quantified parameters in structured reports ("radiomics") permits automatic functions (e.g. TNM staging), potential integration with other clinical parameters (e.g. laboratory results), data sharing (e.g. registries, biobanks) and data mining for research, teaching and other purposes. This article outlines the requirements for a successful structured reporting strategy (definition of content and structure, standard terminologies, tools and protocols). A potential implementation strategy is outlined. Moving from conventional prose reports to structured reporting is endorsed as a positive development, and must be an international effort, with international design and adoption of structured reporting templates that can be translated and adapted in local environments as needed. Industry involvement is key to success, based on international data standards and guidelines. • Standardisation of radiology report structure ensures completeness and comparability of reports. • Use of standardised language in reports minimises ambiguity. • Structured reporting facilitates automatic functions, integration with other clinical parameters and data sharing. • International and inter-society cooperation is key to developing successful structured report templates. • Integration with industry providers of radiology-reporting software is also crucial.
NASA Astrophysics Data System (ADS)
Tiwari, Harinarayan; Sharma, Nayan
2017-05-01
This research paper focuses on the need for turbulence studies, instruments reliable enough to capture turbulence, different turbulence parameters, and advanced methodology that can decompose various turbulence structures at different levels near hydraulic structures. Small-scale turbulence research has valid prospects in open channel flow. The relevance of the study is amplified as any hydraulic structure introduced in the channel disturbs the natural flow and creates discontinuity. To recover this discontinuity, the piano key weir (PKW) might be used with sloped keys. Constraints of empirical results in the vicinity of the PKW necessitate extensive laboratory experiments with fair and reliable instrumentation techniques. Using principal component analysis, the acoustic Doppler velocimeter was established to be best suited, within a range of limitations. Wavelet analysis is proposed to decompose the underlying turbulence structure in a better way.
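As a hedged sketch of the proposed wavelet decomposition (the wavelet family, decomposition level, sampling rate and synthetic velocity record below are assumptions, not values from the study), a streamwise velocity series from an acoustic Doppler velocimeter could be split into scale-by-scale contributions with PyWavelets:

    # Multi-level discrete wavelet decomposition of an ADV velocity record (illustrative only)
    import numpy as np
    import pywt

    fs = 25.0                                    # assumed ADV sampling rate [Hz]
    t = np.arange(0, 60, 1 / fs)                 # 60 s record
    u = 0.5 + 0.05 * np.random.randn(t.size)     # synthetic streamwise velocity [m/s]

    coeffs = pywt.wavedec(u - u.mean(), wavelet="db4", level=5)
    # coeffs[0] is the coarsest approximation; coeffs[1:] are detail coefficients
    # from the largest to the smallest resolved scale.
    for i, d in enumerate(coeffs[1:], start=1):
        print(f"detail level {i}: variance = {np.var(d):.3e}")

Comparing the variance carried by each detail level is one simple way to see which turbulence scales dominate near the structure.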
Needs for Robotic Assessments of Nuclear Disasters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor Walker; Derek Wadsworth
Following the nuclear disaster at the Fukushima nuclear reactor plant in Japan, the need for systems which can assist in dynamic high-radiation environments such as nuclear incidents has become more apparent. The INL participated in delivering robotic technologies to Japan and has identified key components which are needed for success and obstacles to their deployment. In addition, we are proposing new work and methods to improve assessments and reactions to such events in the future. Robotics needs in disaster situations include phases such as assessment, remediation, and recovery. Our particular interest is in the initial assessment activities. In assessment we need collection of environmental parameters, determination of conditions, and physical sample collection. Each phase would require key tools and efforts to develop. This includes study of necessary sensors and their deployment methods, the effects of radiation on sensors and deployment, and the development of training and execution systems.
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve these simulations are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
Gottlieb, Sami L; Giersing, Birgitte; Boily, Marie-Claude; Chesson, Harrell; Looker, Katharine J; Schiffer, Joshua; Spicknall, Ian; Hutubessy, Raymond; Broutet, Nathalie
2017-06-21
Development of a vaccine against herpes simplex virus (HSV) is an important goal for global sexual and reproductive health. In order to more precisely define the health and economic burden of HSV infection and the theoretical impact and cost-effectiveness of an HSV vaccine, in 2015 the World Health Organization convened an expert consultation meeting on HSV vaccine impact modelling. The experts reviewed existing model-based estimates and dynamic models of HSV infection to outline critical future modelling needs to inform development of a comprehensive business case and preferred product characteristics for an HSV vaccine. This article summarizes key findings and discussions from the meeting on modelling needs related to HSV burden, costs, and vaccine impact, essential data needs to carry out those models, and important model components and parameters. Copyright © 2017. Published by Elsevier Ltd.
Preliminary studies of cotton non-lint content identification by near-infrared spectroscopy
USDA-ARS?s Scientific Manuscript database
The high demand for cotton production worldwide has presented a need for its standardized classification. There currently exist trained classers and instrumentation to distinguish key cotton quality parameters, such as some trash types and content. However, it is of interest to develop a universal...
Chronic fish toxicity is a key parameter for hazard classification and environmental risk assessment of chemicals, and the OECD 210 fish early life-stage (FELS) test is the primary guideline test used for various international regulatory programs. There exists a need to develop ...
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
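For context, a commonly quoted asymptotic lower bound on the MDI-QKD secret key rate in the decoy-state literature (the paper additionally applies a rigorous finite-key analysis on top of such a bound; this expression is not quoted from it) is

$R \;\ge\; P_{11}^{Z}\, Y_{11}^{Z}\left[1 - H_2\!\left(e_{11}^{X}\right)\right] \;-\; Q_{\mu\mu}^{Z}\, f\, H_2\!\left(E_{\mu\mu}^{Z}\right),$

where $Y_{11}$ and $e_{11}$ are the single-photon yield and phase error rate estimated from the decoy-state parameters, $Q_{\mu\mu}$ and $E_{\mu\mu}$ are the measured gain and error rate of the signal pulses, $f$ is the error-correction efficiency, and $H_2$ is the binary entropy function. Optimizing the decoy intensities tightens the bounds on $Y_{11}$ and $e_{11}$ and hence the achievable rate.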
ATR NSUF Instrumentation Enhancement Efforts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joy L. Rempe; Mitchell K. Meyer; Darrell L. Knudson
A key component of the Advanced Test Reactor (ATR) National Scientific User Facility (NSUF) effort is to expand instrumentation available to users conducting irradiation tests in this unique facility. In particular, development of sensors capable of providing real-time measurements of key irradiation parameters is emphasized because of their potential to increase data fidelity and reduce posttest examination costs. This paper describes the strategy for identifying new instrumentation needed for ATR irradiations and the program underway to develop and evaluate new sensors to address these needs. Accomplishments from this program are illustrated by describing new sensors now available to users of the ATR NSUF. In addition, progress is reported on current research efforts to provide improved in-pile instrumentation to users.
Developing Communities: Serving ACE through Tertiary Education
ERIC Educational Resources Information Center
Sofo, Francesco
2011-01-01
Purpose: The purpose of this paper is to review the focus and practice of Adult and Community Education (ACE) as well as its conceptualization and delivery and to suggest parameters for an approach based on excellence, a balanced scorecard and performance to meet community needs. Design/methodology/approach: The review examines key aspects of the…
Happiness Inequality: How Much Is Reasonable?
ERIC Educational Resources Information Center
Gandelman, Nestor; Porzecanski, Rafael
2013-01-01
We compute the Gini indexes for income, happiness and various simulated utility levels. Due to decreasing marginal utility of income, happiness inequality should be lower than income inequality. We find that happiness inequality is about half that of income inequality. To compute the utility levels we need to assume values for a key parameter that…
Air change rates (ACRs) and interzonal flows are key determinants of indoor air quality (IAQ) and building energy use. This paper characterizes ACRs and interzonal flows in 126 houses, and evaluates effects of these parameters on IAQ. ACRs measured using weeklong tracer measureme...
Curie-Montgolfiere Planetary Explorers
NASA Astrophysics Data System (ADS)
Taylor, Chris Y.; Hansen, Jeremiah
2007-01-01
Hot-air balloons, also known as Montgolfiere balloons, powered by heat from radioisotope decay are a potentially useful tool for exploring planetary atmospheres and augmenting the capabilities of other exploration technologies. This paper describes the physical equations and identifies the key engineering parameters that drive radioisotope-powered balloon performance. These parameters include envelope strength-to-weight, envelope thermal conductivity, heater power-to-weight, heater temperature, and balloon shape. The design space for these parameters is shown for varying atmospheric compositions to illustrate the performance needed to build functioning ``Curie-Montgolfiere'' balloons for various planetary atmospheres. Methods to ease the process of Curie-Montgolfiere conceptual design and sizing are also introduced.
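As a hedged, steady-state illustration of how those parameters interact (a simplified sketch, not the paper's equations), neutral buoyancy and an approximate envelope heat balance can be written as

$\left(\rho_a - \rho_g\right) V g \;\ge\; m_{sys}\, g, \qquad \rho_g \approx \rho_a \dfrac{T_a}{T_g}, \qquad P_{RPS} \;\approx\; \dfrac{k_{env}}{t_{env}}\, A \left(T_g - T_a\right),$

where $\rho_a$ and $\rho_g$ are the ambient and internal gas densities, $V$ and $A$ are the envelope volume and surface area, $m_{sys}$ is the suspended system mass, $T_g$ and $T_a$ are the gas and ambient temperatures, and $P_{RPS}$ is the radioisotope heater power lost through an envelope of thermal conductivity $k_{env}$ and thickness $t_{env}$. The atmospheric composition enters through $\rho_a$ and the gas properties, which is why the feasible design space shifts from planet to planet.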
System-level view of geospace dynamics: Challenges for high-latitude ground-based observations
NASA Astrophysics Data System (ADS)
Donovan, E.
2014-12-01
Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities such as indices that capture important aspects of the system level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes at key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, combined extent of coverage and resolution that falls short of what we need, and inadequate conjugate observations. In this talk, I present an overview of what we need for taking system level research to its next level, and how high latitude ground based observations can address these challenges.
A review of international pharmacy-based minor ailment services and proposed service design model.
Aly, Mariyam; García-Cárdenas, Victoria; Williams, Kylie; Benrimoj, Shalom I
2018-01-05
The need to consider sustainable healthcare solutions is essential. An innovative strategy used to promote minor ailment care is the utilisation of community pharmacists to deliver minor ailment services (MASs). Promoting higher levels of self-care can potentially reduce the strain on existing resources. To explore the features of international MASs, including their similarities and differences, and consider the essential elements to design a MAS model. A grey literature search strategy was completed in June 2017 to comply with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses standard. This included (1) Google/Yahoo! search engines, (2) targeted websites, and (3) contact with commissioning organisations. Executive summaries, tables of contents and title pages of documents were reviewed. Key characteristics of MASs were extracted and a MAS model was developed. A total of 147 publications were included in the review. Key service elements identified included eligibility, accessibility, staff involvement, and reimbursement systems. Several factors need to be considered when designing a MAS model, including contextualisation of the MAS to the market. Stakeholder engagement, service planning, governance, implementation and review have emerged as key aspects of a design model. MASs differ in their structural parameters. Consideration of these parameters is necessary when devising MAS aims and assessing outcomes to promote the sustainability and success of the service. Copyright © 2018 Elsevier Inc. All rights reserved.
Visualization of International Solar-Terrestrial Physics Program (ISTP) data
NASA Technical Reports Server (NTRS)
Kessel, Ramona L.; Candey, Robert M.; Hsieh, Syau-Yun W.; Kayser, Susan
1995-01-01
The International Solar-Terrestrial Physics Program (ISTP) is a multispacecraft, multinational program whose objective is to promote further understanding of the Earth's complex plasma environment. Extensive data sharing and data analysis will be needed to ensure the success of the overall ISTP program. For this reason, there has been a special emphasis on data standards throughout ISTP. One of the key tools will be the common data format (CDF), developed, maintained, and evolved at the National Space Science Data Center (NSSDC), with the set of ISTP implementation guidelines specially designed for space physics data sets by the Space Physics Data Facility (associated with the NSSDC). The ISTP guidelines were developed to facilitate searching, plotting, merging, and subsetting of data sets. We focus here on the plotting application. A prototype software package was developed to plot key parameter (KP) data from the ISTP program at the Science Planning and Operations Facility (SPOF). The ISTP Key Parameter Visualization Tool is based on the Interactive Data Language (IDL) and is keyed to the ISTP guidelines, reading data stored in CDF. With the combination of CDF, the ISTP guidelines, and the visualization software, we can look forward to easier and more effective data sharing and use among ISTP scientists.
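The original visualization tool is IDL-based; as a hedged Python sketch of reading an ISTP-guideline key-parameter CDF (the file and variable names below are placeholders, and cdflib stands in for the IDL tooling), one might do:

    # Read one key-parameter variable from an ISTP-style CDF and plot it (illustrative only)
    import cdflib
    import matplotlib.pyplot as plt

    cdf = cdflib.CDF("wi_k0_mfi_19950101_v01.cdf")            # placeholder KP file name
    epoch = cdflib.cdfepoch.to_datetime(cdf.varget("Epoch"))  # ISTP time variable
    b_gsm = cdf.varget("BGSMc")                               # placeholder field variable

    plt.plot(epoch, b_gsm[:, 2])   # assumed: third component is Bz
    plt.xlabel("UT")
    plt.ylabel("Bz GSM (nT)")
    plt.show()

Because the ISTP guidelines fix the metadata conventions (time variables, labels, fill values), generic plotting code like this can be reused across instruments, which is the point the abstract makes about data sharing.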
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
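A minimal PyMC sketch of the third (hierarchical) formulation is given below, using synthetic data; the parameter names, priors and likelihood are illustrative assumptions, not the plankton food web model used in the study.

    # Hierarchical formulation: each dataset d has its own parameter mu_d drawn
    # from a shared hyper-distribution (mu0, tau). Illustration only.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(1)
    n_datasets, n_obs = 4, 30
    true_mu = rng.normal(1.0, 0.2, n_datasets)
    y = np.concatenate([rng.normal(m, 0.3, n_obs) for m in true_mu])
    idx = np.repeat(np.arange(n_datasets), n_obs)

    with pm.Model() as hier:
        mu0 = pm.Normal("mu0", 1.0, 1.0)          # hyperparameter: shared mean
        tau = pm.HalfNormal("tau", 0.5)           # hyperparameter: between-dataset spread
        mu_d = pm.Normal("mu_d", mu0, tau, shape=n_datasets)
        sigma = pm.HalfNormal("sigma", 1.0)
        pm.Normal("obs", mu_d[idx], sigma, observed=y)
        trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

Setting tau to zero would collapse this to the "global" formulation, while removing the shared hyperpriors gives the "separate" one; the sensitivity to the prior on tau mirrors the hyperparameter-prior caveat noted in the abstract.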
Aerodynamic Interaction between Delta Wing and Hemisphere-Cylinder in Supersonic Flow
NASA Astrophysics Data System (ADS)
Nishino, Atsuhiro; Ishikawa, Takahumi; Nakamura, Yoshiaki
As future space vehicles, Reusable Launch Vehicles (RLVs) need to be developed; there are two kinds of RLV: Single Stage To Orbit (SSTO) and Two Stage To Orbit (TSTO). In the latter case, the shock/shock interaction and shock/boundary layer interaction play a key role. In the present study, we focus on the supersonic flow field with aerodynamic interaction between a delta wing and a hemisphere-cylinder, which imitate a TSTO, where the clearance, h, between the delta wing and hemisphere-cylinder is a key parameter. As a result, complicated flow patterns were made clear, including separation bubbles.
Lodewyck, Jérôme; Debuisschert, Thierry; García-Patrón, Raúl; Tualle-Brouri, Rosa; Cerf, Nicolas J; Grangier, Philippe
2007-01-19
An intercept-resend attack on a continuous-variable quantum-key-distribution protocol is investigated experimentally. By varying the interception fraction, one can implement a family of attacks where the eavesdropper totally controls the channel parameters. In general, such attacks add excess noise in the channel, and may also result in non-Gaussian output distributions. We implement and characterize the measurements needed to detect these attacks, and evaluate experimentally the information rates available to the legitimate users and the eavesdropper. The results are consistent with the optimality of Gaussian attacks resulting from the security proofs.
AGS vertical beta function measurements for Run 15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, C.; Ahrens, L.; Huang, H.
2016-10-07
One key parameter for running the AGS efficiently is maintaining a low emittance. To measure emittance, one needs to measure the beta function throughout the cycle. This can be done by measuring the beta function at the ionization profile monitors (IPMs) in the AGS. This tech note delves into the motivation, the measurement, and some strides that were made throughout Run 15.
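As a hedged reminder of the standard relation involved (textbook beam optics, not quoted from the tech note), the RMS beam size measured by an IPM at location $s$, neglecting dispersion, is

$\sigma(s) = \sqrt{\varepsilon\, \beta(s)} \quad\Longrightarrow\quad \varepsilon = \dfrac{\sigma_{IPM}^{2}}{\beta_{IPM}},$

which is why the beta function at the IPM must be known throughout the cycle before a measured beam profile can be converted into an emittance.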
NASA Astrophysics Data System (ADS)
Price, M. A.; Murphy, A.; Butterfield, J.; McCool, R.; Fleck, R.
2011-05-01
The predictive methods currently used for material specification, component design and the development of manufacturing processes, need to evolve beyond the current `metal centric' state of the art, if advanced composites are to realise their potential in delivering sustainable transport solutions. There are however, significant technical challenges associated with this process. Deteriorating environmental, political, economic and social conditions across the globe have resulted in unprecedented pressures to improve the operational efficiency of the manufacturing sector generally and to change perceptions regarding the environmental credentials of transport systems in particular. There is a need to apply new technologies and develop new capabilities to ensure commercial sustainability in the face of twenty first century economic and climatic conditions as well as transport market demands. A major technology gap exists between design, analysis and manufacturing processes in both the OEMs, and the smaller companies that make up the SME based supply chain. As regulatory requirements align with environmental needs, manufacturers are increasingly responsible for the broader lifecycle aspects of vehicle performance. These include not only manufacture and supply but disposal and re-use or re-cycling. In order to make advances in the reduction of emissions coupled with improved economic efficiency through the provision of advanced lightweight vehicles, four key challenges are identified as follows: Material systems, Manufacturing systems, Integrated design methods using digital manufacturing tools and Validation systems. This paper presents a project which has been designed to address these four key issues, using at its core, a digital framework for the creation and management of key parameters related to the lifecycle performance of thermoplastic composite parts and structures. It aims to provide capability for the proposition, definition, evaluation and demonstration of advanced lightweight structures for new generation vehicles in the context of whole life performance parameters.
Broadcasting satellite feeder links - Characteristics and planning
NASA Technical Reports Server (NTRS)
Kiebler, J. W.
1982-01-01
The paper presents the results of recent studies by the Feeder Link Sub-Working Group of the FCC Advisory Committee for the 1983 Regional Administrative Radio Conference (RARC). These studies conclude that specification of a few key parameters will make feeder link planning relatively straightforward. Feeder links can be located anywhere within a country if satellite orbit locations are separated by 10 deg for adjacent service areas and key parameter values presented in the paper are adopted. Colocated satellites serving a common service area need special attention to attain sufficient isolation between a desired channel and its adjacent cross-polarized channels and alternate co-polarized channels. In addition to presenting planning conclusions by the Advisory Committee, the paper presents and analyzes actions of the International Radio Consultative Committee's Conference Planning Meeting (CPM) concerning feeder links. The CPM reached conclusions similar to, and compatible with, those of the Advisory Committee.
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
Measuring UV Curing Parameters of Commercial Photopolymers used in Additive Manufacturing.
Bennett, Joe
2017-12-01
A testing methodology was developed to expose photopolymer resins and measure the cured material to determine two key parameters related to the photopolymerization process: Ec (critical energy to initiate polymerization) and Dp (penetration depth of curing light). Five commercially available resins were evaluated under exposure from 365 nm and 405 nm light at varying power densities and energies. Three different methods for determining the thickness of the cured resin were evaluated. Caliper measurements, stylus profilometry, and confocal laser scanning microscopy showed similar results for hard materials while caliper measurement of a soft, elastomeric material proved inaccurate. Working curves for the five photopolymers showed unique behavior both within and among the resins as a function of curing light wavelength. Ec and Dp for the five resins showed variations as large as 10×. Variations of this magnitude, if unknown to the user and not controlled for, will clearly affect printed part quality. This points to the need for a standardized approach for determining and disseminating these, and perhaps, other key parameters.
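These two parameters enter the widely used stereolithography working-curve relation (standard in the photopolymer literature and consistent with, though not quoted from, this abstract):

$C_d = D_p \ln\!\left(\dfrac{E_{max}}{E_c}\right),$

where $C_d$ is the cured thickness and $E_{max}$ the applied exposure. Plotting $C_d$ against $\ln E_{max}$ for a series of exposures yields $D_p$ as the slope and $E_c$ as the exposure-axis intercept, which is essentially the information the measured working curves provide.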
NASA Astrophysics Data System (ADS)
Farhat, I. A. H.; Gale, E.; Alpha, C.; Isakovic, A. F.
2017-07-01
Optimizing the energy performance of Magnetic Tunnel Junctions (MTJs) is key to embedding Spin Transfer Torque-Random Access Memory (STT-RAM) in low-power circuits. Due to the complex interdependencies of the parameters and variables of the device operating energy, it is important to analyse the parameters with the most effective control of MTJ power. The impact of the threshold current density, Jco, on the energy, and the impact of HK on Jco, are studied analytically, following the expressions that stem from the Landau-Lifshitz-Gilbert-Slonczewski (LLGS-STT) model. In addition, the impact of other magnetic material parameters, such as Ms, and geometric parameters, such as tfree and λ, is discussed. A device modelling study was conducted to analyse the impact at the circuit level. Nano-magnetism simulation based on the NMAG package was conducted to analyse the impact of controlling HK on the switching dynamics of the film.
Application of tire dynamics to aircraft landing gear design analysis
NASA Technical Reports Server (NTRS)
Black, R. J.
1983-01-01
The tire plays a key part in many analyses used for design of aircraft landing gear. Examples include structural design of wheels, landing gear shimmy, brake whirl, chatter and squeal, complex combination of chatter and shimmy on main landing gear (MLG) systems, anti-skid performance, gear walk, and rough terrain loads and performance. Tire parameters needed in the various analyses are discussed. Two tire models are discussed for shimmy analysis, the modified Moreland approach and the von Schlippe-Dietrich approach. It is shown that the Moreland model can be derived from the Von Schlippe-Dietrich model by certain approximations. The remaining analysis areas are discussed in general terms and the tire parameters needed for each are identified. Accurate tire data allows more accurate design analysis and the correct prediction of dynamic performance of aircraft landing gear.
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
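Schematically, and consistent with the cost components enumerated above (a decomposition sketch rather than a formula from the paper),

$LCC = C_{dev+qual} + C_{launch} + C_{ops} + C_{maint} + C_{other},$

so an ECLSS architecture is compared on the sum of development and qualification, launch, operations, maintenance and other associated costs, rather than on launch mass alone as in the older Equivalent System Mass approach.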
Automatic Inference of Cryptographic Key Length Based on Analysis of Proof Tightness
2016-06-01
within an attack tree structure, then expand attack tree methodology to include cryptographic reductions. We then provide the algorithms for... maintaining and automatically reasoning about these expanded attack trees. We provide a software tool that utilizes machine-readable proof and attack metadata... and the attack tree methodology to provide rapid and precise answers regarding security parameters and effective security. This eliminates the need
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor efficiency. Efficiency = electrical power output (measured) / net heat input (calculated). Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.
NASA Astrophysics Data System (ADS)
Miharja, M.; Priadi, Y. N.
2018-05-01
Promoting better public transport is a key strategy for coping with urban transport problems, which are mostly caused by heavy private vehicle usage. Better public transport service quality concerns not only a single mode but also the integration of services across modes. A fragmented inter-modal public transport service leads to longer trip chains and average travel times, causing it to fail to compete with private vehicles. This paper examines the optimisation of operation system integration between the Trans Jakarta Bus, as the main public transport mode, and the Kopaja Bus, as the feeder service, in Jakarta. Using a scoring-interview method combined with standard parameters for operation system integration, the paper identifies the key factors that determine the success of integrating the two public transport operations. The study found that some key integration parameters, such as the cancellation of the "system setoran", passengers boarding and alighting only at official stop points, and systematic payment, contribute positively to better service integration. However, other parameters, such as the fine system, time and changing-point reliability, and information system reliability, still need improvement. These findings are useful for the authority in setting the right strategy to improve operation system integration between the Trans Jakarta and Kopaja Bus services.
High Temperature Materials Needs in NASA's Advanced Space Propulsion Programs
NASA Technical Reports Server (NTRS)
Eckel, Andrew J.; Glass, David E.
2005-01-01
In recent years, NASA has embarked on several new and exciting efforts in the exploration and use of space. The successful accomplishment of many planned missions and projects is dependent upon the development and deployment of previously unproven propulsion systems. Key to many of the propulsion systems is the use of emergent materials systems, particularly high temperature structural composites. A review of the general missions and benefits of utilizing high temperature materials will be presented. The design parameters and operating conditions will be presented for both specific missions/vehicles and classes of components. Key technical challenges and opportunities are identified along with suggested paths for addressing them.
NASA Technical Reports Server (NTRS)
Bonet, John T.; Schellenger, Harvey G.; Rawdon, Blaine K.; Elmer, Kevin R.; Wakayama, Sean R.; Brown, Derrell L.; Guo, Yueping
2011-01-01
NASA has set demanding goals for technology developments to meet national needs to improve fuel efficiency while improving the environment to enable air transportation growth. A figure shows NASA's subsonic transport system metrics. The results of the Boeing ERA N+2 Advanced Vehicle Concept Study show that the Blended Wing Body (BWB) vehicle with ultra-high-bypass propulsion systems has the potential to meet the combined NASA ERA N+2 goals. This study had three main activities: 1) the development of advanced vehicle concepts that can meet the NASA system-level metrics; 2) identification of key enabling technologies and the development of technology roadmaps and maturation plans; and 3) the development of a subscale test vehicle that can demonstrate and mature the key enabling technologies needed to meet the NASA system-level metrics. Technology maturation plans are presented and include key performance parameters and technical performance measures. The plans describe the risks that will be reduced with technology development and the expected progression of technical maturity.
Magnetic Field Response Measurement Acquisition System
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Taylor, Bryant D.; Shams, Qamar A.; Fox, Robert L.
2005-01-01
A measurement acquisition method that alleviates many shortcomings of traditional measurement systems is presented in this paper. The shortcomings are a finite number of measurement channels, the weight penalty associated with measurements, electrical arcing, wire degradation due to wear or chemical decay, and the logistics needed to add new sensors. The key to this method is the use of sensors designed as passive inductor-capacitor circuits that produce magnetic field responses. The response attributes correspond to the states of the physical properties that the sensors measure. A radio frequency antenna produces a time-varying magnetic field used to power the sensor and receive the magnetic field response of the sensor. An interrogation system for discerning changes in the sensor response is presented herein. Multiple sensors can be interrogated using this method. The method eliminates the need for a data acquisition channel dedicated to each sensor. Methods of developing magnetic field response sensors and the influence of key parameters on measurement acquisition are discussed.
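As a hedged illustration of the sensing principle (the standard resonance relation for a passive L-C circuit, not a formula quoted from the paper), the magnetic field response peaks near

$f_r = \dfrac{1}{2\pi\sqrt{LC}},$

so a physical quantity that alters the sensor's capacitance $C$ or inductance $L$ shifts the resonant frequency that the interrogating antenna measures; the amplitude and bandwidth of the response can likewise encode sensor state.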
Modern methods for the quality management of high-rate melt solidification
NASA Astrophysics Data System (ADS)
Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.
2016-12-01
The quality management of high-rate melt solidification requires a combined solution obtained with methods and approaches adapted to the specific situation. A technological audit is recommended to estimate the capabilities of the process. Statistical methods are proposed, with the choice of key parameters. Of particular importance are numerical methods, which can be used to perform simulations under multifactor technological conditions and to increase the quality of decisions.
Performance Analysis on the Coexistence of Multiple Cognitive Radio Networks
2015-05-28
the scarce spectrum resources. Cognitive radio is a key in minimizing the spectral congestion through its adaptability, where the radio parameters... static allocation of spectrum results in congestion in some parts of the spectrum and non-use in some others; therefore, spectrum utilization is... well as the secondary user (SU) activities in multiple CR networks. It is shown that the scheduler provided much-needed gain during congestion. However
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. Principles and usage of this advanced evaluation method are described in detail and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set and the model suitability crosscheck option of applying the procedure in ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
Ma, Athen; Mondragón, Raúl J.
2015-01-01
A core comprises a group of central and densely connected nodes which governs the overall behaviour of a network. It is recognised as one of the key meso-scale structures in complex networks. Profiling this meso-scale structure currently relies on a limited number of methods which are often complex and parameter dependent or require a null model. As a result, scalability issues are likely to arise when dealing with very large networks together with the need for subjective adjustment of parameters. The notion of a rich-club describes nodes which are essentially the hub of a network, as they play a dominating role in structural and functional properties. The definition of a rich-club naturally emphasises high degree nodes and divides a network into two subgroups. Here, we develop a method to characterise a rich-core in networks by theoretically coupling the underlying principle of a rich-club with the escape time of a random walker. The method is fast, scalable to large networks and completely parameter free. In particular, we show that the evolution of the core in World Trade and C. elegans networks correspond to responses to historical events and key stages in their physical development, respectively. PMID:25799585
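The authors' method couples the rich-club principle with random-walker escape times, and that derivation is not reproduced here. The sketch below is only a simplified, degree-ranking illustration of extracting a core without tunable parameters, using networkx; the function and the boundary rule are assumptions for illustration.

    # Simplified degree-ranking core extraction (illustration only, not the paper's algorithm)
    import networkx as nx

    def rich_core_by_degree(G: nx.Graph):
        """Rank nodes by degree; for each node count links to higher-ranked nodes
        (k_plus); place the core boundary where k_plus peaks."""
        ranked = sorted(G.nodes, key=G.degree, reverse=True)
        rank = {n: i for i, n in enumerate(ranked)}
        k_plus = [sum(1 for v in G[n] if rank[v] < rank[n]) for n in ranked]
        boundary = max(range(len(ranked)), key=lambda i: k_plus[i])
        return ranked[: boundary + 1]

    G = nx.karate_club_graph()
    print(rich_core_by_degree(G))

Because the boundary is set by the data itself (the peak in links to higher-ranked nodes), no threshold or null model has to be supplied, which mirrors the parameter-free property highlighted above.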
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effect. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
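As a hedged sketch of the kind of screening compared above (the SAC-SMA parameter names and ranges are placeholders, and a dummy function stands in for the hydrologic model and its error metric), Morris One-At-a-Time screening with SALib looks roughly like this:

    # Morris (MOAT) screening with SALib; problem definition and model are illustrative only
    import numpy as np
    from SALib.sample import morris as morris_sample
    from SALib.analyze import morris as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["UZTWM", "LZTWM", "PCTIM"],          # placeholder SAC-SMA parameters
        "bounds": [[1.0, 150.0], [1.0, 500.0], [0.0, 0.1]],
    }

    def model(x):
        # Dummy response standing in for a streamflow error metric
        return x[0] * 0.01 + np.sqrt(x[1]) * 0.05 + 10.0 * x[2]

    X = morris_sample.sample(problem, N=20, num_levels=4)   # ~ (num_vars + 1) * N model runs
    Y = np.apply_along_axis(model, 1, X)
    res = morris_analyze.analyze(problem, X, Y, num_levels=4)
    print(res["mu_star"])   # larger mu_star => more influential parameter

The small number of model runs per trajectory is what makes MOAT cheap relative to variance-based methods such as Sobol', at the cost of the robustness limitation noted in the abstract.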
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
A Review of United States Air Force and Department of Defense Aerospace Propulsion Needs
2006-01-01
evolved expendable launch vehicle EHF extremely high frequency EMA electromechanical actuator EMDP engine model derivative program EMTVA...condition. A key aspect of the model was which of the two methods was used—parameters of the system or propulsion variables produced in the design ... models for turbopump analysis and design. In addition, the skills required to design a high-performance turbopump are very specialized and must be
Features and selection of vascular access devices.
Sansivero, Gail Egan
2010-05-01
To review venous anatomy and physiology, discuss assessment parameters before vascular access device (VAD) placement, and review VAD options. Journal articles, personal experience. A number of VAD options are available in clinical practice. Access planning should include comprehensive assessment, with attention to patient participation in the planning and selection process. Careful consideration should be given to long-term access needs and preservation of access sites. Oncology nurses are uniquely suited to perform a key role in VAD planning and placement. With knowledge of infusion therapy, anatomy and physiology, device options, and community resources, nurses can be key leaders in preserving vascular access and improving the safety and comfort of infusion therapy. Copyright 2010 Elsevier Inc. All rights reserved.
Models for the Economics of Resilience
Gilbert, Stanley; Ayyub, Bilal M.
2016-01-01
Estimating the economic burden of disasters requires appropriate models that account for key characteristics and decision making needs. Natural disasters in 2011 resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Average annual losses in the US amount to about $55 billion. Enhancing community and system resilience could lead to significant savings through risk reduction and expeditious recovery. The management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics with models for examining the economics of resilience. This paper provides such microeconomic models, compares them, examines their sensitivities to key parameters, and illustrates their uses. Such models enable improving the resiliency of systems to meet target levels. PMID:28133626
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
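The effect of sampling frequency on posterior width can be illustrated with a far simpler setup than INCA-P. The sketch below calibrates one parameter of a toy concentration model with a plain Metropolis sampler (not MCMC-DREAM), using daily versus fortnightly subsets of the same synthetic record; the model, noise level and all numbers are invented for illustration.

```python
# Toy calibration showing how higher-frequency data tighten a marginal posterior.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(540)                                   # ~18 months of daily time steps

def model(k):
    # Toy "TDP" series: a seasonal, slowly decaying signal whose amplitude is k.
    return k * (1.0 + 0.5 * np.sin(2 * np.pi * t / 365.0)) * np.exp(-t / 400.0)

k_true, noise_sd = 20.0, 2.0
obs = model(k_true) + rng.normal(scale=noise_sd, size=t.size)

def posterior_sd(idx, n_iter=20000):
    """Posterior spread of k from a Metropolis chain using observations at indices idx."""
    def log_post(k):
        if not 0.0 < k < 100.0:                      # flat prior on (0, 100)
            return -np.inf
        resid = obs[idx] - model(k)[idx]
        return -0.5 * np.sum((resid / noise_sd) ** 2)
    k = 10.0
    lp = log_post(k)
    chain = []
    for _ in range(n_iter):
        k_prop = k + rng.normal(scale=0.5)
        lp_prop = log_post(k_prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            k, lp = k_prop, lp_prop
        chain.append(k)
    return np.std(chain[n_iter // 2:])               # second half of chain as post-burn-in

daily = np.arange(t.size)
fortnightly = np.arange(0, t.size, 14)
print("posterior sd of k, daily data:      ", round(posterior_sd(daily), 4))
print("posterior sd of k, fortnightly data:", round(posterior_sd(fortnightly), 4))
```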
Review of concrete biodeterioration in relation to nuclear waste.
Turick, Charles E; Berry, Christopher J
2016-01-01
Storage of radioactive waste in concrete structures is a means of containing wastes and related radionuclides generated from nuclear operations in many countries. Previous efforts related to microbial impacts on concrete structures that are used to contain radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete structures used to store or dispose of radioactive waste. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Microbial growth requires pH and osmotic pressure to be within a certain range, as well as the availability and abundance of energy sources such as compounds involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The microbial contribution to degradation of the concrete structures containing radioactive waste is a constant possibility. The rate and degree of concrete biodegradation is dependent on numerous physical, chemical and biological parameters. Parameters to focus on for modeling activities and possible options for mitigation that would minimize concrete biodegradation are discussed and include key conditions that drive microbial activity on concrete surfaces. Copyright © 2015. Published by Elsevier Ltd.
The design analysis of a rechargeable lithium cell for space applications
NASA Technical Reports Server (NTRS)
Subba Rao, S.; Shen, D. H.; Yen, S. P. S.; Somoano, R. B.
1986-01-01
Ambient temperature rechargeable lithium batteries are needed by NASA for advanced space power applications for future missions. Specific energies of not less than 100 Wh/kg and long cycle life are critical performance goals. A design analysis of a 35 Ah Li-TiS2 cell was carried out using literature and experimental data to identify key design parameters governing specific energy. It is found that high specific energies are achievable in prismatic cells, especially with the use of advanced hardware materials. There is a serious need for a greatly expanded engineering database in order to enable more quantitative design analysis.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; ...
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
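The modularity argument can be illustrated with a toy pipeline in which interchangeable stages communicate only through a shared data block of named inputs and outputs. This is a generic sketch, not the actual CosmoSIS module interface; the placeholder "physics" and the measurement values are invented.

```python
# Interchangeable pipeline stages sharing one data block; any sampler can reuse the chain.
from typing import Callable, Dict, List

DataBlock = Dict[str, float]

def background_module(block: DataBlock) -> None:
    # Reads a cosmological parameter, writes a derived quantity (placeholder physics).
    block["hubble_distance"] = 299792.458 / block["H0"]

def likelihood_module(block: DataBlock) -> None:
    # Compares the derived quantity with a hypothetical measurement.
    measured, sigma = 4400.0, 100.0
    block["like"] = -0.5 * ((block["hubble_distance"] - measured) / sigma) ** 2

def run_pipeline(modules: List[Callable[[DataBlock], None]], parameters: DataBlock) -> DataBlock:
    block = dict(parameters)
    for module in modules:          # each stage only sees the shared block
        module(block)
    return block

# A trivial "sampler": loop over parameter values and reuse the same module chain.
for H0 in (65.0, 68.0, 71.0):
    result = run_pipeline([background_module, likelihood_module], {"H0": H0})
    print(f"H0 = {H0}: log-like = {result['like']:.2f}")
```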
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Soil biochar amendment as a climate change mitigation tool: Key parameters and mechanisms involved.
Brassard, Patrick; Godbout, Stéphane; Raghavan, Vijaya
2016-10-01
Biochar, a solid porous material obtained from the carbonization of biomass under low or no oxygen conditions, has been proposed as a climate change mitigation tool because it is expected to sequester carbon (C) for centuries and to reduce greenhouse gas (GHG) emissions from soils. This review aimed to identify key biochar properties and production parameters that have an effect on these specific applications of the biochar. Moreover, mechanisms involved in interactions between biochar and soils were highlighted. Following a compilation and comparison of the characteristics of 76 biochars from 40 research studies, biochars with a lower N content, and consequently a higher C/N ratio (>30), were found to be more suitable for mitigation of N2O emissions from soils. Moreover, biochars produced at a higher pyrolysis temperature, and with O/C ratio <0.2, H/Corg ratio <0.4 and volatile matter below 80% may have high C sequestration potential. Based on these observations, biochar production and application to the field can be used as a tool to mitigate climate change. However, it is important to determine the pyrolysis conditions and feedstock needed to produce a biochar with the desired properties for a specific application. More research studies are needed to identify the exact mechanisms involved following biochar amendment to soil. Copyright © 2016 Elsevier Ltd. All rights reserved.
Balzarolo, Manuela; Anderson, Karen; Nichol, Caroline; Rossini, Micol; Vescovo, Loris; Arriga, Nicola; Wohlfahrt, Georg; Calvet, Jean-Christophe; Carrara, Arnaud; Cerasoli, Sofia; Cogliati, Sergio; Daumard, Fabrice; Eklundh, Lars; Elbers, Jan A.; Evrendilek, Fatih; Handcock, Rebecca N.; Kaduk, Joerg; Klumpp, Katja; Longdoz, Bernard; Matteucci, Giorgio; Meroni, Michele; Montagnani, Lenoardo; Ourcival, Jean-Marc; Sánchez-Cañete, Enrique P.; Pontailler, Jean-Yves; Juszczak, Radoslaw; Scholes, Bob; Martín, M. Pilar
2011-01-01
This paper reviews the currently available optical sensors, their limitations and opportunities for deployment at Eddy Covariance (EC) sites in Europe. This review is based on the results obtained from an online survey designed and disseminated by the Cooperation in Science and Technology (COST) Action ES0903—“Spectral Sampling Tools for Vegetation Biophysical Parameters and Flux Measurements in Europe” that provided a complete view on spectral sampling activities carried out within the different research teams in European countries. The results have highlighted that a wide variety of optical sensors are in use at flux sites across Europe, and responses further demonstrated that users were not always fully aware of the key issues underpinning repeatability and the reproducibility of their spectral measurements. The key findings of this survey point towards the need for greater awareness of standardisation and the development of a common protocol for optical sampling at the European EC sites. PMID:22164055
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-06-01
One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw key can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
Global sustainability and key needs in future automotive design.
McAuley, John W
2003-12-01
The number of light vehicle registrations is forecast to increase worldwide by a factor of 3-5 over the next 50 years. This will dramatically increase environmental impacts worldwide of automobiles and light trucks. If light vehicles are to be environmentally sustainable globally, the automotive industry must implement fundamental changes in future automotive design. Important factors in assessing automobile design needs include fuel economy and reduced emissions. Many design parameters can impact vehicle air emissions and energy consumption including alternative fuel or engine technologies, rolling resistance, aerodynamics, drive train design, friction, and vehicle weight. Of these, vehicle weight is key and will translate into reduced energy demand across all energy distribution elements. A new class of vehicles is needed that combines ultra-light design with a likely hybrid or fuel cell engine technology. This could increase efficiency by a factor of 3-5 and reduce air emissions as well. Advanced lightweight materials, such as plastics or composites, will need to overtake the present metal-based infrastructure. Incorporating design features to facilitate end-of-life recycling and recovery is also important. The trend will be towards fewer materials and parts in vehicle design, combined with ease of disassembly. Mono-material construction can create vehicle design with improved recyclability as well as reduced numbers of parts and weight.
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 that influenced neither calibration nor stroke outcomes and were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
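A brute-force illustration of a chance constraint of this kind is sketched below (it is not the authors' stochastic program): sample the uncertain parameters, then pick the cheapest vaccination coverage whose probability of pushing the effective reproduction number below one meets a required reliability. The R0 distribution, vaccine efficacy and the 95% reliability level are illustrative assumptions.

```python
# Minimum coverage satisfying a chance constraint on the effective reproduction number.
import numpy as np

rng = np.random.default_rng(3)
R0_samples = rng.lognormal(mean=np.log(1.8), sigma=0.2, size=10000)  # uncertain R0
efficacy = 0.85
required_reliability = 0.95

def reliability(coverage):
    # Effective reproduction number under a leaky-vaccine assumption.
    R_eff = R0_samples * (1.0 - efficacy * coverage)
    return np.mean(R_eff < 1.0)

# Coverage is a proxy for cost; search for the smallest feasible value.
coverages = np.linspace(0.0, 1.0, 501)
feasible = [c for c in coverages if reliability(c) >= required_reliability]
print("minimum coverage meeting the chance constraint:",
      round(min(feasible), 3) if feasible else "none")
```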
Plant Invasions in China – Challenges and Chances
Axmacher, Jan C.; Sang, Weiguo
2013-01-01
Invasive species cause serious environmental and economic harm and threaten global biodiversity. We set out to investigate how quickly invasive plant species are currently spreading in China and how their resulting distribution patterns are linked to socio-economic and environmental conditions. A comparison of the invasive plant species density (log species/log area) reported in 2008 with current data shows that invasive species were originally highly concentrated in the wealthy, southeastern coastal provinces of China, but they are currently rapidly spreading inland. Linear regression models based on the species density and turnover of invasive plants as dependent parameters and principal components representing key socio-economic and environmental parameters as predictors indicate strong positive links between invasive plant density and the overall phytodiversity and associated climatic parameters. Principal components representing socio-economic factors and endemic plant density also show significant positive links with invasive plant density. Urgent control and eradication measures are needed in China's coastal provinces to counteract the rapid inland spread of invasive plants. Strict controls of imports through seaports need to be accompanied by similarly strict controls of the developing horticultural trade and underpinned by awareness campaigns for China's increasingly affluent population to limit the arrival of new invaders. Furthermore, China needs to fully utilize its substantial native phytodiversity, rather than relying on exotics, in current large-scale afforestation projects and in the creation of urban green spaces. PMID:23691164
Application of PBPK modelling in drug discovery and development at Pfizer.
Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B
2012-01-01
Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized through the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials and to increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however, some limitations need to be addressed to realize its application and utility more broadly.
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing
Status Report on Efforts to Enhance Instrumentation to Support Advanced Test Reactor Irradiations
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Rempe; D. Knudson; J. Daw
2014-01-01
The Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF) in April 2007 to support the growth of nuclear science and technology in the United States (US). By attracting new research users - universities, laboratories, and industry - the ATR NSUF facilitates basic and applied nuclear research and development, further advancing the nation's energy security needs. A key component of the ATR NSUF effort at the Idaho National Laboratory (INL) is to design, develop, and deploy new in-pile instrumentation techniques that are capable of providing real-time measurements of key parameters during irradiation. To address this need, an assessment of instrumentation available and under-development at other test reactors was completed. Based on this initial review, recommendations were made with respect to what instrumentation is needed at the ATR, and a strategy was developed for obtaining these sensors. In 2009, a report was issued documenting this program’s strategy and initial progress toward accomplishing program objectives. Since 2009, annual reports have been issued to provide updates on the program strategy and the progress made on implementing the strategy. This report provides an update reflecting progress as of January 2014.
NASA Astrophysics Data System (ADS)
Wiegart, L.; Rakitin, M.; Fluerasu, A.; Chubar, O.
2017-08-01
We present the application of fully- and partially-coherent synchrotron radiation wavefront propagation simulation functions, implemented in the "Synchrotron Radiation Workshop" computer code, to create a `virtual beamline' mimicking the Coherent Hard X-ray scattering beamline at NSLS-II. The beamline simulation includes all optical beamline components, such as the insertion device, mirror with metrology data, slits, double crystal monochromator and refractive focusing elements (compound refractive lenses and kinoform lenses). A feature of this beamline is the exploitation of X-ray beam coherence, boosted by the low-emittance NSLS-II storage-ring, for techniques such as X-ray Photon Correlation Spectroscopy or Coherent Diffraction Imaging. The key performance parameters are the degree of X-ray beam coherence and photon flux, and the trade-off between them needs to guide the beamline settings for specific experimental requirements. Simulations of key performance parameters are compared to measurements obtained during beamline commissioning, and include the spectral flux of the undulator source, the degree of transverse coherence as well as focal spot sizes.
Design and cost drivers in 2-D braiding
NASA Technical Reports Server (NTRS)
Morales, Alberto
1993-01-01
Fundamentally, the braiding process is a highly efficient, low cost method for combining single yarns into circumferential shapes, as evidenced by the number of applications for continuous sleeving. However, this braiding approach has not yet fully demonstrated that it can drastically reduce the cost of complex-shape structural preforms. Factors such as part geometry, machine design and configuration, materials used, and operating parameters are described as key cost drivers, along with what is needed to minimize their effect on the cost of structural braided preforms.
ITO-based evolutionary algorithm to solve traveling salesman problem
NASA Astrophysics Data System (ADS)
Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei
2014-03-01
In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have been successfully applied to TSP; however, as a member of this family, ITO still needs further demonstration on TSP. Starting from the design of the key operators, which include the move operator, the wave operator, etc., the ITO-based method for TSP is presented; moreover, the performance of the ITO algorithm under different parameter sets and the maintenance of population diversity information are also studied.
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary and avoid deleterious effects requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
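As an example of the kind of reduced single-bubble model such studies aim to inform, the sketch below integrates a bare Rayleigh-Plesset equation for an acoustically driven microbubble. It omits the viscous-inviscid coupling, nearby structures and bubble-bubble interaction of the full multi-physics approach, and all material and driving values are illustrative.

```python
# Bare Rayleigh-Plesset dynamics of a 2-micron bubble driven at 1 MHz.
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 998.0, 1.0e-3, 0.0725        # water density, viscosity, surface tension
p0, R0, kappa = 101325.0, 2.0e-6, 1.4         # ambient pressure, rest radius, polytropic index
pa, f = 0.3 * p0, 1.0e6                       # acoustic amplitude and frequency
pg0 = p0 + 2.0 * sigma / R0                   # initial gas pressure balancing the interface

def rhs(t, y):
    R, Rdot = y
    p_inf = p0 + pa * np.sin(2 * np.pi * f * t)
    p_gas = pg0 * (R0 / R) ** (3.0 * kappa)
    p_liq = p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R
    Rddot = ((p_liq - p_inf) / rho - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 5.0e-6), [R0, 0.0], max_step=1.0e-9)
print(f"max radius: {sol.y[0].max() * 1e6:.2f} um, min radius: {sol.y[0].min() * 1e6:.2f} um")
```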
Land use investigations in the central valley and central coastal test sites, California
NASA Technical Reports Server (NTRS)
Estes, J. E.
1973-01-01
The Geography Remote Sensing Unit (GRSU) at the University of California, Santa Barbara is responsible for investigations with ERTS-1 data in the Central Coastal Zone and West Side of the San Joaquin Valley. The nature of investigative effort involves the inventory, monitoring, and assessment of the natural and cultural resources of the two areas. Land use, agriculture, vegetation, landforms, geology, and hydrology are the principal subjects for attention. These parameters are the key indicators of the dynamically changing character of the areas. Monitoring of these parameters with ERTS-1 data will provide the techniques and methodologies required to generate the information needed by federal, state, county, and local agencies to assess change-related phenomena and plan for management and development.
2013-06-01
18th ICCRTS: Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables ... command in crisis management. C2 Agility Model: Agility can be conceptualized at a number of different levels; for instance at the team
Securing Digital Audio using Complex Quadratic Map
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi
2018-03-01
In this digital era, exchanging data is common and easy to do, and data are therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio, so a data-securing method that is both robust and fast is needed. One method that matches these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM function passes all 15 NIST tests, which means that the key stream generated using this CQM is proven to be random. In addition, samples of encrypted digital audio tested with a goodness-of-fit test are shown to be uniform, so securing digital audio using this method is not vulnerable to frequency-analysis attack. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both encryption and decryption is on average about 450 times faster than the digital audio duration.
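A toy version of the chaos-based keystream idea is sketched below using the real quadratic map x -> x^2 + c at the classic fully chaotic parameter c = -2, as a stand-in for the complex quadratic map and key values used in the paper; the byte-extraction rule is an assumption and this toy is not cryptographically vetted.

```python
# Toy chaotic stream cipher: keystream from the quadratic map, XOR encryption.
def keystream(x0, c, n, burn_in=200):
    """Generate n keystream bytes from the orbit of x -> x*x + c."""
    x = x0
    out = bytearray()
    for i in range(burn_in + n):
        x = x * x + c                               # chaotic iteration, bounded on [-2, 2]
        if i >= burn_in:
            out.append(int(abs(x) * 1e12) % 256)    # extract one byte from the orbit
    return bytes(out)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

audio = bytes(range(16)) * 4                        # stand-in for digital-audio samples
ks = keystream(x0=0.123456789, c=-2.0, n=len(audio))
encrypted = xor_cipher(audio, ks)
assert xor_cipher(encrypted, ks) == audio           # the same keystream recovers the audio
# A 1e-10 change in the secret initial value yields a completely different keystream.
assert keystream(0.123456789 + 1e-10, -2.0, len(audio)) != ks
```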
Achieving mask order processing automation, interoperability and standardization based on P10
NASA Astrophysics Data System (ADS)
Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.
2007-02-01
Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company specific and error prone, and leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on the standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of individual factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We see that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.
NASA Astrophysics Data System (ADS)
Xiao, D.; Shi, Y.; Li, L.
2016-12-01
Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone, but they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another, intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Susquehanna Shale Hills CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that without any calibration the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferrable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.
Estimation of Quasi-Stiffness and Propulsive Work of the Human Ankle in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
Characterizing the quasi-stiffness and work of lower extremity joints is critical for evaluating human locomotion and designing assistive devices such as prostheses and orthoses intended to emulate the biological behavior of human legs. This work aims to establish statistical models that allow us to predict the ankle quasi-stiffness and net mechanical work for adults walking on level ground. During the stance phase of walking, the ankle joint propels the body through three distinctive phases of nearly constant stiffness known as the quasi-stiffness of each phase. Using a generic equation for the ankle moment obtained through an inverse dynamics analysis, we identify key independent parameters needed to predict ankle quasi-stiffness and propulsive work and also the functional form of each correlation. These parameters include gait speed, ankle excursion, and subject height and weight. Based on the identified form of the correlation and key variables, we applied linear regression on experimental walking data for 216 gait trials across 26 subjects (speeds from 0.75–2.63 m/s) to obtain statistical models of varying complexity. The most general forms of the statistical models include all the key parameters and have an R2 of 75% to 81% in the prediction of the ankle quasi-stiffnesses and propulsive work. The most specific models include only subject height and weight and could predict the ankle quasi-stiffnesses and work for optimal walking speed with average error of 13% to 30%. We discuss how these models provide a useful framework and foundation for designing subject- and gait-specific prosthetic and exoskeletal devices designed to emulate biological ankle function during level ground walking. PMID:23555839
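A minimal sketch of fitting such a statistical model by ordinary least squares on synthetic gait trials is given below; the predictors mirror the key parameters named above, but the generating coefficients, ranges and noise are invented, and the fitted numbers are not those reported in the paper.

```python
# Ordinary least-squares fit of a quasi-stiffness model on synthetic gait trials.
import numpy as np

rng = np.random.default_rng(4)
n_trials = 216

# Predictors: gait speed (m/s), ankle excursion (rad), height (m), weight (kg).
speed = rng.uniform(0.75, 2.63, n_trials)
excursion = rng.uniform(0.15, 0.45, n_trials)
height = rng.uniform(1.55, 1.95, n_trials)
weight = rng.uniform(55.0, 100.0, n_trials)

# Synthetic "measured" quasi-stiffness (Nm/rad) with noise, for illustration only.
stiffness = (120.0 * speed + 400.0 * excursion + 2.0 * weight * height
             + rng.normal(0.0, 30.0, n_trials))

X = np.column_stack([np.ones(n_trials), speed, excursion, weight * height])
beta, *_ = np.linalg.lstsq(X, stiffness, rcond=None)

pred = X @ beta
ss_res = np.sum((stiffness - pred) ** 2)
ss_tot = np.sum((stiffness - stiffness.mean()) ** 2)
print("coefficients:", np.round(beta, 1), " R^2:", round(1 - ss_res / ss_tot, 3))
```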
Wnt signalling pathway parameters for mammalian cells.
Tan, Chin Wee; Gardiner, Bruce S; Hirokawa, Yumiko; Layton, Meredith J; Smith, David W; Burgess, Antony W
2012-01-01
Wnt/β-catenin signalling regulates cell fate, survival, proliferation and differentiation at many stages of mammalian development and pathology. Mutations of two key proteins in the pathway, APC and β-catenin, have been implicated in a range of cancers, including colorectal cancer. Activation of Wnt signalling has been associated with the stabilization and nuclear accumulation of β-catenin and consequential up-regulation of β-catenin/TCF gene transcription. In 2003, Lee et al. constructed a computational model of Wnt signalling supported by experimental data from analysis of time-dependent concentration of Wnt signalling proteins in Xenopus egg extracts. Subsequent studies have used the Xenopus quantitative data to infer Wnt pathway dynamics in other systems. As a basis for understanding Wnt signalling in mammalian cells, a confocal live cell imaging measurement technique is developed to measure the cell and nuclear volumes of MDCK, HEK293T cells and 3 human colorectal cancer cell lines and the concentrations of Wnt signalling proteins β-catenin, Axin, APC, GSK3β and E-cadherin. These parameters provide the basis for formulating Wnt signalling models for kidney/intestinal epithelial mammalian cells. There are significant differences in concentrations of key proteins between Xenopus extracts and mammalian whole cell lysates. Higher concentrations of Axin and lower concentrations of APC are present in mammalian cells. Axin concentrations are greater than APC in kidney epithelial cells, whereas in intestinal epithelial cells the APC concentration is higher than Axin. Computational simulations based on Lee's model, with this new data, suggest a need for a recalibration of the model. A quantitative understanding of Wnt signalling in mammalian cells, in particular human colorectal cancers requires a detailed understanding of the concentrations of key protein complexes over time. Simulations of Wnt signalling in mammalian cells can be initiated with the parameters measured in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, Bruce A.; Krupka, Kenneth M.; Fellows, Robert J.
2004-12-02
This Annual Progress Report describes the work performed and summarizes some of the key observations to date on the U.S. Nuclear Regulatory Commission’s project Assessment of Food Chain Pathway Parameters in Biosphere Models, which was established to assess and evaluate a number of key parameters used in the food-chain models used in performance assessments of radioactive waste disposal facilities. Section 2 of this report describes activities undertaken to collect samples of soils from three regions of the United States, the Southeast, Northwest, and Southwest, and perform analyses to characterize their physical and chemical properties. Section 3 summarizes information gathered regarding agricultural practices and common and unusual crops grown in each of these three areas. Section 4 describes progress in studying radionuclide uptake in several representative crops from the three soil types in controlled laboratory conditions. Section 5 describes a range of international coordination activities undertaken by Project staff in order to support the underlying data needs of the Project. Section 6 provides a very brief summary of the status of the GENII Version 2 computer program, which is a “client” of the types of data being generated by the Project, and for which the Project will be providing training to the US NRC staff in the coming Fiscal Year. Several appendices provide additional supporting information.
Atmospheric gas-to-particle conversion: why NPF events are observed in megacities?
Kulmala, M; Kerminen, V-M; Petäjä, T; Ding, A J; Wang, L
2017-08-24
In terms of the global aerosol particle number load, atmospheric new particle formation (NPF) dominates over primary emissions. The key for quantifying the importance of atmospheric NPF is to understand how gas-to-particle conversion (GTP) takes place at sizes below a few nanometers in particle diameter in different environments, and how this nano-GTP affects the survival of small clusters into larger sizes. The survival probability of growing clusters is tied closely to the competition between their growth and scavenging by pre-existing aerosol particles, and the key parameter in this respect is the ratio between the condensation sink (CS) and the cluster growth rate (GR). Here we define their ratio as a dimensionless survival parameter, P, as P = (CS / 10^-4 s^-1) / (GR / 1 nm h^-1). Theoretical arguments and observations in clean and moderately-polluted conditions indicate that P needs to be smaller than about 50 for a notable NPF to take place. However, the existing literature shows that in China, NPF occurs frequently in megacities such as Beijing, Nanjing and Shanghai, and our analysis shows that the calculated values of P are even larger than 200 in these cases. By combining direct observations and conceptual modelling, we explore the variability of the survival parameter P in different environments and probe the reasons for NPF occurrence under highly-polluted conditions.
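A short worked example of the survival parameter, using the definition above with illustrative CS and GR values rather than measurements from the paper:

```python
# P = (CS / 1e-4 s^-1) / (GR / 1 nm h^-1), evaluated for two illustrative cases.
def survival_parameter(cs_per_s, gr_nm_per_h):
    return (cs_per_s / 1e-4) / gr_nm_per_h

for label, cs, gr in [("clean site", 5e-3, 3.0),
                      ("polluted megacity", 5e-2, 2.0)]:
    P = survival_parameter(cs, gr)
    verdict = "below the ~50 threshold" if P < 50 else "well above the ~50 threshold"
    print(f"{label}: P = {P:.0f} ({verdict})")
```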
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
Ethnographic field work in requirements engineering
NASA Astrophysics Data System (ADS)
Reddivari, Sandeep; Asaithambi, Asai; Niu, Nan; Wang, Wentao; Xu, Li Da; Cheng, Jing-Ru C.
2017-01-01
The requirements engineering (RE) processes have become a key in developing and deploying enterprise information system (EIS) for organisations and corporations in various fields and industrial sectors. Ethnography is a contextual method allowing scientific description of the stakeholders, their needs and their organisational customs. Despite the recognition in the RE literature that ethnography could be helpful, the actual leverage of the method has been limited and ad hoc. To overcome the problems, we report in this paper a systematic mapping study where the relevant literature is examined. Building on the literature review, we further identify key parameters, their variations and their connections. The improved understanding about the role of ethnography in EIS RE is then presented in a consolidated model, and the guidelines of how to apply ethnography are organised by the key factors uncovered. Our study can direct researchers towards thorough understanding about the role that ethnography plays in EIS RE, and more importantly, to help practitioners better integrate contextually rich and ecologically valid methods in their daily practices.
Simulation tests of the optimization method of Hopfield and Tank using neural networks
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
1988-01-01
The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.
Progress in Application of Generalized Wigner Distribution to Growth and Other Problems
NASA Astrophysics Data System (ADS)
Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto; Gonzalez, Diego Luis
We recap the use of the (single-parameter) Generalized Wigner Distribution (GWD) to analyze capture-zone distributions associated with submonolayer epitaxial growth. We discuss recent applications to physical systems, as well as key simulations. We pay particular attention to how this method compares with other methods to assess the critical nucleus size characterizing growth. The following talk discusses a particular case when special insight is needed to reconcile the various methods. We discuss improvements that can be achieved by going to a 2-parameter fragmentation approach. At a much larger scale we have applied this approach to various distributions in socio-political phenomena (areas of secondary administrative units [e.g., counties] and distributions of subway stations). Work at UMD supported by NSF CHE 13-05892.
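For reference, a sketch of the single-parameter GWD is given below, P_beta(s) = a_beta s^beta exp(-b_beta s^2), with the constants fixed by the usual requirements that the distribution be normalized and that the scaled capture-zone size s have unit mean; the relation between beta and the critical nucleus size is system-dependent and is not assumed here.

```python
# Generalized Wigner Distribution with constants set by normalization and unit mean.
import numpy as np
from math import gamma

def gwd(s, beta):
    b = (gamma((beta + 2) / 2) / gamma((beta + 1) / 2)) ** 2
    a = 2.0 * b ** ((beta + 1) / 2) / gamma((beta + 1) / 2)
    return a * s ** beta * np.exp(-b * s ** 2)

# Quick numerical check that the distribution integrates to ~1 and has mean ~1.
s = np.linspace(1e-6, 5.0, 20000)
p = gwd(s, beta=3.0)
print("norm ~", np.trapz(p, s), " mean ~", np.trapz(s * p, s))
```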
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie
2014-12-01
To solve the multicollinearity issue and unequal contribution of vascular parameters for the quantification of angiogenesis, we developed a quantification evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemic model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of vascular parameters for the quantification of angiogenesis via the loadings of sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, which indicated that they were the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both ABTs applied directly to the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to applying ABTs directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.
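The general recipe, sparse principal components followed by a tree-ensemble importance ranking, can be sketched with scikit-learn as a stand-in for the aggregated boosted trees used in the paper; the synthetic data below only mimic multicollinearity among vascular parameters and are not micro-CT measurements.

```python
# Sparse PCA to bundle correlated predictors, then boosted-tree importance of the PCs.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 300
area = rng.normal(1.0, 0.2, n)
length = 0.8 * area + rng.normal(0.0, 0.05, n)      # strongly correlated with area
junctions = 0.6 * area + rng.normal(0.0, 0.1, n)
thickness = rng.normal(1.0, 0.2, n)                 # mostly independent
X = np.column_stack([area, length, junctions, thickness])
vascular_volume = 2.0 * area + 1.0 * junctions + rng.normal(0.0, 0.1, n)  # target

pcs = SparsePCA(n_components=2, random_state=0).fit(X)
Z = pcs.transform(X)

model = GradientBoostingRegressor(random_state=0).fit(Z, vascular_volume)
print("sparse PC loadings:\n", np.round(pcs.components_, 2))
print("PC importances:", np.round(model.feature_importances_, 2))
```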
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrial countries the number of such contaminated sites is tremendously high, to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.
A crunch on thermocompression flip chip bonding
NASA Astrophysics Data System (ADS)
Suppiah, Sarveshvaran; Ong, Nestor Rubio; Sauli, Zaliman; Sarukunaselan, Karunavani; Alcain, Jesselyn Barro; Mahmed, Norsuria; Retnasamy, Vithyacharan
2017-09-01
This study discusses the evolution, important findings, critical technical challenges, solutions, and bonding equipment of flip chip thermo-compression bonding (TCB). Bonding force, temperature and time are the key bonding parameters that need to be tuned, based on research done by others. TCB technology works well with both pre-applied underfill and flux (the latter still under development). Lower throughput coupled with higher processing costs is an example of the challenges in TCB technology. The paper concludes with a brief description of the current equipment used in the thermo-compression process.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
Mixing console design for telematic applications in live performance and remote recording
NASA Astrophysics Data System (ADS)
Samson, David J.
The development of a telematic mixing console addresses audio engineers' need for a fully integrated system architecture that improves efficiency and control for applications such as distributed performance and remote recording. Current systems used in state of the art telematic performance rely on software-based interconnections with complex routing schemes that offer minimal flexibility or control over key parameters needed to achieve a professional workflow. The lack of hardware-based control in the current model limits the full potential of both the engineer and the system. The new architecture provides a full-featured platform that, alongside customary features, integrates (1) surround panning capability for motorized, binaural manikin heads, as well as all sources in the included auralization module, (2) self-labelling channel strips, responsive to change at all remote sites, (3) onboard roundtrip latency monitoring, (4) synchronized remote audio recording and monitoring, and (5) flexible routing. These features combined with robust parameter automation and precise analog control will raise the standard for telematic systems as well as advance the development of networked audio systems for both research and professional audio markets.
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavgorodnya, Oleksandra; Shamshina, Julia L.; Bonner, Jonathan R.
Here, we report the correlation between key solution properties and spinability of chitin from the ionic liquid (IL) 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]), and the similarities and differences to electrospinning solutions of non-ionic polymers in volatile organic compounds (VOCs). We found that when electrospinning is conducted from ILs, conductivity and surface tension are not the key parameters regulating spinability, while solution viscosity and polymer concentration are. Contrarily, for electrospinning of polymers from VOCs, solution conductivity and viscosity have been reported to be among the most important factors controlling fiber formation. For chitin electrospun from [C2mim][OAc], we found both a critical chitin concentration required for continuous fiber formation (> 0.20 wt%) and a required viscosity for the spinning solution (between ca. 450 and 1500 cP). The high viscosities of the biopolymer-IL solutions made it possible to electrospin solutions with low polymer concentration (less than 1 wt%) and produce thin fibers without the need to adjust the electrospinning parameters. These results suggest new prospects for the control of fiber architecture in non-woven mats, which is crucial for materials performance.
Radiation environment study of near space in China area
NASA Astrophysics Data System (ADS)
Fan, Dongdong; Chen, Xingfeng; Li, Zhengqiang; Mei, Xiaodong
2015-10-01
Aerospace activity has become a research hotspot for major aviation countries worldwide. The study of solar radiation is a prerequisite for aerospace activity, but the lack of observations in the near-space layer remains a barrier. Based on reanalysis data, key input parameters are determined and separate simulation experiments are performed to simulate the downward solar radiation and ultraviolet radiation transfer processes of near space over China. Results show that the atmospheric influence on the solar and ultraviolet radiation transfer processes has regional characteristics. Because key factors such as ozone are affected by atmospheric processes in their density and in their horizontal and vertical distribution, stratospheric meteorological data need to be considered, and near space over China is divided according to its activity features. The simulated results show that solar and ultraviolet radiation vary with time, latitude, and ozone density and exhibit complicated variation characteristics.
Nguyen, Manh Cuong; Yao, Yongxin; Wang, Cai-Zhuang; ...
2018-05-16
The dependence of the magnetocrystalline anisotropy energy (MAE) in MCo5 (M = Y, La, Ce, Gd) and CoPt on the Coulomb correlations and strength of spin orbit (SO) interaction within the GGA + U scheme is investigated. A range of parameters suitable for the satisfactory description of key magnetic properties is determined. We show that for a large variation of SO interaction the MAE in these materials can be well described by the traditional second order perturbation theory. We also show that in these materials the MAE can be both proportional and negatively proportional to the orbital moment anisotropy (OMA) of Co atoms. Dependence of relativistic effects on Coulomb correlations, applicability of the second order perturbation theory for the description of MAE, and effective screening of the SO interaction in these systems are discussed using a generalized virial theorem. Finally, such determined sets of parameters of Coulomb correlations can be used in much needed large scale atomistic simulations.
NASA Astrophysics Data System (ADS)
Nguyen, Manh Cuong; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming; Antropov, Vladimir P.
2018-05-01
The dependence of the magnetocrystalline anisotropy energy (MAE) in MCo5 (M = Y, La, Ce, Gd) and CoPt on the Coulomb correlations and strength of spin orbit (SO) interaction within the GGA + U scheme is investigated. A range of parameters suitable for the satisfactory description of key magnetic properties is determined. We show that for a large variation of SO interaction the MAE in these materials can be well described by the traditional second order perturbation theory. We also show that in these materials the MAE can be both proportional and negatively proportional to the orbital moment anisotropy (OMA) of Co atoms. Dependence of relativistic effects on Coulomb correlations, applicability of the second order perturbation theory for the description of MAE, and effective screening of the SO interaction in these systems are discussed using a generalized virial theorem. Such determined sets of parameters of Coulomb correlations can be used in much needed large scale atomistic simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones-Albertus, Rebecca; Feldman, David; Fu, Ran
2016-04-20
To quantify the potential value of technological advances to the photovoltaics (PV) sector, this paper examines the impact of changes to key PV module and system parameters on the levelized cost of energy (LCOE). The parameters selected include module manufacturing cost, efficiency, degradation rate, and service lifetime. NREL's System Advisor Model (SAM) is used to calculate the lifecycle cost per kilowatt-hour (kWh) for residential, commercial, and utility scale PV systems within the contiguous United States, with a focus on utility scale. Different technological pathways are illustrated that may achieve the Department of Energy's SunShot goal of PV electricity that is at grid price parity with conventional electricity sources. In addition, the impacts on the 2015 baseline LCOE due to changes to each parameter are shown. These results may be used to identify research directions with the greatest potential to impact the cost of PV electricity.
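The sketch below illustrates, in simplified form, how an LCOE responds to the module parameters named above (cost, degradation rate, lifetime); it uses a basic real-discounting formula rather than SAM's detailed cash-flow model, and every numeric input is an illustrative assumption.

```python
# Hedged sketch: simple discounted LCOE as a function of module/system parameters.
def lcoe_usd_per_kwh(capex_per_w, opex_per_kw_yr, kwh_per_kw_yr0,
                     degradation=0.005, lifetime_yr=30, discount=0.06):
    cost = capex_per_w * 1000.0                        # $ per kW installed
    energy = 0.0
    for t in range(1, lifetime_yr + 1):
        df = (1 + discount) ** -t                      # discount factor for year t
        cost += opex_per_kw_yr * df
        energy += kwh_per_kw_yr0 * (1 - degradation) ** (t - 1) * df
    return cost / energy

base = lcoe_usd_per_kwh(1.1, 15.0, 1700.0)
longer_life = lcoe_usd_per_kwh(1.1, 15.0, 1700.0, lifetime_yr=40)
print(f"baseline: {base:.3f} $/kWh, 40-year lifetime: {longer_life:.3f} $/kWh")
```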
NASA Astrophysics Data System (ADS)
Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza
2016-09-01
Similarly to other optimization algorithms, harmony search (HS) is quite sensitive to the tuning parameters. Several variants of the HS algorithm have been developed to decrease the parameter-dependency character of HS. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system is employed to dynamically adjust two parameter values, i.e. harmony memory considering rate and pitch adjusting rate, with respect to normalized mean fitness of the harmony memory. The key aspect of FDHS is that it needs substantially fewer iterations to reach convergence in comparison with classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to specify appropriate capacitor locations and their best amounts in the distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.
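As a rough illustration of the adaptation idea, the stand-in below updates the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR) from the normalized mean fitness of the harmony memory; the real FDHS drives these updates through fuzzy membership functions and rules, so the linear mapping, the parameter ranges, and the fitness values used here are assumptions only.

```python
# Hedged stand-in: adjust HMCR and PAR each iteration from the normalized mean
# fitness of the harmony memory (a fuzzy controller performs this mapping in FDHS;
# the linear mapping and parameter ranges below are assumptions for illustration).
import numpy as np

def adapt_hmcr_par(memory_fitness, hmcr_range=(0.70, 0.99), par_range=(0.05, 0.45)):
    f = np.asarray(memory_fitness, dtype=float)
    nmf = (f.mean() - f.min()) / (f.max() - f.min() + 1e-12)   # normalized mean fitness
    hmcr = hmcr_range[1] - nmf * (hmcr_range[1] - hmcr_range[0])
    par = par_range[0] + nmf * (par_range[1] - par_range[0])
    return hmcr, par

# Hypothetical power-loss values (kW) of the solutions currently held in memory.
print(adapt_hmcr_par([812.0, 805.5, 801.2, 799.9, 799.7]))
```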
NASA Astrophysics Data System (ADS)
Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.
2017-11-01
Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so a general procedure for computing these parameters is of considerable interest. By combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star-pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference catalogue for the celestial region covered by the image; star-pattern recognition matches the astronomical image against the reference catalogue and yields a table relating the CCD plane coordinates of a number of stars to their celestial coordinates; and, for a given projection of the sphere onto the plane, WCS provides the transfer functions between the two coordinate systems, so the astronomical position of any image pixel can be determined from the table obtained before. FITS is a mainstream data format for transmitting and analysing scientific data, but FITS images can only be viewed, edited, and analysed in professional astronomy software, which limits their use in popular astronomy education, so a general image visualization method is significant. The FITS image is first converted to a PNG or JPEG image; the coordinate parameters in the FITS header are converted to metadata in the AVM (Astronomy Visualization Metadata) format, and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general needs for viewing and analysing astronomical images on non-astronomical software platforms. The overall design flow is implemented in Java and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
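For readers who want to reproduce the WCS step outside the authors' Java implementation, the hedged Python sketch below shows how the key/value coordinate parameters of a FITS header define a projection from which pixel positions map to celestial coordinates; the file name is a placeholder and astropy is assumed to be available.

```python
# Hedged sketch: read WCS coordinate parameters from a FITS header and convert
# a pixel position to celestial coordinates (astropy assumed; path is a placeholder).
from astropy.io import fits
from astropy.wcs import WCS

with fits.open("example_image.fits") as hdul:
    wcs = WCS(hdul[0].header)              # parses CRVAL/CRPIX/CD/CTYPE key/value pairs
    sky = wcs.pixel_to_world(512, 512)     # pixel (512, 512) -> sky position
    print(sky.to_string("hmsdms"))
```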
Self-referenced continuous-variable quantum key distribution protocol
Soh, Daniel Beom Soo; Sarovar, Mohan; Brif, Constantin; ...
2015-10-21
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice’s and Bob’s measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. Furthermore, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
Self-Referenced Continuous-Variable Quantum Key Distribution Protocol
NASA Astrophysics Data System (ADS)
Soh, Daniel B. S.; Brif, Constantin; Coles, Patrick J.; Lütkenhaus, Norbert; Camacho, Ryan M.; Urayama, Junji; Sarovar, Mohan
2015-10-01
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice's and Bob's measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. As such, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
NASA Astrophysics Data System (ADS)
Hina, A.
2017-12-01
Although Thar coal is recognized as one of the most abundant fossil fuels that could help combat Pakistan's energy crisis, a challenge remains in addressing the associated environmental and socio-ecological changes and their linkage to the provision of ecosystem services in the region. The study highlights the importance of undertaking ecosystem service assessment in all strategic environmental and social assessments of Thar coal field projects. A three-step approach has been formulated to link project impacts to the provision of important ecosystem services: 1) identification of impact indicators and parameters by analyzing the environmental and social impacts of surface mining in the Thar coal field through field investigation, literature review and stakeholder consultations; 2) ranking of parameters and criteria alternatives using a Multi-Criteria Decision Analysis (MCDA) tool, the AHP method (illustrated in the sketch below); 3) use of the ranked parameters as a proxy to prioritize important ecosystem services of the region. The ecosystem services prioritized because of both the high significance of project impacts and high project dependence are as follows: water is a key ecosystem service to be addressed and valued, owing to the area's high dependence on it for livestock, human wellbeing, agriculture and other purposes; crop production related to agricultural services needs to be valued, in association with supporting services such as soil quality, fertility, nutrient recycling and water retention; and cultural services affected by land-use change and by resettlement and rehabilitation are recommended to be addressed. The results of the analysis outline a framework for identifying these linkages as key constraints to foster the emergence of green growth and development in Pakistan. The practicality of implementing these assessments requires policy instruments and strategies that support human well-being and social inclusion while minimizing environmental degradation and loss of ecosystem services. Keywords: Ecosystem service assessment; Environmental and Social Impact Assessment; coal mining; Thar Coal Field; Sustainable development
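A hedged illustration of the AHP ranking step is given below: priorities are taken from the principal eigenvector of a pairwise-comparison matrix and checked with a consistency ratio. The three criteria and the comparison values are hypothetical examples, not the study's actual expert judgements.

```python
# Hedged AHP sketch: priority vector and consistency ratio from a pairwise matrix.
import numpy as np

criteria = ["water", "crop_production", "cultural_services"]   # hypothetical
# Saaty-scale pairwise comparisons (row criterion relative to column criterion).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index for n = 3
print(dict(zip(criteria, weights.round(3))), f"CR = {cr:.3f}")
```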
Robotic vision techniques for space operations
NASA Technical Reports Server (NTRS)
Krishen, Kumar
1994-01-01
Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of the vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.
Brown, Adrian P; Borgs, Christian; Randall, Sean M; Schnell, Rainer
2017-06-08
Integrating medical data using databases from different sources by record linkage is a powerful technique increasingly used in medical research. Under many jurisdictions, unique personal identifiers needed for linking the records are unavailable. Since sensitive attributes, such as names, have to be used instead, privacy regulations usually demand encrypting these identifiers. The corresponding set of techniques for privacy-preserving record linkage (PPRL) has received widespread attention. One recent method is based on Bloom filters. Due to superior resilience against cryptographic attacks, composite Bloom filters (cryptographic long-term keys, CLKs) are considered best practice for privacy in PPRL. Real-world performance of these techniques using large-scale data is unknown up to now. Using a large subset of Australian hospital admission data, we tested the performance of an innovative PPRL technique (CLKs using multibit trees) against a gold-standard derived from clear-text probabilistic record linkage. Linkage time and linkage quality (recall, precision and F-measure) were evaluated. Clear text probabilistic linkage resulted in marginally higher precision and recall than CLKs. PPRL required more computing time but 5 million records could still be de-duplicated within one day. However, the PPRL approach required fine tuning of parameters. We argue that increased privacy of PPRL comes with the price of small losses in precision and recall and a large increase in computational burden and setup time. These costs seem to be acceptable in most applied settings, but they have to be considered in the decision to apply PPRL. Further research on the optimal automatic choice of parameters is needed.
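To make the CLK idea concrete, the hedged sketch below hashes character q-grams from several identifiers into a single Bloom filter with keyed (HMAC) hash functions and compares two filters with a Dice coefficient; the filter length, number of hash functions, q-gram size, and secret key are illustrative choices, not the parameters tuned in the study.

```python
# Hedged sketch of a cryptographic long-term key (CLK): q-grams from several
# identifiers hashed into one Bloom filter, compared via Dice similarity.
import hashlib
import hmac

def clk(record_fields, secret=b"shared-linkage-key", m=1024, k=20, q=2):
    bits = [0] * m
    for field in record_fields:
        padded = f"_{field.lower()}_"
        for i in range(len(padded) - q + 1):
            gram = padded[i:i + q].encode()
            for j in range(k):
                digest = hmac.new(secret, gram + bytes([j]), hashlib.sha256).digest()
                bits[int.from_bytes(digest[:4], "big") % m] = 1
    return bits

def dice(a, b):
    """Dice similarity of two CLKs, used to decide whether records match."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

print(dice(clk(["John", "Smith", "1970-01-01"]), clk(["Jon", "Smith", "1970-01-01"])))
```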
Imaging on a Shoestring: Cost-Effective Technologies for Probing Vadose Zone Transport Processes
NASA Astrophysics Data System (ADS)
Corkhill, C.; Bridge, J. W.; Barns, G.; Fraser, R.; Romero-Gonzalez, M.; Wilson, R.; Banwart, S.
2010-12-01
Key barriers to the widespread uptake of imaging technology for high spatial resolution monitoring of porous media systems are cost and accessibility. X-ray tomography, magnetic resonance imaging (MRI), gamma and neutron radiography require highly specialised equipment, controlled laboratory environments and/or access to large synchrotron facilities. Here we present results from visible light, fluorescence and autoradiographic imaging techniques developed at low cost and applied in standard analytical laboratories, adapted where necessary at minimal capital expense. UV-visible time lapse fluorescence imaging (UV-vis TLFI) in a transparent thin bed chamber enabled microspheres labelled with fluorescent dye and a conservative fluorophore solute (disodium fluorescein) to be measured simultaneously in saturated, partially-saturated and actively draining quartz sand to elucidate empirical values for colloid transport and deposition parameters distributed throughout the flow field, independently of theoretical approximations. Key results include the first experimental quantification of the effects of ionic strength and air-water interfacial area on colloid deposition above a capillary fringe, and the first direct observations of particle mobilisation and redeposition by moving saturation gradients during drainage. UV-vis imaging was also used to study biodegradation and reactive transport in a variety of saturated conditions, applying fluorescence as a probe for oxygen and nitrate concentration gradients, pH, solute transport parameters, reduction of uranium, and mapping of two-dimensional flow fields around a model dipole flow borehole system to validate numerical models. Costs are low: LED excitation sources (< US$ 50), flow chambers (US$ 200) and detectors (although a complete scientific-grade CCD set-up costs around US$ 8000, robust datasets can be obtained using a commercial digital SLR camera) mean that set-ups can be flexible to meet changing experimental requirements. The critical limitations of UV-vis fluorescence imaging are the need for reliable fluorescent probes suited to the experimental objective, and the reliance on thin-bed (2D) transparent porous media. Autoradiographic techniques address some of these limitations and permit imaging of key biogeochemical processes in opaque media using radioactive probes, without the need for specialised radiation sources. We present initial calibration data for the use of autoradiography to monitor transport parameters for radionuclides (99-technetium), and a novel application of a radioactive salt tracer as a probe for pore water content, in model porous media systems.
Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.
Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N
2016-07-01
There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent and hope our open-source code is useful in order to aspire to a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
Determination of quality parameters from statistical analysis of routine TLD dosimetry data.
German, U; Weinstein, M; Pelled, O
2006-01-01
Following the as low as reasonably achievable (ALARA) practice, there is a need to measure very low doses, of the same order of magnitude as the natural background, and the limits of detection of the dosimetry systems. The different contributions of the background signals to the total zero dose reading of thermoluminescence dosemeter (TLD) cards were analysed by using the common basic definitions of statistical indicators: the critical level (L(C)), the detection limit (L(D)) and the determination limit (L(Q)). These key statistical parameters for the system operated at NRC-Negev were quantified, based on the history of readings of the calibration cards in use. The electronic noise seems to play a minor role, but the reading of the Teflon coating (without the presence of a TLD crystal) gave a significant contribution.
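The sketch below computes the three indicators named above from a set of zero-dose (background) readings using the widely cited Currie-style approximations for paired blank measurements, L_C ≈ 2.33·σ_B, L_D ≈ 4.65·σ_B and L_Q ≈ 10·σ_B; the readings and units are hypothetical, and in practice σ_B would come from the history of calibration-card readings.

```python
# Hedged sketch: Currie-style critical level, detection limit and determination
# limit estimated from the standard deviation of background (zero-dose) readings.
import numpy as np

def currie_limits(background_readings):
    sigma_b = np.std(background_readings, ddof=1)
    return {"L_C": 2.33 * sigma_b,   # critical level (decision threshold)
            "L_D": 4.65 * sigma_b,   # detection limit
            "L_Q": 10.0 * sigma_b}   # determination (quantification) limit

zero_dose_readings_mSv = [0.021, 0.018, 0.025, 0.019, 0.023, 0.020, 0.022]  # hypothetical
print(currie_limits(zero_dose_readings_mSv))
```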
Protocols for long-term monitoring of seabird ecology in the Gulf of Alaska
Piatt, John F.; Byrd, G. Vernon; Harding, Ann M.A.; Kettle, Arthur B.; Kitaysky, Sasha; Litzow, Michael A.; Roseneau, David G.; Shultz, Michael T.; van Pelt, Thomas I.
2003-01-01
Seabird populations will need to be monitored for many years to assess both recovery and ecological conditions affecting recovery. Detailed studies of individual seabird colonies and marine ecosystems in the Gulf of Alaska have been conducted by the U.S. Geological Survey and U.S. Fish and Wildlife Service under the auspices of damage assessment and restoration programs of the Trustee Council. Much has been learned about factors influencing seabird populations and their capacity to recover from the spill in the Gulf of Alaska. As the restoration program moves toward long-term monitoring of populations, however, protocols and long-term monitoring strategies that focus on key parameters of interest and that are inexpensive, practical, and applicable over a large geographic area need to be developed.
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
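A bare-bones version of the Monte Carlo workflow the tool supports is sketched below: TSM parameters (dispersion D, main-channel area A, storage-zone area As, exchange coefficient alpha) are sampled from uniform ranges, a forward model is run for each set, each set is scored against the observed tracer curve, and the spread of the best-scoring sets indicates how well each parameter is constrained. The `run_tsm` function is a placeholder, not OTIS, and all ranges and data are assumptions.

```python
# Hedged sketch: Monte Carlo sampling of transient-storage parameters with a
# placeholder forward model standing in for OTIS.
import numpy as np

rng = np.random.default_rng(42)
ranges = {"D": (0.01, 1.0), "A": (0.1, 2.0), "As": (0.01, 1.0), "alpha": (1e-6, 1e-3)}

def run_tsm(p, t):
    # Placeholder: a skewed pulse whose timing/shape depend loosely on the parameters.
    peak = 1.0 / p["A"]
    lag = 50.0 * p["A"] + 150.0 * p["As"] + 5.0 / (p["alpha"] * 1e4 + 1e-3)
    return peak * np.exp(-((t - lag) / (20.0 + 100.0 * p["D"])) ** 2)

t = np.linspace(0, 300, 61)
observed = run_tsm({"D": 0.2, "A": 0.8, "As": 0.1, "alpha": 2e-4}, t)  # synthetic "data"

samples, scores = [], []
for _ in range(2000):
    p = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    samples.append(p)
    scores.append(np.sqrt(np.mean((run_tsm(p, t) - observed) ** 2)))   # RMSE

best = np.argsort(scores)[:100]                      # best-scoring ("behavioural") sets
for k in ranges:
    v = np.array([samples[i][k] for i in best])
    print(f"{k}: 5th-95th percentile of best sets = {np.percentile(v, 5):.3g}-{np.percentile(v, 95):.3g}")
```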
Morton, David; Mayekiso, Thoko; Cunningham, Peter
2018-03-01
Community home-based care (CHBC) is a critical component of non-formal care in communities in Africa that have a high prevalence of HIV and tuberculosis (TB). Community carers consisting primarily of volunteers are critical role players in African healthcare systems and particularly in South Africa's strategy to fight HIV and AIDS. This paper explores the structural barriers volunteer caregivers need to overcome to provide quality CHBC. The researchers used two focus group discussions with key informants (each with four participants), and semi-structured interviews with six key informants to collect data relating to the meaning of quality CHBC. The data were coded using Tesch's data analysis technique. A major theme that emerged from the results was "Addressing structural challenges to improve the quality of CHBC". Subthemes underpinning this theme were: 1) lack of standardised training of volunteer caregivers; 2) the need for a scope of practice, parameters and legal boundaries; 3) lack of monitoring and evaluation (M&E) of CHBC; and 4) the importance of mentoring and supervision in CHBC. CHBC policy should address the need for standardised training programmes for caregivers, so that they are equipped with multiple skills. Furthermore CHBC policy must emphasise mentoring as well as M&E to encourage quality care. Finally, the policy should provide a clear scope of practice for caregivers to regulate their competencies and boundaries.
RSA-Based Password-Authenticated Key Exchange, Revisited
NASA Astrophysics Data System (ADS)
Shin, Seonghan; Kobara, Kazukuni; Imai, Hideki
The RSA-based Password-Authenticated Key Exchange (PAKE) protocols have been proposed to realize both mutual authentication and generation of secure session keys where a client is sharing his/her password only with a server and the latter should generate its RSA public/private key pair (e, n), (d, n) every time due to the lack of PKI (Public-Key Infrastructures). One of the ways to avoid a special kind of off-line (so-called e-residue) attacks in the RSA-based PAKE protocols is to deploy a challenge/response method by which a client verifies the relative primality of e and φ(n) interactively with a server. However, this kind of RSA-based PAKE protocol did not give any proof of the underlying challenge/response method and therefore could not specify the exact complexity of the protocol, since there exists another security parameter needed in the challenge/response method. In this paper, we first present an RSA-based PAKE (RSA-PAKE) protocol that can deploy two different challenge/response methods (denoted by Challenge/Response Method1 and Challenge/Response Method2). The main contributions of this work include: (1) based on number theory, we prove that the Challenge/Response Method1 and the Challenge/Response Method2 are secure against e-residue attacks for any odd prime e; (2) with the security parameter for the on-line attacks, we show that the RSA-PAKE protocol is provably secure in the random oracle model where all of the off-line attacks are not more efficient than on-line dictionary attacks; and (3) by considering the Hamming weight of e and its complexity in the RSA-PAKE protocol, we search for primes to be recommended for practical use. We also compare the RSA-PAKE protocol with the previous ones mainly in terms of computation and communication complexities.
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor
2017-04-01
As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe and the plasma sheet thermal pressure, the cross polar cap electric potential drop and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and anisotropic MHD equations. In the anisotropic version, we use the modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The results of validation of the SPSU-16 code agree well with previously published results of other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, while others differ.
Efficiency Versus Instability in Plasma Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebedev, Valeri; Burov, Alexey; Nagaitsev, Sergei
2017-01-05
Plasma wake-field acceleration in a strongly nonlinear (a.k.a. the blowout) regime is one of the main candidates for future high-energy colliders. For this case, we derive a universal efficiency-instability relation, between the power efficiency and the key instability parameter of the witness bunch. We also show that in order to stabilize the witness bunch in a regime with high power efficiency, the bunch needs to have high energy spread, which is not presently compatible with collider-quality beam properties. It is unclear how such limitations could be overcome for high-luminosity linear colliders.
Tapered slot antenna design for vehicular GPR applications
NASA Astrophysics Data System (ADS)
Bıçak, Emrullah; Yeǧin, Korkut; Nazlı, Hakki; Daǧ, Mahmut
2014-05-01
Vehicular applications of UWB GPR demand multiple GPR sensors operating in a harsh environment. One of the key elements of the sensor is its UWB antenna, which must have minimal inter-element coupling, low group delay, and high directivity, and be less prone to environmental conditions. Tapered slot antennas (TSAs) provide a good impedance match, but they need to be modified to meet the above specifications. A parasitic-slot-loaded TSA with a balanced feed is proposed, and a multi-channel antenna array structure is formed. Structural parameters are numerically analyzed and a prototype is built. Measurements show good performance for UWB GPR applications.
A national-scale analysis of the impacts of drought on water quality in UK rivers
NASA Astrophysics Data System (ADS)
Coxon, G.; Howden, N. J. K.; Freer, J. E.; Whitehead, P. G.; Bussi, G.
2015-12-01
Impacts of droughts on water quality are difficult to quantify but are essential to manage ecosystems and maintain public water supply. During drought, river water quality is significantly changed by increased residence times, reduced dilution and enhanced biogeochemical processes. However, the severity of the impact varies between catchments and depends on multiple factors, including the sensitivity of the river to drought conditions, anthropogenic influences in the catchment and different delivery patterns of key nutrient, contaminant and mineral sources. A key constraint is the availability of data for key water quality parameters such that the impacts of drought periods on certain determinands can be identified. We use national-scale water quality monitoring data to investigate the impacts of drought periods on water quality in the United Kingdom (UK). The UK Water Quality Sampling Harmonised Monitoring Scheme (HMS) dataset consists of >200 UK sites with weekly to monthly sampling of many water quality variables over the past 40 years. This spans several major UK droughts (1975-1976, 1983-1984, 1989-1992, 1995 and 2003), which differ in severity and in spatial and temporal extent, and hence in their temporal impact on water quality. Several key water quality parameters, including water temperature, nitrate, dissolved organic carbon, orthophosphate, chlorophyll and pesticides, are selected from the database. These were chosen based on their availability for many of the sites, high sampling resolution and importance to the drinking water function and ecological status of the river. The water quality time series were then analysed to investigate whether water quality during droughts deviated significantly from non-drought periods, and to examine how the results varied spatially, between drought periods and between water quality parameters. Our results show that there is no simple conclusion as to the effects of drought on water quality in UK rivers; impacts are diverse in timing, magnitude and duration. We consider several scenarios in which management interventions may alleviate water quality pressures, and discuss how the many interacting factors need to be better characterised to support detailed mechanistic models and improve our process understanding.
Ierardo, Gaetano; Corridore, Denise; Di Carlo, Gabriele; Di Giorgio, Gianni; Leonardi, Emanuele; Campus, Guglielmo-Giuseppe; Vozza, Iole; Polimeni, Antonella; Bossù, Maurizio
2017-01-01
Background Data from epidemiological studies investigating the prevalence and severity of malocclusions in children are of great relevance to public health programs aimed at orthodontic prevention. Previous epidemiological studies focused mainly on the adolescent age group and reported a prevalence of malocclusion with high variability, ranging from 32% to 93%. The aim of our study was to assess the need for orthodontic treatment in a paediatric sample from Southern Italy in order to improve awareness among paediatricians about oral health preventive strategies in pediatric dentistry. Material and Methods The study used the IOTN-DHC index to evaluate the need for orthodontic treatment for several malocclusions (overjet, reverse overjet, overbite, openbite, crossbite) in a sample of 579 children in the 2-9 years age range. Results The most frequently altered occlusal parameter was the overbite (prevalence: 24.5%), while the occlusal anomaly that most frequently presented a need for orthodontic treatment was the crossbite (8.8%). The overall prevalence of need for orthodontic treatment was 19.3%, while 49% of the sample showed one or more altered occlusal parameters. No statistically significant difference was found between males and females. Conclusions Results from this study support the idea that the establishment of a malocclusion is a gradual process starting at an early age. Effective orthodontic prevention programs should therefore include preschool children and make paediatricians aware of the importance of an early first dental visit. Key words: Orthodontic treatment, malocclusion, oral health, pediatric dentistry. PMID:28936290
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
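For reference, the three-pass message pattern invoked above is sketched below in its classic number-theoretic form, with commutative modular exponentiation standing in for the MPFRFT-based cipher used in the paper; the prime, keys, and message are tiny illustrative values with no real security.

```python
# Hedged sketch of Shamir's three-pass protocol with commutative exponentiation
# modulo a prime (the paper uses the MPFRFT on the image instead of this cipher).
import math
import random

p = 0xFFFFFFFFFFFFFFC5                    # 2**64 - 59, a prime (illustrative size only)

def keypair():
    while True:
        e = random.randrange(3, p - 1)
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)   # encryption and decryption exponents

m = 123456789                             # the secret (e.g., a block of image data)
ea, da = keypair()                        # Alice's key pair
eb, db = keypair()                        # Bob's key pair

c1 = pow(m, ea, p)                        # pass 1: Alice -> Bob, Alice's lock applied
c2 = pow(c1, eb, p)                       # pass 2: Bob -> Alice, both locks applied
c3 = pow(c2, da, p)                       # Alice removes her lock
assert pow(c3, db, p) == m                # pass 3: Bob removes his lock, recovers m
```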
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include the slow algorithm convergence, the instability of the pre-specification on a number of intrinsic parameters, and the lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm of cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through the nonparametric multivariate kernel estimation. The model parameter is then able to be calculated from the equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method only needs to perform in one single step without any iteration and thus is fast and has a great potential to apply on big data analysis. A user-friendly R package ADPclust is developed for public use.
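A compact sketch of the density-peak idea behind this procedure is given below: each point's local density is estimated with a Gaussian kernel (in place of the truncated counting measure of the original algorithm), and candidate centres are points that combine high density with a large distance to any denser point. The bandwidth, the toy data, and the simplified nearest-centre assignment are illustrative assumptions; the actual ADPclust package also automates bandwidth and centroid selection via the silhouette index.

```python
# Hedged sketch: density-peak clustering with a Gaussian-kernel local density.
import numpy as np

def density_peaks(X, bandwidth=0.5, n_clusters=2):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    rho = np.exp(-(d / bandwidth) ** 2).sum(axis=1)              # kernel local density
    delta = np.empty(len(X))                                     # distance to nearest denser point
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        delta[i] = d[i, denser].min() if len(denser) else d[i].max()
    centers = np.argsort(rho * delta)[-n_clusters:]              # decision-graph score
    labels = np.argmin(d[:, centers], axis=1)                    # simplified assignment
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(density_peaks(X)[0])
```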
Software Computes Tape-Casting Parameters
NASA Technical Reports Server (NTRS)
deGroh, Henry C., III
2003-01-01
Tcast2 is a FORTRAN computer program that accelerates the setup of a process in which a slurry containing metal particles and a polymeric binder is cast, to a thickness regulated by a doctor blade, onto fibers wound on a rotating drum to make a green precursor of a metal-matrix/fiber composite tape. Before Tcast2, setup parameters were determined by trial and error in time-consuming multiple iterations of the process. In Tcast2, the fiber architecture in the final composite is expressed in terms of the lateral distance between fibers and the thickness-wise distance between fibers in adjacent plies. The lateral distance is controlled via the manner of winding. The interply spacing is controlled via the characteristics of the slurry and the doctor-blade height. When a new combination of fibers and slurry is first cast and dried to a green tape, the shrinkage from the wet to the green condition and a few other key parameters of the green tape are measured. These parameters are provided as input to Tcast2, which uses them to compute the doctor-blade height and fiber spacings needed to obtain the desired fiber architecture and fiber volume fraction in the final composite.
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influent parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these key parameters identified. Using these curves, GHG performances of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decisions makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
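The kind of simplified model this methodology yields can be pictured as below: once the sensitivity analysis has singled out load factor and lifetime, lifecycle GHG intensity is approximated as fixed embodied emissions spread over lifetime electricity production. The embodied-emissions figure and turbine size are illustrative assumptions, not results of the study.

```python
# Hedged sketch: GHG intensity of onshore wind as a function of the two key
# parameters identified above (load factor and lifetime).
def wind_ghg_g_per_kwh(load_factor, lifetime_yr, rated_kw=2000.0,
                       embodied_t_co2eq_per_kw=0.75):
    lifetime_kwh = rated_kw * load_factor * 8760.0 * lifetime_yr
    embodied_g = embodied_t_co2eq_per_kw * rated_kw * 1e6
    return embodied_g / lifetime_kwh                     # gCO2eq per kWh

print(wind_ghg_g_per_kwh(0.24, 20), wind_ghg_g_per_kwh(0.32, 30))
```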
TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza
2016-01-01
Background: To determine the required number of hospital beds using a stochastic simulation approach in cardiac surgery departments. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: first, collection of data from 649 patients in the cardiac surgery departments of two large teaching hospitals (in Tehran, Iran); second, statistical analysis and formulation of a multivariate linear regression model to determine factors that affect patients' length of stay; third, development of a stochastic simulation system (from admission to discharge) based on key parameters to estimate required bed capacity. Results: The current cardiac surgery department with 33 beds can admit all patients on only 90.7% of days (4535 d) and would require more than 33 beds on 9.3% of days (the efficient cut-off point). According to the simulation method, the studied cardiac surgery department will require 41–52 beds to admit all patients over the next 12 years. Finally, a one-day reduction in length of stay would decrease the need by two hospital beds annually. Conclusion: Variation in length of stay and the factors affecting it can affect the required number of beds. Statistical and stochastic simulation models are applicable and useful methods to estimate and manage hospital beds based on key hospital parameters. PMID:27957466
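A toy version of such a simulation is sketched below: daily admissions are drawn from a Poisson distribution, each patient holds a bed for a lognormal length of stay, and the bed requirement is read off as the occupancy level covered on roughly 90% of days. The arrival rate and length-of-stay parameters are illustrative assumptions, not the values fitted from the two hospitals.

```python
# Hedged sketch: stochastic (Monte Carlo) estimate of the bed capacity needed to
# cover a target fraction of days, from admission and length-of-stay distributions.
import numpy as np

def required_beds(admissions_per_day=3.2, mean_los_days=8.0, los_sigma=0.5,
                  days=4535, coverage=0.907, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.log(mean_los_days) - los_sigma ** 2 / 2      # lognormal mean -> mu parameter
    occupancy = np.zeros(days + 60)
    for day in range(days):
        for _ in range(rng.poisson(admissions_per_day)):
            los = max(1, int(round(rng.lognormal(mu, los_sigma))))
            occupancy[day:day + los] += 1                # this patient occupies a bed
    return int(np.quantile(occupancy[:days], coverage))

print("beds needed to cover ~90.7% of days:", required_beds())
```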
NASA Astrophysics Data System (ADS)
Rawal, Amit; Rao, P. V. Kameswara; Kumar, Vijay
2018-04-01
Absorptive glass mat (AGM) separator is a vital technical component in valve regulated lead acid (VRLA) batteries that can be tailored for a desired application. To selectively design and tailor the AGM separator, its intricate three-dimensional (3D) structure needs to be unraveled. Herein, a toolkit of 3D analytical models of pore size distribution and electrolyte uptake, expressed via the wicking characteristics of AGM separators under unconfined and confined states, is presented. 3D data of fiber orientation distributions obtained previously through X-ray micro-computed tomography (microCT) analysis are used as a key set of input parameters. The predictive ability of the pore size distribution model is assessed through the commonly used experimental set-up, which usually applies high levels of compressive stress. Further, the existing analytical model of the wicking characteristics of AGM separators has been extended to account for 3D characteristics and subsequently compared with the experimental results. A good agreement between theory and experiment paves the way to simulating the realistic charge-discharge modes of the battery by applying a cyclic loading condition. A threshold criterion describing the invariant behavior of pore size and wicking characteristics in terms of the maximum permissible limit of key structural parameters during the charge-discharge mode of the battery has also been proposed.
New methodology to baseline and match AME polysilicon etcher using advanced diagnostic tools
NASA Astrophysics Data System (ADS)
Poppe, James; Shipman, John; Reinhardt, Barbara E.; Roussel, Myriam; Hedgecock, Raymond; Fonda, Arturo
1999-09-01
As process controls tighten in the semiconductor industry, the need to understand the variables that determine system performance becomes more important. For plasma etch systems, process success depends on the control of key parameters such as vacuum integrity, pressure, gas flows, and RF power. It is imperative to baseline, monitor, and control these variables. This paper presents an overview of the methods and tools used by the Motorola BMC fabrication facility to characterize an Applied Materials polysilicon etcher. Tool performance data obtained from our traditional measurement techniques are limited in scope and do not provide a complete picture of ultimate tool performance. Presently, the BMC traditional characterization tools provide a snapshot of the static operation of the equipment under test (EUT); however, the dynamic performance cannot be fully evaluated without the aid of specialized diagnostic equipment. To provide a complete system baseline evaluation of the polysilicon etcher, three diagnostic tools were utilized: a Lucas Labs Vacuum Diagnostic System, a Residual Gas Analyzer, and the ENI Voltage/Impedance Probe. The diagnostic methodology used to baseline and match key parameters of qualified production equipment has had an immense impact on other equipment characterization in the facility. It has resulted in reduced cycle time for new equipment introduction as well.
Electrobioremediation of oil spills.
Daghio, Matteo; Aulenta, Federico; Vaiopoulou, Eleni; Franzetti, Andrea; Arends, Jan B A; Sherry, Angela; Suárez-Suárez, Ana; Head, Ian M; Bestetti, Giuseppina; Rabaey, Korneel
2017-05-01
Annually, thousands of oil spills occur across the globe. As a result, petroleum substances and petrochemical compounds are widespread contaminants causing concern due to their toxicity and recalcitrance. Many remediation strategies have been developed using both physicochemical and biological approaches. Biological strategies are most benign, aiming to enhance microbial metabolic activities by supplying limiting inorganic nutrients, electron acceptors or donors, thus stimulating oxidation or reduction of contaminants. A key issue is controlling the supply of electron donors/acceptors. Bioelectrochemical systems (BES) have emerged, in which an electrical current serves as either electron donor or acceptor for oil spill bioremediation. BES are highly controllable and can possibly also serve as biosensors for real time monitoring of the degradation process. Despite being promising, multiple aspects need to be considered to make BES suitable for field applications including system design, electrode materials, operational parameters, mode of action and radius of influence. The microbiological processes, involved in bioelectrochemical contaminant degradation, are currently not fully understood, particularly in relation to electron transfer mechanisms. Especially in sulfate rich environments, the sulfur cycle appears pivotal during hydrocarbon oxidation. This review provides a comprehensive analysis of the research on bioelectrochemical remediation of oil spills and of the key parameters involved in the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
A cooperative strategy for parameter estimation in large scale systems biology models
2012-01-01
Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
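To make the cooperation idea above concrete, the following is a minimal Python sketch in which several independent search "threads" explore the parameter space and periodically share their best solution. It assumes a toy Rosenbrock cost in place of a real model-calibration objective and a simple random-perturbation search in place of the enhanced Scatter Search, so it illustrates the information-sharing scheme rather than the published eSS algorithm.

```python
# Minimal sketch of cooperative search: independent searchers improve their
# own solutions and occasionally adopt the globally best one.
import numpy as np

def cost(p):
    # Rosenbrock function as a stand-in for a model-calibration cost.
    return float(np.sum(100.0 * (p[1:] - p[:-1] ** 2) ** 2 + (1.0 - p[:-1]) ** 2))

def search_step(best_p, best_f, rng, step=0.1):
    # One local improvement attempt around the current best point.
    cand = best_p + rng.normal(scale=step, size=best_p.size)
    f = cost(cand)
    return (cand, f) if f < best_f else (best_p, best_f)

rng = np.random.default_rng(0)
n_threads, dim, n_epochs, steps_per_epoch = 4, 5, 50, 200
points = [rng.uniform(-2.0, 2.0, dim) for _ in range(n_threads)]
values = [cost(p) for p in points]

for epoch in range(n_epochs):
    for i in range(n_threads):            # each "thread" searches independently
        for _ in range(steps_per_epoch):
            points[i], values[i] = search_step(points[i], values[i], rng)
    # Cooperation: the current overall best solution is shared, and badly
    # lagging threads restart their search from it.
    best = int(np.argmin(values))
    for i in range(n_threads):
        if values[i] > 10.0 * values[best]:
            points[i], values[i] = points[best].copy(), values[best]

print("best cost after cooperative search:", min(values))
```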
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
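As a worked illustration of the establishment threshold described above, the snippet below computes a basic reproductive rate and checks whether it is at least one. The density-dependent SIR-style relation R0 = beta * N / gamma used here, and all numerical values, are illustrative assumptions rather than the formulation or estimates of the cited swine fever study.

```python
# Establishment check: an infection can establish when R0 >= 1.

def basic_reproductive_rate(beta, pop_size, removal_rate):
    """R0 for a simple density-dependent SIR-type model (an assumed form)."""
    return beta * pop_size / removal_rate

# Hypothetical values: transmission coefficient, host population size, and the
# combined rate at which infectious hosts are removed (recovery + mortality).
r0 = basic_reproductive_rate(beta=0.002, pop_size=800, removal_rate=0.8)
print(f"R0 = {r0:.2f} -> establishment {'expected' if r0 >= 1 else 'not expected'}")
```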
Safety monitoring and reactor transient interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hench, J. E.; Fukushima, T. Y.
1983-12-20
An apparatus which monitors a subset of control panel inputs in a nuclear reactor power plant, the subset being those indicators of plant status which are of a critical nature during an unusual event. A display (10) is provided for displaying primary information (14) as to whether the core is covered and likely to remain covered, including information as to the status of subsystems needed to cool the core and maintain core integrity. Secondary display information (18,20) is provided which can be viewed selectively for more detailed information when an abnormal condition occurs. The primary display information has messages (24) for prompting an operator as to which one of a number of pushbuttons (16) to press to bring up the appropriate secondary display (18,20). The apparatus utilizes a thermal-hydraulic analysis to more accurately determine key parameters (such as water level) from other measured parameters, such as power, pressure, and flow rate.
Lorget, Florence; Parenteau, Audrey; Carrier, Michel; Lambert, Daniel; Gueorguieva, Ana; Schuetz, Chris; Bantseev, Vlad; Thackaberry, Evan
2016-09-06
Many long-acting delivery strategies for ocular indications rely on pH- and/or temperature-driven release of the therapeutic agent and degradation of the drug carrier. Yet, these physiological parameters are poorly characterized in ocular animal models. These strategies aim at reducing the frequency of dosing, which is of particular interest for the treatment of chronic disorders affecting the posterior segment of the eye, such as macular degeneration that warrants monthly or every other month intravitreal injections. We used anesthetized white New Zealand rabbits, Yucatan mini pigs, and cynomolgus monkeys to characterize pH and temperature in several vitreous locations and the central aqueous location. We also established post mortem pH changes in the vitreous. Our data showed regional and species differences, which need to be factored into strategies for developing biodegradable long-acting delivery systems.
Cost of ownership for inspection equipment
NASA Astrophysics Data System (ADS)
Dance, Daren L.; Bryson, Phil
1993-08-01
Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of defect size and type are not included in current CoO models, yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
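A minimal sketch of how detection sensitivity can be folded into a cost-of-ownership figure, along the lines argued above: the per-wafer cost combines an annualised capital share and operating cost with penalty terms weighted by the alpha risk (false alarms triggering needless review) and beta risk (missed defects escaping downstream). The cost structure, function name and all numbers are assumptions for illustration, not the paper's CoO model.

```python
# Toy cost-of-ownership calculation for an inspection tool, in dollars per wafer.

def inspection_cost_per_wafer(capital_cost, years, wafers_per_year,
                              operating_cost_per_wafer,
                              alpha_risk, review_cost,
                              beta_risk, escape_cost):
    fixed = capital_cost / (years * wafers_per_year)   # annualised capital share
    false_alarm = alpha_risk * review_cost             # needless reviews (alpha risk)
    escaped_defect = beta_risk * escape_cost           # missed defects downstream (beta risk)
    return fixed + operating_cost_per_wafer + false_alarm + escaped_defect

cost = inspection_cost_per_wafer(capital_cost=2.0e6, years=5,
                                 wafers_per_year=100_000,
                                 operating_cost_per_wafer=1.5,
                                 alpha_risk=0.02, review_cost=10.0,
                                 beta_risk=0.01, escape_cost=200.0)
print(f"cost of ownership: ${cost:.2f} per wafer")
```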
In-Situ Waviness Characterization of Metal Plates by a Lateral Shearing Interferometric Profilometer
Frade, María; Enguita, José María; Álvarez, Ignacio
2013-01-01
Characterizing waviness in sheet metal is a key process for quality control in many industries, such as automotive and home appliance manufacturing. However, there is still no known technique able to work in an automated in-floor inspection system. The literature describes many techniques developed in the last three decades, but most of them are either slow, only able to work in laboratory conditions, need very short (unsafe) working distances, or are only able to estimate certain waviness parameters. In this article we propose the use of a lateral shearing interferometric profilometer, which is able to obtain a 19 mm profile in a single acquisition, with sub-micron precision, in an uncontrolled environment, and from a working distance greater than 90 mm. This system allows direct measurement of all needed waviness parameters even with objects in movement. We describe a series of experiments over several samples of steel plates to validate the sensor and the processing method, and the results are in close agreement with those obtained with a contact stylus device. The sensor is an ideal candidate for on-line or in-machine fast automatic waviness assessment, reducing delays and costs in many metalworking processes. PMID:23584120
Frade, María; Enguita, José María; Alvarez, Ignacio
2013-04-12
Characterizing waviness in sheet metal is a key process for quality control in many industries, such as automotive and home appliance manufacturing. However, there is still no known technique able to work in an automated in-floor inspection system. The literature describes many techniques developed in the last three decades, but most of them are either slow, only able to work in laboratory conditions, need very short (unsafe) working distances, or are only able to estimate certain waviness parameters. In this article we propose the use of a lateral shearing interferometric profilometer, which is able to obtain a 19 mm profile in a single acquisition, with sub-micron precision, in an uncontrolled environment, and from a working distance greater than 90 mm. This system allows direct measurement of all needed waviness parameters even with objects in movement. We describe a series of experiments over several samples of steel plates to validate the sensor and the processing method, and the results are in close agreement with those obtained with a contact stylus device. The sensor is an ideal candidate for on-line or in-machine fast automatic waviness assessment, reducing delays and costs in many metalworking processes.
Key parameters design of an aerial target detection system on a space-based platform
NASA Astrophysics Data System (ADS)
Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng
2018-02-01
To ensure flight safety of an aerial aircraft and avoid recurrence of aircraft collisions, a method of multi-information fusion is proposed to design the key parameters for realizing aircraft target detection on a space-based platform. The key parameters of detection waveband and spatial resolution were determined using the target-background absolute contrast, target-background relative contrast, and signal-to-clutter ratio. This study also presented the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distances are 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands, respectively, for most aircraft detection, and that the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on the detection efficiency, especially in the MWIR bands.
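For readers unfamiliar with the contrast metrics named above, here is a small illustrative computation of absolute contrast, relative contrast and signal-to-clutter ratio for a synthetic background scene. The simple definitions used (target minus mean background, that difference normalised by the mean background, and the absolute contrast divided by the clutter standard deviation) are common conventions assumed here; they are not necessarily the exact expressions of the cited design study.

```python
# Target-background contrast and signal-to-clutter ratio on a synthetic scene.
import numpy as np

background = np.random.default_rng(1).normal(loc=5.0, scale=0.4, size=(64, 64))
target_radiance = 7.5   # hypothetical in-band radiance of the aircraft pixel

absolute_contrast = target_radiance - background.mean()
relative_contrast = absolute_contrast / background.mean()
scr = absolute_contrast / background.std()

print(f"absolute contrast: {absolute_contrast:.2f}")
print(f"relative contrast: {relative_contrast:.2%}")
print(f"signal-to-clutter ratio: {scr:.1f}")
```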
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
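The decryption step in schemes of this kind rests on index additivity: applying the transform with parameter a and then with parameter b is equivalent to applying it with a + b, so the receiver recovers the plaintext by applying the transform with the negated secret parameters. The sketch below demonstrates that property with a generic random unitary matrix raised to fractional powers through its eigendecomposition; it is a stand-in for, not an implementation of, the multiple-parameter discrete fractional random transform.

```python
# Index additivity demo: T^a followed by T^(-a) recovers the plaintext.
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Random unitary matrix (QR of a complex Gaussian matrix) as a generic transform.
u, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
w, v = np.linalg.eig(u)                       # U = V diag(w) V^-1

def fractional_power(alpha):
    # Fractional order alpha of the transform via its eigen-decomposition.
    return v @ np.diag(w ** alpha) @ np.linalg.inv(v)

key = 0.37                                     # secret fractional order
plaintext = rng.normal(size=n)
ciphertext = fractional_power(key) @ plaintext
recovered = fractional_power(-key) @ ciphertext   # T^a T^(-a) = identity

print("max reconstruction error:", np.max(np.abs(recovered - plaintext)))
```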
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells, as well as using the input and output of ABM to build up a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like the DE model. Therefore, this study innovatively developed a complex system development mechanism that can simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of the model like DE by fitting the experimental data. PMID:26535589
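A highly simplified sketch of the IABMR workflow described above: a stochastic agent-based model is run over a grid of candidate parameter values, the noisy parameter-to-output relation is smoothed with a Loess regression, and the parameter whose smoothed output best matches the observation is selected. The toy birth-process ABM, the single parameter, and the grid-plus-argmin selection (a one-dimensional stand-in for the paper's greedy search over several parameters) are all assumptions for illustration.

```python
# ABM runs + Loess smoothing + parameter selection against an observation.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

def toy_abm(growth_rate, n_steps=30, n_agents=50):
    # Each agent divides with probability growth_rate per step; returns final count.
    count = n_agents
    for _ in range(n_steps):
        count += rng.binomial(count, growth_rate)
    return count

observed_count = 4000                        # hypothetical experimental readout
candidates = np.linspace(0.05, 0.25, 200)    # candidate growth rates
outputs = np.array([toy_abm(r) for r in candidates])

# Loess regression of ABM output against the parameter smooths out ABM noise.
smoothed = lowess(outputs, candidates, frac=0.3, return_sorted=False)

# Select the candidate whose smoothed output is closest to the observation.
best = candidates[np.argmin(np.abs(smoothed - observed_count))]
print(f"estimated growth rate: {best:.3f}")
```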
NASA Astrophysics Data System (ADS)
Faivre, R.; Colin, J.; Menenti, M.; Lindenbergh, R.; Van Den Bergh, L.; Yu, H.; Jia, L.; Xin, L.
2010-10-01
Improving the understanding and the monitoring of the hydrology of high-elevation regions is of major relevance from both societal and environmental points of view for many Asian countries, in particular in terms of flood and drought, but also in terms of food security in a changing environment. Satellite and airborne remote sensing technologies are of utmost importance for such a challenge. Existing imaging spectro-radiometers, radars, microwave radiometers and backscatter LIDAR provide a very comprehensive suite of measurements over a wide range of wavelengths, time frequencies and spatial resolutions. New algorithms are, however, needed to convert these radiometric measurements into useful eco-hydrological quantitative parameters for hydrological modeling and water management. The DRAGON II project entitled Key Eco-Hydrological Parameters Retrieval and Land Data Assimilation System Development in a Typical Inland River Basin of China's Arid Region (ID 5322) aims at improving the monitoring, understanding, and predictability of hydrological and ecological processes at catchment scale, and at promoting the applicability of quantitative remote sensing in watershed science. Existing Earth Observation platforms provided by the European Space Agency as well as prototype airborne systems developed in China - ENVISAT/AATSR, ALOS/PRISM and PALSAR, Airborne LIDAR - are used and combined to retrieve advanced land surface physical properties over high elevation arid regions of China. The existing synergies between this project, the CEOP-AEGIS project (FP7) and the WATER project (CAS) provide incentives for innovative studies. The investigations presented in the following report focus on the development of advanced and innovative methodologies and algorithms to monitor both the state and the trend of key eco-hydrological variables: 3D vegetation properties, land surface evaporation, glacier mass balance and drought indicators.
Ghunmi, Lina Abu; Zeeman, Grietje; van Lier, Jules; Fayyed, Manar
2008-01-01
The objective of this work is to assess the potentials and requirements for grey water reuse in Jordan. The results revealed that urban, rural and dormitory grey water production rates and concentrations of TS, BOD(5), COD and pathogens varied between 18-66 L cap(-1)d(-1), 848-1,919, 200-1,056, and 560-2,568 mg L(-1) and 6.9E2-2.7E5 CFU mL(-1), respectively. The grey water comprises 64 to 85% of the total water flow in the rural and urban areas. Storing grey water is inevitable to meet reuse requirements in terms of volume and timing. All the studied grey waters need treatment, in terms of solids, BOD(5), COD and pathogens, before storage and reuse. Storage and physical treatment as a pretreatment step should be avoided, since it produces unstable effluents and non-stabilized sludge. However, extensive biological treatment can combine storage and physical treatment. Furthermore, a batch-fed biological treatment system combining anaerobic and aerobic processes copes with the fluctuations in the hydrographs and pollutographs as well as the nutrients present. The inorganic content of grey water in Jordan is close to drinking water quality and does not need treatment. Moreover, the grey water SAR values were 3-7, revealing that the concentrations of monovalent and divalent cations comply with agricultural demand in Jordan. The observed patterns in the hydrographs and pollutographs showed that the hydraulic load could be used for the design of both physical and biological treatment units for dormitories and hotels. For family houses, the hydraulic load was identified as the key design parameter for physical treatment units and the organic load as the key design parameter for biological treatment units. Copyright IWA Publishing 2008.
Zradziński, Patryk
2013-06-01
According to international guidelines, the assessment of biophysical effects of exposure to electromagnetic fields (EMF) generated by hand-operated sources requires the evaluation of the induced electric field (E(in)) or specific energy absorption rate (SAR) caused by EMF inside a worker's body, and is usually done by numerical simulations with different protocols applied to these two exposure cases. The crucial element of these simulations is the numerical phantom of the human body. Procedures for E(in) and SAR evaluation for compliance analysis with exposure limits have been defined in Institute of Electrical and Electronics Engineers standards and International Commission on Non-Ionizing Radiation Protection guidelines, but a detailed specification of human body phantoms has not been described. An analysis was performed of the properties of over 30 human body numerical phantoms that have been used in recently published investigations related to the assessment of EMF exposure from various sources. The differences in applicability of these phantoms in the evaluation of E(in) and SAR while operating industrial devices and SAR while using mobile communication handsets are discussed. The whole human body numerical phantom dimensions, posture, spatial resolution and electric contact with the ground constitute the key parameters in modeling the exposure related to industrial devices, while modeling the exposure from mobile communication handsets, which needs only to represent the exposed part of the human body nearest to the handset, mainly depends on the spatial resolution of the phantom. The specification and standardization of these parameters of numerical human body phantoms are key requirements to achieve comparable and reliable results from numerical simulations carried out for compliance analysis against exposure limits or within the exposure assessment in EMF-related epidemiological studies.
Keeper, D M; Kerrisk, K L; House, J K; Garcia, S C; Thomson, P
2017-09-01
To determine the management practices utilised in automatic milking systems (AMS) that affect reproductive management and performance and how these compare with the management practices used in regionally proximal conventional milking systems (CMS). This study examined demographic and management data from AMS and CMS dairy farms through a survey, with a specific focus on reproductive management procedures. Overall, responses from AMS and CMS dairy farms showed little difference in terms of respondent demographics, farm size, herd structure and most farm management strategies. AMS dairies were more likely to use activity meters or other electronic oestrus detection aids than CMS dairies (P < 0.001) and were also more likely to have changed to electronic recording systems (P = 0.007). Although many respondents indicated that they used key monitoring parameters to assess reproductive performance (e.g. days in milk, conception vs pregnancy rate etc.), the format of responses varied significantly, indicating a relatively widespread (among the respondents) lack of knowledge regarding the meaning and usage of some of these common parameters/terminology. Ultimately, reproductive management practices of AMS dairies were largely similar to those of CMS dairies, indicating that such practices can be implemented in a practical sense, even though the resultant reproductive performance is not yet understood. Understanding that the key reproductive management strategies do not need to change vastly is important to ensure that new adoptees are well informed. Further work is needed to objectively measure AMS performance to increase the knowledge base and generate the confidence that will facilitate further adoption of this innovation. © 2017 Australian Veterinary Association.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Factors Influencing Renewable Energy Production & Supply - A Global Analysis
NASA Astrophysics Data System (ADS)
Ali, Anika; Saqlawi, Juman Al
2016-04-01
Renewable energy is one of the key technologies through which the energy needs of the future can be met in a sustainable and carbon-neutral manner. Increasing the share of renewable energy in the total energy mix of each country is therefore a critical need. While different countries have approached this in different ways, there are some common aspects which influence the pace and effectiveness of renewable energy incorporation. This presentation looks at data and information from 34 selected countries, analyses the patterns, compares the different parameters and identifies the common factors which positively influence renewable energy incorporation. The most successful countries are analysed for their renewable energy performance against their GDP, policy/regulatory initiatives in the field of renewables, landmass, climatic conditions and population to identify the most influencing factors to bring about positive change in renewable energy share.
Macquarrie, K T B; Mayer, K U; Jin, B; Spiessl, S M
2010-03-01
Redox evolution in sparsely fractured crystalline rocks is a key, and largely unresolved, issue when assessing the geochemical suitability of deep geological repositories for nuclear waste. Redox zonation created by the influx of oxygenated waters has previously been simulated using reactive transport models that have incorporated a variety of processes, resulting in predictions for the depth of oxygen penetration that may vary greatly. An assessment and direct comparison of the various underlying conceptual models are therefore needed. In this work a reactive transport model that considers multiple processes in an integrated manner is used to investigate the ingress of oxygen for both single fracture and fracture zone scenarios. It is shown that the depth of dissolved oxygen migration is greatly influenced by the a priori assumptions that are made in the conceptual models. For example, the ability of oxygen to access and react with minerals in the rock matrix may be of paramount importance for single fracture conceptual models. For fracture zone systems, the abundance and reactivity of minerals within the fractures and thin matrix slabs between the fractures appear to provide key controls on O(2) attenuation. The findings point to the need for improved understanding of the coupling between the key transport-reaction feedbacks to determine which conceptual models are most suitable and to provide guidance for which parameters should be targeted in field and laboratory investigations. Copyright 2009 Elsevier B.V. All rights reserved.
Neural Synchronization and Cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas
2007-11-01
Neural networks can synchronize by learning from each other. In the case of discrete weights, full synchronization is achieved in a finite number of steps. Additional networks can be trained by using the inputs and outputs generated during this process as examples. Several learning rules for both tasks are presented and analyzed. In the case of Tree Parity Machines, synchronization is much faster than learning. Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between both processes can be controlled by changing the synaptic depth. In the case of bidirectional interaction the synchronization time increases proportionally to the square of this parameter, but it grows exponentially if information is transmitted in one direction only. Because of this effect, neural synchronization can be used to construct a cryptographic key-exchange protocol. Here the partners benefit from mutual interaction, so that a passive attacker is usually unable to learn the generated key in time. The success probabilities of different attack methods are determined by numerical simulations and scaling laws are derived from the data. They show that the partners can reach any desired level of security by just increasing the synaptic depth. Then the complexity of a successful attack grows exponentially, but there is only a polynomial increase of the effort needed to generate a key. Further improvements of security are possible by replacing the random inputs with queries generated by the partners.
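A compact sketch of the neural key-exchange idea summarised above: two Tree Parity Machines receive the same public random inputs, exchange only their ±1 outputs, and apply a Hebbian update whenever those outputs agree, until their weight matrices coincide and can serve as a shared key. The network sizes (K, N) and synaptic depth L below are small illustrative choices; a real protocol would use larger values and additional safeguards.

```python
# Tree Parity Machine synchronization with the Hebbian learning rule.
import numpy as np

K, N, L = 3, 10, 3          # hidden units, inputs per unit, synaptic depth
rng = np.random.default_rng(0)

def output(weights, x):
    sigma = np.sign(np.sum(weights * x, axis=1))
    sigma[sigma == 0] = -1                      # break ties consistently
    return sigma, int(np.prod(sigma))

def hebbian_update(weights, x, sigma, tau):
    # Only hidden units that agree with the common output are updated.
    for k in range(K):
        if sigma[k] == tau:
            weights[k] = np.clip(weights[k] + tau * x[k], -L, L)

w_a = rng.integers(-L, L + 1, size=(K, N))
w_b = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(w_a, w_b) and steps < 100_000:
    x = rng.choice([-1, 1], size=(K, N))        # public random inputs
    sig_a, tau_a = output(w_a, x)
    sig_b, tau_b = output(w_b, x)
    if tau_a == tau_b:                          # mutual learning step
        hebbian_update(w_a, x, sig_a, tau_a)
        hebbian_update(w_b, x, sig_b, tau_b)
    steps += 1

if np.array_equal(w_a, w_b):
    print(f"synchronised after {steps} steps; shared key material: {w_a.flatten()}")
else:
    print("not synchronised within the step budget")
```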
The longevity of lava dome eruptions
NASA Astrophysics Data System (ADS)
Wolpert, Robert L.; Ogburn, Sarah E.; Calder, Eliza S.
2016-02-01
Understanding the duration of past, ongoing, and future volcanic eruptions is an important scientific goal and a key societal need. We present a new methodology for forecasting the duration of ongoing and future lava dome eruptions based on a database (DomeHaz) recently compiled by the authors. The database includes duration and composition for 177 such eruptions, with "eruption" defined as the period encompassing individual episodes of dome growth along with associated quiescent periods during which extrusion pauses but unrest continues. In a key finding, we show that probability distributions for dome eruption durations are both heavy tailed and composition dependent. We construct objective Bayesian statistical models featuring heavy-tailed Generalized Pareto distributions with composition-specific parameters to make forecasts about the durations of new and ongoing eruptions that depend on both eruption duration to date and composition. Our Bayesian predictive distributions reflect both uncertainty about model parameter values (epistemic uncertainty) and the natural variability of the geologic processes (aleatoric uncertainty). The results are illustrated by presenting likely trajectories for 14 dome-building eruptions ongoing in 2015. Full representation of the uncertainty is presented for two key eruptions, Soufrière Hills Volcano in Montserrat (10-139 years, median 35 years) and Sinabung, Indonesia (1-17 years, median 4 years). Uncertainties are high but, importantly, quantifiable. This work provides for the first time a quantitative and transferable method and rationale on which to base long-term planning decisions for lava dome-forming volcanoes, with wide potential use and transferability to forecasts of other types of eruptions and other adverse events across the geohazard spectrum.
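The forecasting logic sketched above can be illustrated with a fixed Generalized Pareto distribution: conditioning on the time an eruption has already lasted simply rescales the survival function. The shape and scale values below are made-up placeholders rather than the composition-specific fits of the DomeHaz analysis, and the published method is fully Bayesian, averaging such calculations over the posterior of the parameters.

```python
# Conditional duration forecast from a heavy-tailed Generalized Pareto model.
from scipy.stats import genpareto

shape, scale = 0.5, 8.0           # hypothetical GPD parameters (durations in years)
elapsed = 10.0                    # eruption already ongoing for 10 years

dist = genpareto(c=shape, scale=scale)

def conditional_quantile(q, elapsed):
    # Solve P(T <= t | T > elapsed) = q using the survival function.
    target_sf = (1.0 - q) * dist.sf(elapsed)
    return dist.isf(target_sf)

median_total = conditional_quantile(0.5, elapsed)
p95_total = conditional_quantile(0.95, elapsed)
print(f"median forecast total duration: {median_total:.1f} years")
print(f"95th percentile total duration: {p95_total:.1f} years")
```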
Futeran, Shuli; Draper, Brian M
2012-01-01
To describe the needs of patients aged 50 years and over with chronic mental illness being case managed within a public mental health service, and to determine factors that influence these needs. Patients were recruited from community-based Adult Mental Health (AMH) teams and Specialist Mental Health Services for Older People (SMHSOP) teams. Eligibility criteria included a diagnosis of schizophrenia or mood disorder. Patient, carer and key worker interviews were carried out using the Camberwell Assessment of Need for the Elderly (CANE). Of 183 eligible patients, 97 (mean age of 66.4 years) participated, of whom 63 were managed by AMH teams and 34 by SMHSOP teams. The majority (52%) had a diagnosis of schizophrenia, particularly those managed by AMH (71%). Patients self-rated fewer needs overall on the CANE than their key workers or the researcher, and also rated a higher proportion of their needs being met (83%) than the key worker (77%) or researcher (76%). From each perspective, over 80% of psychiatric and around 95% of identified medical needs were being met. The majority of social needs were unmet, with patients reporting only 42%, and key workers only 33%, met needs. The key unmet social needs were company, daily activities and having a close confidant. Key workers, patients and researchers rated SMHSOP service delivery to have significantly less unmet needs. The social needs of older patients with chronic mental illness require greater attention by public mental health services.
Image-Guided Surgery using Invisible Near-Infrared Light: Fundamentals of Clinical Translation
Gioux, Sylvain; Choi, Hak Soo; Frangioni, John V.
2011-01-01
The field of biomedical optics has matured rapidly over the last decade and is poised to make a significant impact on patient care. In particular, wide-field (typically > 5 cm), planar, near-infrared (NIR) fluorescence imaging has the potential to revolutionize human surgery by providing real-time image guidance to surgeons for tissue that needs to be resected, such as tumors, and tissue that needs to be avoided, such as blood vessels and nerves. However, to become a clinical reality, optimized imaging systems and NIR fluorescent contrast agents will be needed. In this review, we introduce the principles of NIR fluorescence imaging, analyze existing NIR fluorescence imaging systems, and discuss the key parameters that guide contrast agent development. We also introduce the complexities surrounding clinical translation using our experience with the Fluorescence-Assisted Resection and Exploration (FLARE™) imaging system as an example. Finally, we introduce state-of-the-art optical imaging techniques that might someday improve image-guided surgery even further. PMID:20868625
Asteroid Redirection Mission Evaluation Using Multiple Landers
NASA Astrophysics Data System (ADS)
Bazzocchi, Michael C. F.; Emami, M. Reza
2018-06-01
In this paper, a low-thrust tugboat redirection method is assessed using multiple spacecraft for a target range of small near-Earth asteroids. The benefits of a landed configuration of tugboat spacecraft in formation are examined for the redirection of a near-Earth asteroid. The tugboat method uses a gimballed thruster with a highly collimated ion beam to generate a thrust on the asteroid. The target asteroid range focuses on near-Earth asteroids smaller than 150 m in diameter, and carbonaceous (C-type) asteroids, due to the volatiles available for in-situ utilization. The assessment focuses primarily on the three key parameters, i.e., the asteroid mass redirected, the timeframe for redirection, and the overall system cost. An evaluation methodology for each parameter is discussed in detail, and the parameters are employed to determine the expected return and feasibility of the redirection mission. The number of spacecraft employed is optimized along with the electrical power needed for each spacecraft to ensure the highest possible return on investment. A discussion of the optimization results and the benefits of spacecraft formation for the tugboat method are presented.
Lago, Laura; Rilo, Benito; Fernández-Formoso, Noelia; DaSilva, Luis
2017-08-01
Rehabilitation with implants is a challenge. Having previous evaluation criteria is key to establishing the best treatment for the patient. In addition to clinical and radiological aspects, the prosthetic parameters must be taken into account in the initial workup, since they allow discrimination between fixed and removable rehabilitation. We present a study protocol that analyzes three basic prosthetic aspects. First, denture space defines the need to replace teeth, tissue, or both. Second, lip support focuses on whether or not to include a flange. Third, the smile line warns of potential risks in esthetic rehabilitation. Combining these parameters allows us to make a decision as to the most suitable type of prosthesis. The proposed protocol is useful for assessing the prosthetic parameters that influence decision making as to the best-suited type of restoration. From this point of view, we think it is appropriate for the initial approach to the patient. In any case, other considerations of study may amend the proposal. © 2016 by the American College of Prosthodontists.
Martel, D; Guerra, A; Turek, P; Weiss, J; Vileno, B
2016-04-01
In the field of solar fuel cells, the development of efficient photo-converting semiconductors remains a major challenge. A rational analysis of experimental photocatalytic results obtained with material in colloidal suspensions is needed to access the fundamental knowledge required to improve the design and properties of new materials. In this study, a simple electron donor/nano-TiO2 system is considered and examined via spin-scavenging electron paramagnetic resonance as well as a panel of analytical techniques (composition, optical spectroscopy and dynamic light scattering) for selected types of nano-TiO2. Independent variables (pH, electron donor concentration and TiO2 amount) have been varied and interdependent variables (aggregate size, aggregate surface vs. volume and acid/base group distribution) are discussed. This work shows that reliable understanding involves a thoughtful combination of interdependent parameters, whereas the specific surface area does not seem to be a pertinent parameter. The conclusion emphasizes the difficulty of identifying the key features of the mechanisms governing photocatalytic properties in nano-TiO2. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dupret, M.-A.; De Ridder, J.; De Cat, P.; Aerts, C.; Scuflaire, R.; Noels, A.; Thoul, A.
2003-02-01
We present an improved version of the method of photometric mode identification of Heynderickx et al. Our new version is based on the inclusion of precise non-adiabatic eigenfunctions determined in the outer stellar atmosphere according to the formalism recently proposed by Dupret et al. Our improved photometric mode identification technique is therefore no longer dependent on ad hoc parameters for the non-adiabatic effects. It contains the complete physical conditions of the outer atmosphere of the star, provided that rotation does not play a key role. We apply our method to the two slowly pulsating B stars HD 74560 and HD 138764 and to the beta Cephei star EN (16) Lac. Besides identifying the degree l of the pulsating stars, our method is also a tool for improving the knowledge of stellar interiors and atmospheres, by imposing constraints on parameters such as the metallicity and the mixing-length parameter alpha (a procedure we label non-adiabatic asteroseismology). The non-adiabatic eigenfunctions needed for the mode identification are available upon request from the authors.
Asteroid Redirection Mission Evaluation Using Multiple Landers
NASA Astrophysics Data System (ADS)
Bazzocchi, Michael C. F.; Emami, M. Reza
2018-01-01
In this paper, a low-thrust tugboat redirection method is assessed using multiple spacecraft for a target range of small near-Earth asteroids. The benefits of a landed configuration of tugboat spacecraft in formation are examined for the redirection of a near-Earth asteroid. The tugboat method uses a gimballed thruster with a highly collimated ion beam to generate a thrust on the asteroid. The target asteroid range focuses on near-Earth asteroids smaller than 150 m in diameter, and carbonaceous (C-type) asteroids, due to the volatiles available for in-situ utilization. The assessment focuses primarily on the three key parameters, i.e., the asteroid mass redirected, the timeframe for redirection, and the overall system cost. An evaluation methodology for each parameter is discussed in detail, and the parameters are employed to determine the expected return and feasibility of the redirection mission. The number of spacecraft employed is optimized along with the electrical power needed for each spacecraft to ensure the highest possible return on investment. A discussion of the optimization results and the benefits of spacecraft formation for the tugboat method are presented.
The left ventricle in aortic stenosis--imaging assessment and clinical implications.
Călin, Andreea; Roşca, Monica; Beladan, Carmen Cristiana; Enache, Roxana; Mateescu, Anca Doina; Ginghină, Carmen; Popescu, Bogdan Alexandru
2015-04-29
Aortic stenosis has an increasing prevalence in the context of an aging population. In these patients, non-invasive imaging allows not only the grading of valve stenosis severity, but also the assessment of left ventricular function. These two goals play a key role in clinical decision-making. Although left ventricular ejection fraction is currently the only left ventricular function parameter that guides intervention, current imaging techniques are able to detect early changes in LV structure and function even in asymptomatic patients with significant aortic stenosis and preserved ejection fraction. Moreover, new imaging parameters have emerged as predictors of disease progression in patients with aortic stenosis. Although proper standardization and confirmatory data from large prospective studies are needed, these novel parameters have the potential of becoming useful tools in guiding intervention in asymptomatic patients with aortic stenosis and stratifying risk in symptomatic patients undergoing aortic valve replacement. This review focuses on the mechanisms of transition from compensatory left ventricular hypertrophy to left ventricular dysfunction and heart failure in aortic stenosis and the role of non-invasive imaging assessment of the left ventricular geometry and function in these patients.
NASA Astrophysics Data System (ADS)
Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.
2018-05-01
Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process. Determining the behavior of the material under thermoforming conditions is one of the key parameters for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered thermo-dependent. The strain-dependence effect is introduced using a Prony series. Tensile tests were carried out at different temperatures and strain rates. The material parameters were then identified using an NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
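The parameter-identification step can be illustrated with a much simpler ingredient set than the paper's: below, a global evolutionary optimizer (SciPy's differential_evolution, used here as a readily available stand-in for NSGA-II) fits a toy temperature- and rate-dependent flow-stress law to a hypothetical tensile test matrix. The material law, test data and bounds are assumptions for illustration, not the thermo-elasto-viscoplastic model implemented in ABAQUS.

```python
# Fit a toy flow-stress law sigma = K0 * exp(-a*T) * rate**m to tensile data.
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical test matrix: (temperature degC, strain rate 1/s, measured stress MPa)
tests = np.array([
    [120.0, 0.01, 1.9],
    [120.0, 0.10, 2.6],
    [140.0, 0.01, 1.2],
    [140.0, 0.10, 1.6],
])

def model_stress(params, temperature, rate):
    k0, a, m = params
    return k0 * np.exp(-a * temperature) * rate ** m

def misfit(params):
    predicted = model_stress(params, tests[:, 0], tests[:, 1])
    return np.sum((predicted - tests[:, 2]) ** 2)

bounds = [(1.0, 100.0), (0.0, 0.1), (0.0, 0.5)]    # search ranges for K0, a, m
result = differential_evolution(misfit, bounds, seed=0, tol=1e-8)
print("identified parameters (K0, a, m):", np.round(result.x, 3))
```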
Flexible parameter-sparse global temperature time profiles that stabilise at 1.5 and 2.0 °C
NASA Astrophysics Data System (ADS)
Huntingford, Chris; Yang, Hui; Harper, Anna; Cox, Peter M.; Gedney, Nicola; Burke, Eleanor J.; Lowe, Jason A.; Hayman, Garry; Collins, William J.; Smith, Stephen M.; Comyn-Platt, Edward
2017-07-01
The meeting of the United Nations Framework Convention on Climate Change (UNFCCC) in December 2015 committed parties at the convention to hold the rise in global average temperature to well below 2.0 °C above pre-industrial levels. It also committed the parties to pursue efforts to limit warming to 1.5 °C. This leads to two key questions. First, what extent of emissions reduction will achieve either target? Second, what is the benefit of the reduced climate impacts from keeping warming at or below 1.5 °C? To provide answers, climate model simulations need to follow trajectories consistent with these global temperature limits. It is useful to operate models in an inverse mode to make model-specific estimates of greenhouse gas (GHG) concentration pathways consistent with the prescribed temperature profiles. Further inversion derives related emissions pathways for these concentrations. For this to happen, and to enable climate research centres to compare GHG concentrations and emissions estimates, common temperature trajectory scenarios are required. Here we define algebraic curves that asymptote to a stabilised limit, while also matching the magnitude and gradient of recent warming levels. The curves are deliberately parameter-sparse, needing the prescription of just two parameters plus the final temperature. Yet despite this simplicity, they can allow for temperature overshoot and for generational changes, for which more effort to decelerate warming change needs to be made by future generations. The curves capture temperature profiles from the existing Representative Concentration Pathway (RCP2.6) scenario projections by a range of different Earth system models (ESMs), which have warming amounts towards the lower levels of those that society is discussing.
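A minimal sketch of a parameter-sparse profile in the spirit described above: a curve pinned to today's warming level and warming rate that relaxes exponentially towards a prescribed stabilisation limit. The exponential form and the numbers are illustrative assumptions; the published curves use a more flexible algebraic family that also accommodates overshoot and generational changes.

```python
# Temperature profile matching current warming and rate, asymptoting to a limit.
import numpy as np

T0, gradient0 = 1.1, 0.02    # assumed current warming (degC) and rate (degC/yr)
T_limit = 1.5                # stabilisation target (degC above pre-industrial)

# Decay constant chosen so the curve's initial slope equals the observed rate.
mu = gradient0 / (T_limit - T0)

def warming(years_from_now):
    return T_limit - (T_limit - T0) * np.exp(-mu * years_from_now)

for yr in (0, 10, 30, 50, 100):
    print(f"+{yr:3d} yr: {warming(yr):.2f} degC")
```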
Commercialization Issues For Catheter-Based Electrochemical Sensors
NASA Astrophysics Data System (ADS)
Nikolchev, Julian; Gaisford, Scott
1989-08-01
The need for continuous monitoring of key clinical parameters in hospitals is well recognized. Figure 1 shows typical time constants for blood gases, ions and enzymes in response to acute ventilatory changes and interventions. Although it can be seen that relatively low rates of data collection are necessary for many medical measurements, it is also clear that intermittent measurement of PO2, PCO2 and pH is not sufficient to provide safe and effective management of the patient. Very frequent or continuous monitoring is often essential. This figure also shows why a large number of research efforts in this country, in Europe and in Japan have as their goal the development of continuous blood gas sensors, i.e., sensors that continuously monitor blood pH, partial pressure of oxygen and partial pressure of carbon dioxide. These are three of the most frequently measured parameters in hospitals and the ones having the shortest time constants. Considering that in the United States alone close to 25 million blood gas samples per year are taken from patients, the potential market for continuous monitoring sensors is enormous. The emergence of microelectronics and microfabrication technologies over the past 30 years is now pointing to a possible resolution of the well-recognized need for real-time monitoring of critically ill patients through catheter-based sensors. Although physicians will always prefer non-invasive monitoring techniques, there are a number of parameters that presently can only be monitored by invasive methods. The emerging ability to miniaturize chemical sensors using silicon microfabrication or fiber-optic techniques offers an excellent opportunity to meet this need. In fact, the development of in vivo biomedical sensors with satisfactory performance characteristics has long been considered the ultimate application of these emerging technologies.
Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can
2016-01-01
In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare the parametric down-conversion source, where the heralding photons are labeled according to different types of clicks from the local detectors, and the heralded ones can correspondingly be marked with different tags at the receiver’s side. Then one can obtain four sets of data through using only one-intensity of pump light by observing different kinds of clicks of local detectors. By employing the newest formulae to do parameter estimation, we could achieve very precise prediction for the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our new proposed passive scheme can exhibit remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish source-error defects existing in several other active decoy-state methods. Therefore, if taking intensity modulating errors into account, our new method will show even more brilliant performance. PMID:27759085
Key management and encryption under the bounded storage model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.
2005-11-01
There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11] we can construct a scheme that solves the timing synchronization error problem. In addition to this work we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low frequency timeseries of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
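The uncertainty metric quoted above, the width of the 95% credible interval of simulated TDP, is straightforward to compute once an ensemble of model runs drawn from the posterior parameter distribution is available. The synthetic sample vectors below are placeholders standing in for INCA-P ensembles from the fortnightly and daily calibrations; only the percentile calculation itself is the point of the sketch.

```python
# Width of the 95% credible interval from posterior predictive samples.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior predictive samples of TDP (micrograms per litre) for one
# day, from calibrations against fortnightly and daily observations respectively.
tdp_fortnightly = rng.lognormal(mean=np.log(30), sigma=0.25, size=5000)
tdp_daily = rng.lognormal(mean=np.log(30), sigma=0.06, size=5000)

def ci_width(samples, level=0.95):
    lower, upper = np.percentile(samples, [(1 - level) / 2 * 100,
                                           (1 + level) / 2 * 100])
    return upper - lower

print(f"95% CI width, fortnightly calibration: {ci_width(tdp_fortnightly):.1f} ug/l")
print(f"95% CI width, daily calibration:       {ci_width(tdp_daily):.1f} ug/l")
```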
Improving Diaper Performance for Extremely Low-Birth-Weight Infants.
Sanchez, Veronica; Maladen-Percy, Michelle; Gustin, Jennifer; Tally, Amy; Gibb, Roger; Ogle, Julie; Kenneally, Dianna C; Carr, Andrew N
2018-06-01
Extremely low-birth-weight (ELBW) infants face significant diapering challenges compared with their full-term peers, due to immature musculature, nervous system, and skin development. Advances in medical care have increased an ELBW infant's rate of survival, which creates a growing need for diapers to better serve these infants. Aim of research. The objective of this study was to identify and confirm the requirements for optimal diaper performance from the neonatal intensive care unit nurses' perspective, as well as to assess in-hospital performance to determine if new features improved key developmental care parameters. Two surveys were shared among nurses to address the study objectives. Study 1 (N = 151) was designed for neonatal intensive care unit nurses to identify key requirements for ELBW diapers and rate the performance of existing ELBW diapers. Study 2 (N = 99) assessed in-hospital performance of the test diaper compared with the usual diaper, under normal usage conditions. Findings/results. The majority of nurses agreed that ELBW diapers must fit appropriately between the legs so that hips and legs are not spread apart and that ELBW diapers need to be flexible between the legs for positioning. Of the nurse-infant pair responses, 93% (P < .0001) preferred the test ELBW diaper over their usual diaper. Findings suggest that nurses should be included in the product design process to ensure both their needs and the needs of an infant are being met. Nurses are considering how diaper features may affect both acute and long-term medical outcomes, and this information provides necessary guidance to diaper manufacturers and designers when developing better-performing diapers.
Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties
USDA-ARS?s Scientific Manuscript database
Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as alter the...
Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties
USDA-ARS?s Scientific Manuscript database
Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as skew the...
Quantifying Uncertainty in Inverse Models of Geologic Data from Shear Zones
NASA Astrophysics Data System (ADS)
Davis, J. R.; Titus, S.
2016-12-01
We use Bayesian Markov chain Monte Carlo simulation to quantify uncertainty in inverse models of geologic data. Although this approach can be applied to many tectonic settings, field areas, and mathematical models, we focus on transpressional shear zones. The underlying forward model, either kinematic or dynamic, produces a velocity field, which predicts the dikes, foliation-lineations, crystallographic preferred orientation (CPO), shape preferred orientation (SPO), and other geologic data that should arise in the shear zone. These predictions are compared to data using modern methods of geometric statistics, including the Watson (for lines such as dike poles), isotropic matrix Fisher (for orientations such as foliation-lineations and CPO), and multivariate normal (for log-ellipsoids such as SPO) distributions. The result of the comparison is a likelihood, which is a key ingredient in the Bayesian approach. The other key ingredient is a prior distribution, which reflects the geologist's knowledge of the parameters before seeing the data. For some parameters, such as shear zone strike and dip, we identify realistic informative priors. For other parameters, where the geologist has no prior knowledge, we identify useful uninformative priors. We investigate the performance of this approach through numerical experiments on synthetic data sets. A fundamental issue is that many models of deformation exhibit asymptotic behavior (e.g., flow apophyses, fabric attractors) or periodic behavior (e.g., SPO when the clasts are rigid), which causes the likelihood to be too uniform. Based on our experiments, we offer rules of thumb for how many data, of which types, are needed to constrain deformation.
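A bare-bones Metropolis-Hastings sketch of the Bayesian machinery described above. The real analyses use a kinematic or dynamic forward model of the shear zone and Watson, matrix Fisher, or multivariate normal likelihoods for the different data types; here the forward model is a toy linear map from a single shear parameter to a predicted lineation trend, with a Gaussian angular misfit and uniform prior, all of which are assumptions for illustration only.

```python
# Metropolis-Hastings sampling of one shear-zone parameter from angular data.
import numpy as np

rng = np.random.default_rng(0)
observed_trends = np.array([42.0, 38.5, 45.2, 40.1])   # hypothetical data (degrees)

def forward(shear):
    return 30.0 + 10.0 * shear             # toy prediction of lineation trend

def log_posterior(shear):
    if not 0.0 <= shear <= 5.0:             # uniform prior on a plausible range
        return -np.inf
    misfit = observed_trends - forward(shear)
    return -0.5 * np.sum((misfit / 3.0) ** 2)   # Gaussian likelihood, 3 deg sd

samples, current = [], 1.0
log_p = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(scale=0.2)
    log_p_new = log_posterior(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:   # accept/reject step
        current, log_p = proposal, log_p_new
    samples.append(current)

post = np.array(samples[5000:])                      # discard burn-in
print(f"posterior mean shear: {post.mean():.2f}, 95% CI: "
      f"[{np.percentile(post, 2.5):.2f}, {np.percentile(post, 97.5):.2f}]")
```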
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.
2016-12-01
Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in about 21% increase of atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify the global and regional N2O emission from agricultural soils in the last several decades using a wide variety of approaches, such as ground-based observation, atmospheric inversion, and process-based model. However, large uncertainties exist in those estimates as well as methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate magnitude, spatial, and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils as well as identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data
NASA Astrophysics Data System (ADS)
Abdolghafoorian, A.; Farhadi, L.
2017-12-01
Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: (1) the neutral bulk heat transfer coefficient (CHN), (2) the evaporative fraction from soil and canopy (EF), and (3) the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture diffusion (heat diffusion) equation. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. This analysis of uncertainty provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model. In addition, the feasibility of extending this algorithm to use remote sensing observations that have low temporal resolution is examined by assimilating a limited number of land surface moisture and temperature observations.
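As an illustration of the Hessian-based uncertainty step, the sketch below reads a parameter covariance off the inverse of a finite-difference Hessian of a least-squares cost; the two-parameter toy model is a hypothetical stand-in for the coupled heat and moisture equations, and it shows only the general recipe, not the authors' adjoint/Lagrangian implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-parameter forward model (stand-ins for, e.g., CHN and EF).
t = np.linspace(0.0, 1.0, 200)
def forward(p):
    return p[0] * t + p[1] * np.sin(2.0 * np.pi * t)

sigma = 0.05                                     # observation error standard deviation
obs = forward(np.array([0.8, 0.3])) + sigma * rng.standard_normal(t.size)

def cost(p):
    # Unweighted least-squares misfit (stand-in for the VDA cost function).
    return 0.5 * np.sum((forward(p) - obs) ** 2)

def hessian_fd(f, p, eps=1e-5):
    """Finite-difference Hessian of a scalar function f at the point p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pij, pi, pj = p.copy(), p.copy(), p.copy()
            pij[i] += eps; pij[j] += eps
            pi[i] += eps
            pj[j] += eps
            H[i, j] = (f(pij) - f(pi) - f(pj) + f(p)) / eps ** 2
    return H

p_hat = np.array([0.8, 0.3])                     # assume the minimisation is already done
H = hessian_fd(cost, p_hat)
cov = sigma ** 2 * np.linalg.inv(H)              # scaled because the cost is unweighted
print("parameter standard deviations:", np.sqrt(np.diag(cov)))
print("parameter correlation:", cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))
```

The off-diagonal entry is what signals strongly correlated, and hence poorly identifiable, parameter pairs in the estimation problem.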
A pavement Moisture Accelerated Distress (MAD) identification system, volume 2
NASA Astrophysics Data System (ADS)
Carpenter, S. H.; Darter, M. I.; Dempsey, B. J.
1981-09-01
A user's manual is presented that provides the engineer with a rational method of examining a pavement and determining rehabilitation needs that are related to the causes of the existing distress, particularly moisture-related distress. The key elements in this procedure are the MAD Index developed in Volume 1, the Pavement Condition Index (PCI), and the Moisture Distress Index (MDI). Step-by-step procedures are presented for calculating each parameter. Complete distress identification manuals are included for asphalt-surfaced highways and jointed reinforced concrete highways, with pictures and descriptions of all major distress types. Descriptions of the role moisture plays in the development of each distress type are included.
On the traceability of gaseous reference materials
NASA Astrophysics Data System (ADS)
Brown, Richard J. C.; Brewer, Paul J.; Harris, Peter M.; Davidson, Stuart; van der Veen, Adriaan M. H.; Ent, Hugo
2017-06-01
The complex and multi-parameter nature of chemical composition measurement means that establishing traceability is a challenging task. As a result, incorrect interpretations about the origin of the metrological traceability of chemical measurement results can occur. This discussion paper examines why this is the case by scrutinising the peculiarities of the gas metrology area. It considers, in particular, primary methods, the dissemination of metrological traceability, and the role of documentary standards and accreditation bodies in promulgating best practice. There is also a discussion of documentary standards relevant to the NMI and reference material producer community that need clarification, and of the impact that key stakeholders in the quality infrastructure can have on these issues.
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.; Wolochow, H.
1973-01-01
One of the key parameters in estimation of the probability of contamination of the outer planets (Jupiter, Saturn, Uranus, etc.) is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter appears to have an atmosphere in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets. Current work is directed at demonstration of aerial metabolism under near-optimal conditions and tests of propagation in simulated Jovian atmospheres.
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.
1973-01-01
Current U.S. planetary quarantine standards based on international agreements require consideration of the probability of contamination (Pc) of the outer planets, Venus, Jupiter, Saturn, etc. One of the key parameters in estimation of the Pc of these planets is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter and Saturn appear to have atmospheres in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets.
NASA Technical Reports Server (NTRS)
Mueller, Carl H.; VanKeuls, Frederick W.; Romanofsky, Robert R.; Alterovitz, Samuel A.; Miranda, Felix A.
2003-01-01
One of the keys to successfully incorporating ferroelectric films into Ku-band (12 to 18 GHz) phase shifters is to establish the composition, microstructure, and thickness required to meet the tuning needs, and tailor the film properties to meet these needs. Optimal performance is obtained when the film composition and device design are such that the device performance is limited by odd mode dielectric losses, and these losses are minimized as much as possible while still maintaining adequate tunability. The parameters required to maintain device performance will vary slightly depending on composition, but we can conclude that the best tuning-to-loss figures of merit (K-factor) are obtained when there is minimal variation between the in-plane and out-of-plane lattice parameters, and the full-width at half-maximum values of the BSTO (002) peaks are less than approximately 0.04 deg. We have observed that for phase shifters in which the ferroelectric crystalline quality and thickness are almost identical, higher losses are observed in films with higher Ba:Sr ratios. The best performance was observed in phase shifters with Ba:Sr = 30:70. The superiority of this composition was attributed to several interacting factors: the Ba:Sr ratio was such that the Curie temperature (180 K) was far removed from room temperature, the crystalline quality of the film was excellent, and there was virtually no difference between the in-plane and out-of-plane lattice parameters of the film.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Due to irregular geometric configurations, including porous electrodes, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.
Numerical Simulation Of Cratering Effects In Adobe
2013-07-01
Contents: Development of Material Parameters; Problem Setup; Parameter Adjustments; Glossary. [...] ...dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well-characterized adobe. By identifying key parameters that [...]
Key Parameters for the Use of AbobotulinumtoxinA in Aesthetics: Onset and Duration
Ablon, Glynis; Pickett, Andy
2017-01-01
Time to onset of response and duration of response are key measures of botulinum toxin efficacy that have a considerable influence on patient satisfaction with aesthetic treatment. However, there is no overall accepted definition of efficacy for aesthetic uses of botulinumtoxinA (BoNT-A). Mechanical methods of assessment do not lend themselves to clinical practice and clinicians rely instead on assessment scales such as the Frontalis Activity Measurement Standard, Frontalis Rating Scale, Wrinkle Severity Scale, and Subject Global Assessment Scale, but not all of these have been fully validated. Onset of activity is typically seen within 5 days of injection, but has also been recorded within 12 hours with abobotulinumtoxinA. Duration of effect is more variable, and is influenced by parameters such as muscle mass (including the effects of age and sex) and type of product used. Even when larger muscles are treated with higher doses of BoNT-A, the duration of effect is still shorter than that for smaller muscles. Muscle injection technique, including dilution of the toxin, the volume of solution injected, and the positioning of the injections, can also have an important influence on onset and duration of activity. Comparison of the efficacy of different forms of BoNT-A must be made with the full understanding that the dosing units are not equivalent. A range of equivalence studies for abobotulinumtoxinA (Azzalure; Ipsen Limited, Slough UK/Galderma, Lausanne CH/Dysport, Ipsen Biopharm Limited, Wrexham UK/Galderma LP, Fort Worth, TX) and onabotulinumtoxinA (Botox; Allergan, Parsippany, NJ) has been conducted, and results indicate that the number of units of abobotulinumtoxinA needs to be approximately twice as high as that of onabotulinumtoxinA to achieve the same effect. An appreciation of the potential influence of all of the parameters that influence onset and duration of activity of BoNT-A, along with a thorough understanding of the anatomy of the face and potency of doses, is essential to tailoring treatment to individual patient needs and expectations. PMID:28388717
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. In addition, we adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. When taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yabusaki, Steven B.; Serne, R. Jeffrey; Rockhold, Mark L.
2015-03-30
Washington River Protection Solutions (WRPS) and its contractors at Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) are conducting a development program to develop and refine the cementitious waste form for the wastes treated at the ETF and to provide the data needed to support the IDF PA. This technical approach document is intended to provide guidance to the cementitious waste form development program with respect to the waste form characterization and testing information needed to support the IDF PA. At the time of the preparation of this technical approach document, the IDF PA effort is just getting started and the approach to analyze the performance of the cementitious waste form has not been determined. Therefore, this document looks at a number of different approaches for evaluating the waste form performance and describes the testing needed to provide data for each approach. Though the approach addresses a cementitious secondary aqueous waste form, it is applicable to other waste forms such as Cast Stone for supplemental immobilization of Hanford LAW. The performance of Cast Stone as a physical and chemical barrier to the release of contaminants of concern (COCs) from solidification of Hanford liquid low activity waste (LAW) and secondary wastes processed through the Effluent Treatment Facility (ETF) is of critical importance to the Hanford Integrated Disposal Facility (IDF) total system performance assessment (TSPA). The effectiveness of cementitious waste forms as a barrier to COC release is expected to evolve with time. PA modeling must therefore anticipate and address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time. Most organizations responsible for disposal facility operation and their regulators support an iterative hierarchical safety/performance assessment approach with a general philosophy that modeling provides the critical link between the short-term understanding from laboratory and field tests, and the prediction of repository performance over repository time frames and scales. One common recommendation is that experiments be designed to permit the appropriate scaling in the models. There is a large contrast in the physical and chemical properties between the Cast Stone waste package and the IDF backfill and surrounding sediments. Cast Stone exhibits low permeability, high tortuosity, low carbonate, high pH, and low Eh whereas the backfill and native sediments have high permeability, low tortuosity, high carbonate, circumneutral pH, and high Eh. These contrasts have important implications for flow, transport, and reactions across the Cast Stone-backfill interface. Over time with transport across the interface and subsequent reactions, the sharp geochemical contrast will blur and there will be a range of spatially-distributed conditions. In general, COC mobility and transport will be sensitive to these geochemical variations, which also include physical changes in porosity and permeability from mineral reactions. Therefore, PA modeling must address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time.
Section 2 of this document reviews past Hanford PAs and SRS Saltstone PAs, which to date have mostly relied on the lumped parameter COC release conceptual models for TSPA predictions, and provides some details on the chosen values for the lumped parameters. Section 3 provides more details on the hierarchical modeling strategy and processes and mechanisms that control COC release. Section 4 summarizes and lists the key parameters for which numerical values are needed to perform PAs. Section 5 provides brief summaries of the methods used to measure the needed parameters and references to get more details.
Advanced In-Pile Instrumentation for Materials Testing Reactors
NASA Astrophysics Data System (ADS)
Rempe, J. L.; Knudson, D. L.; Daw, J. E.; Unruh, T. C.; Chase, B. M.; Davis, K. L.; Palmer, A. J.; Schley, R. S.
2014-08-01
The U.S. Department of Energy sponsors the Advanced Test Reactor (ATR) National Scientific User Facility (NSUF) program to promote U.S. research in nuclear science and technology. By attracting new research users - universities, laboratories, and industry - the ATR NSUF facilitates basic and applied nuclear research and development, advancing U.S. energy security needs. A key component of the ATR NSUF effort is to design, develop, and deploy new in-pile instrumentation techniques that are capable of providing real-time measurements of key parameters during irradiation. This paper describes the strategy developed by the Idaho National Laboratory (INL) for identifying instrumentation needed for ATR irradiation tests and the program initiated to obtain these sensors. New sensors developed from this effort are identified, and the progress of other development efforts is summarized. As reported in this paper, INL researchers are currently involved in several tasks to deploy real-time length and flux detection sensors, and efforts have been initiated to develop a crack growth test rig. Tasks evaluating 'advanced' technologies, such as fiber-optics-based length detection and ultrasonic thermometers, are also underway. In addition, specialized sensors for real-time detection of temperature and thermal conductivity are not only being provided to NSUF reactors, but are also being provided to several international test reactors.
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware, which makes them unsuitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve that probability: first, increase the value of the parameter q; second, add an authentication condition during the signature phase that enforces the reasonable-signature requirements. Experimental results show that the proposed signature scheme achieves zero leakage of private key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.
Gariano, John; Neifeld, Mark; Djordjevic, Ivan
2017-01-20
Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve performs the intercept-resend and photon-number-splitting attacks, Eve's location is known in advance, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Diode laser (980nm) cartilage reshaping
NASA Astrophysics Data System (ADS)
El Kharbotly, A.; El Tayeb, T.; Mostafa, Y.; Hesham, I.
2011-03-01
Loss of facial or ear cartilage due to trauma or surgery is a major challenge to otolaryngologists and plastic surgeons, as the complicated geometric contours are difficult to reproduce. Diode laser (980 nm) irradiation has been proven effective in reshaping cartilage and maintaining the new geometric shape achieved by the laser. This study focused on determining the optimum laser parameters needed for cartilage reshaping with a controlled water cooling system. Harvested animal cartilages were angulated to different degrees and irradiated with different diode laser powers (980 nm, 4x8 mm spot size). The cartilage specimens were maintained at a deformation angle for two hours after irradiation and then released for another two hours. They were serially measured and photographed. High-power diode laser irradiation with water cooling is a cheap and effective method for reshaping the cartilage needed for reconstruction in difficult situations in otorhinolaryngologic surgery. Key words: cartilage, diode laser (980 nm), reshaping.
A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles
2008-03-01
... members to use the same decryption key. This shared decryption key is called the Session Encryption Key (SEK) or Traffic Encryption Key (TEK) ... Since everyone shares the SEK, members need to hold additional Key Encryption Keys (KEK) that are used to securely distribute the SEK to each valid ... managing this process. To preserve the secrecy of the multicast data, the SEK needs to be updated upon certain events such as a member joining and ...
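A toy version of the SEK/KEK arrangement sketched in this record: the group controller holds one KEK per member, encrypts each fresh SEK under every KEK, and rekeys whenever membership changes. The class layout and method names are invented for illustration; the symmetric primitive is Fernet from the widely used cryptography package.

```python
from cryptography.fernet import Fernet

class GroupKeyServer:
    """Toy key server: one shared SEK for the group, one KEK per member."""

    def __init__(self):
        self.keks = {}                       # member_id -> key encryption key (KEK)
        self.rekey_messages = {}
        self.sek = Fernet.generate_key()     # current session encryption key (SEK)

    def join(self, member_id):
        self.keks[member_id] = Fernet.generate_key()
        self.rekey()                         # new SEK so the joiner cannot read old traffic
        return self.keks[member_id]

    def leave(self, member_id):
        self.keks.pop(member_id, None)
        self.rekey()                         # new SEK so the leaver cannot read new traffic

    def rekey(self):
        self.sek = Fernet.generate_key()
        # Distribute the fresh SEK encrypted under each remaining member's KEK.
        self.rekey_messages = {m: Fernet(kek).encrypt(self.sek)
                               for m, kek in self.keks.items()}

server = GroupKeyServer()
kek_a = server.join("uav-a")
server.join("uav-b")
# Member "uav-a" recovers the current SEK from its own rekey message.
assert Fernet(kek_a).decrypt(server.rekey_messages["uav-a"]) == server.sek
```

In a real swarm the KEKs would themselves be organised, for example in a logical key hierarchy, so that a rekey costs a logarithmic rather than linear number of messages.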
Compressed Sensing for Metrics Development
NASA Astrophysics Data System (ADS)
McGraw, R. L.; Giangrande, S. E.; Liu, Y.
2012-12-01
Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and to the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
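The recovery experiment described above can be reproduced in a few lines with a standard greedy CS solver; this sketch uses a random Gaussian measurement matrix and orthogonal matching pursuit, which are conventional illustrative choices rather than the specific setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k, m = 200, 4, 40          # parameter-vector length, sparsity, number of measurements

# Sparse "model": only k of the n candidate parameters are actually non-zero.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Seemingly incomplete measurements: m << n random linear projections.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse vector."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

x_hat = omp(A, y, k)
print("max recovery error:", np.abs(x_hat - x_true).max())
# With enough measurements the recovery is numerically exact, illustrating the sharp
# transition in the required number of samples mentioned in the abstract.
```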
NASA Astrophysics Data System (ADS)
Silva, H.; Monasterios, G.
2016-01-01
The first key comparison in microwave frequencies within the SIM (Sistema Interamericano de Metrología) region has been carried out. The measurands were the S-parameters of 50 ohm coaxial devices with Type-N connectors and were measured at 2 GHz, 9 GHz and 18 GHz. SIM.EM.RF-K5b.CL was the identification assigned and it was based on a parent CCEM key comparison named CCEM.RF-K5b.CL. For this reason, the measurement standards and their nominal values were selected accordingly, i.e., two one-port devices (a matched and a mismatched load) to cover low and high reflection coefficients and two attenuators (3 dB and 20 dB) to cover low and high transmission coefficients. This key comparison has met the need for ensuring traceability in high-frequency measurements across America by linking SIM's results to CCEM. Six NMIs participated in this comparison, which was piloted by the Instituto Nacional de Tecnología Industrial (Argentina). A linking method of multivariate values was proposed and implemented in order to allow the linking of 2-dimensional results. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Field spectrometer (S191H) preprocessor tape quality test program design document
NASA Technical Reports Server (NTRS)
Campbell, H. M.
1976-01-01
Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historic and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and the wavelength calibration period, and the results are printed out and recorded on a historical file tape.
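The tolerance-comparison step amounts to flagging any sample of a key parameter that falls outside its predetermined limits; a generic sketch with invented parameter names and limits:

```python
# Hypothetical tolerance limits for a few housekeeping parameters.
LIMITS = {
    "detector_temperature_K": (70.0, 90.0),
    "lamp_current_mA": (480.0, 520.0),
    "scan_mirror_rate_Hz": (9.5, 10.5),
}

def quality_check(samples):
    """Return a list of (parameter, value) pairs that violate their limits."""
    failures = []
    for name, values in samples.items():
        lo, hi = LIMITS[name]
        failures += [(name, v) for v in values if not lo <= v <= hi]
    return failures

calibration_period = {
    "detector_temperature_K": [78.2, 81.0, 93.4],   # last sample is out of range
    "lamp_current_mA": [500.1, 499.7, 501.2],
    "scan_mirror_rate_Hz": [10.0, 10.1, 9.9],
}
for name, value in quality_check(calibration_period):
    print(f"out of tolerance: {name} = {value}")
```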
NASA Astrophysics Data System (ADS)
Fuchs, Christian; Poulenard, Sylvain; Perlot, Nicolas; Riedi, Jerome; Perdigues, Josep
2017-02-01
Optical satellite communications play an increasingly important role in a number of space applications. However, if the system concept includes optical links to the surface of the Earth, the limited availability due to clouds and other atmospheric impacts needs to be considered to give a reliable estimate of the system performance. An OGS network is required for increasing the availability to acceptable figures. In order to realistically estimate the performance and achievable throughput in various scenarios, a simulation tool has been developed under ESA contract. The tool is based on a database of 5 years of cloud data with global coverage and can thus easily simulate different optical ground station network topologies for LEO- and GEO-to-ground links. Further parameters, such as limited availability due to sun blinding and atmospheric turbulence, are considered as well. This paper gives an overview of the simulation tool, the cloud database, and the modelling behind the simulation scheme. Several scenarios have been investigated: LEO-to-ground links, GEO feeder links, and GEO relay links. The key results of the optical ground station network optimization and throughput estimations will be presented. The implications of key technical parameters, such as memory size aboard the satellite, will be discussed. Finally, potential system designs for LEO- and GEO-systems will be presented.
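As a back-of-the-envelope version of the availability calculation such a tool performs, the sketch below treats cloud blockage at three hypothetical ground stations as independent Bernoulli events; the real tool instead uses five years of spatially correlated cloud data, which is exactly why site diversity has to be simulated rather than assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical long-term cloud-blockage probabilities for three candidate OGS sites.
p_cloud = np.array([0.55, 0.40, 0.35])

# Monte Carlo over time slots: a slot is available if at least one station is cloud-free.
n_slots = 200_000
blocked = rng.random((n_slots, p_cloud.size)) < p_cloud
network_availability = np.mean(~blocked.all(axis=1))

print(f"best single station : {1.0 - p_cloud.min():.3f}")
print(f"three-station network: {network_availability:.3f}")  # ~ 1 - prod(p_cloud) if independent
```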
Experimental Design for the LATOR Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.
2004-01-01
This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach an unprecedented accuracy of 1 part in 10^8 in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (∝ G²) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J_2, and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.
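For orientation, the quantity LATOR constrains enters the standard parametrized post-Newtonian light-deflection formula (textbook form, not quoted from the mission paper), where b is the impact parameter of the light ray:

```latex
\delta\theta \;\simeq\; \frac{1+\gamma}{2}\,\frac{4GM_{\odot}}{c^{2}\,b}
\;\approx\; \frac{1+\gamma}{2}\times 1.75''\,\frac{R_{\odot}}{b}
```

Measuring the deflection to 1 part in 10^8 pins down gamma at that level, while the second-order (∝ G²) correction mentioned above appears only at the microarcsecond level near the solar limb.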
Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei
2018-04-01
The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.
Supermassive black holes with higher Eddington ratios preferentially form in gas-rich galaxies
NASA Astrophysics Data System (ADS)
Izumi, Takuma
2018-06-01
The Eddington ratio (λEdd) of supermassive black holes (SMBHs) is a fundamental parameter that governs their cosmic growth. Although gas mass accretion onto SMBHs is sustained when they are surrounded by large amounts of gas, little is known about the molecular content of galaxies, particularly those hosting super-Eddington SMBHs (λEdd > 1: the key phase of SMBH growth). Here, we have compiled reported optical and 12CO(1-0) data of local quasars to characterize their hosts. We found that higher-λEdd SMBHs tend to reside in gas-rich (i.e., high gas mass to stellar mass fraction = fgas) galaxies. We used two methods to make this conclusion: one uses black hole mass as a surrogate for stellar mass by assuming a local co-evolutionary relationship, and the other directly uses stellar masses estimated from near-infrared observations. The fgas-λEdd correlation we found concurs with the cosmic decreasing trend in λEdd, as cold molecular gas is primarily consumed by star formation. This correlation qualitatively matches predictions of recent semi-analytic models of the cosmic downsizing of SMBHs as well. As the gas mass surface density would eventually be a key parameter controlling mass accretion, we need high-resolution observations to identify further differences in the molecular properties around super-Eddington and sub-Eddington SMBHs.
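The two quantities being correlated are the standard definitions below (stated for reference; the normalization of fgas follows the abstract's gas-to-stellar mass ratio):

```latex
\lambda_{\rm Edd} \equiv \frac{L_{\rm bol}}{L_{\rm Edd}}, \qquad
L_{\rm Edd} = \frac{4\pi G M_{\rm BH}\, m_{\rm p}\, c}{\sigma_{\rm T}}
            \simeq 1.26\times10^{38}\,\frac{M_{\rm BH}}{M_{\odot}}\ {\rm erg\,s^{-1}}, \qquad
f_{\rm gas} \equiv \frac{M_{\rm gas}}{M_{*}}
```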
Supermassive black holes with higher Eddington ratios preferentially form in gas-rich galaxies
NASA Astrophysics Data System (ADS)
Izumi, Takuma
2018-05-01
The Eddington ratio (λEdd) of supermassive black holes (SMBHs) is a fundamental parameter that governs their cosmic growth. Although gas mass accretion onto SMBHs is sustained when they are surrounded by large amounts of gas, little is known about the molecular content of galaxies, particularly those hosting super-Eddington SMBHs (λEdd > 1: the key phase of SMBH growth). Here, we have compiled reported optical and 12CO(1-0) data of local quasars to characterize their hosts. We found that higher-λEdd SMBHs tend to reside in gas-rich (i.e., high gas mass to stellar mass fraction = fgas) galaxies. We used two methods to make this conclusion: one uses black hole mass as a surrogate for stellar mass by assuming a local co-evolutionary relationship, and the other directly uses stellar masses estimated from near-infrared observations. The fgas-λEdd correlation we found concurs with the cosmic decreasing trend in λEdd, as cold molecular gas is primarily consumed by star formation. This correlation qualitatively matches predictions of recent semi-analytic models of the cosmic downsizing of SMBHs as well. As the gas mass surface density would eventually be a key parameter controlling mass accretion, we need high-resolution observations to identify further differences in the molecular properties around super-Eddington and sub-Eddington SMBHs.
Kawai, Kosuke; Huong, Luong Thi Mai
2017-03-01
Proper management of food waste, a major component of municipal solid waste (MSW), is needed, especially in developing Asian countries where most MSW is disposed of in landfill sites without any pretreatment. Source separation can contribute to solving problems derived from the disposal of food waste. An organic waste source separation and collection programme has been operated in model areas in Hanoi, Vietnam, since 2007. This study proposed three key parameters (participation rate, proper separation rate and proper discharge rate) for behaviour related to source separation of household organic waste, and monitored the progress of the programme based on the physical composition of household waste sampled from 558 households in model programme areas of Hanoi. The results showed that 13.8% of 558 households separated organic waste, and 33.0% discharged mixed (unseparated) waste improperly. About 41.5% (by weight) of the waste collected as organic waste was contaminated by inorganic waste, and one-third of the waste disposed of as organic waste by separators was inorganic waste. We proposed six hypothetical future household behaviour scenarios to help local officials identify a final or midterm goal for the programme. We also suggested that the city government take further actions to increase the number of people participating in separating organic waste, improve the accuracy of separation and prevent non-separators from discharging mixed waste improperly.
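A sketch of how the three proposed indicators could be computed from per-household sorting records; the field names, the example masses, and the exact operational definitions here are illustrative simplifications of the paper's survey-based definitions.

```python
# One record per sampled household (masses in kg; flags and field names are invented
# for illustration, since the study's operational definitions are more detailed).
households = [
    {"separates": True,  "organic_bag_organic": 2.0, "organic_bag_inorganic": 0.4, "discharges_properly": True},
    {"separates": False, "organic_bag_organic": 0.0, "organic_bag_inorganic": 0.0, "discharges_properly": False},
    {"separates": True,  "organic_bag_organic": 1.2, "organic_bag_inorganic": 1.0, "discharges_properly": True},
    {"separates": False, "organic_bag_organic": 0.0, "organic_bag_inorganic": 0.0, "discharges_properly": True},
]

# Share of households that separate organic waste at all.
participation_rate = sum(h["separates"] for h in households) / len(households)

# Of the material put out as "organic", how much really is organic.
separators = [h for h in households if h["separates"]]
proper_separation_rate = (
    sum(h["organic_bag_organic"] for h in separators)
    / sum(h["organic_bag_organic"] + h["organic_bag_inorganic"] for h in separators)
)

# Share of households that discharge their (mixed) waste according to the rules.
proper_discharge_rate = sum(h["discharges_properly"] for h in households) / len(households)

print(f"participation {participation_rate:.0%}, "
      f"proper separation {proper_separation_rate:.0%}, "
      f"proper discharge {proper_discharge_rate:.0%}")
```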
Feasibility Study of a Satellite Solar Power Station
NASA Technical Reports Server (NTRS)
Glaser, P. E.; Maynard, O. E.; Mackovciak, J. J. R.; Ralph, E. I.
1974-01-01
A feasibility study of a satellite solar power station (SSPS) was conducted to: (1) explore how an SSPS could be flown and controlled in orbit; (2) determine the techniques needed to avoid radio frequency interference (RFI); and (3) determine the key environmental, technological, and economic issues involved. Structural and dynamic analyses of the SSPS structure were performed, and deflections and internal member loads were determined. Desirable material characteristics were assessed and technology developments identified. Flight control performance of the SSPS baseline design was evaluated and parametric sizing studies were performed. The study of RFI avoidance techniques covered (1) optimization of the microwave transmission system; (2) device design and expected RFI; and (3) SSPS RFI effects. The identification of key issues involved (1) microwave generation, transmissions, and rectification and solar energy conversion; (2) environmental-ecological impact and biological effects; and (3) economic issues, i.e., costs and benefits associated with the SSPS. The feasibility of the SSPS based on the parameters of the study was established.
Remais, Justin V; Xiao, Ning; Akullian, Adam; Qiu, Dongchuan; Blair, David
2011-04-01
For many pathogens with environmental stages, or those carried by vectors or intermediate hosts, disease transmission is strongly influenced by pathogen, host, and vector movements across complex landscapes, and thus quantitative measures of movement rate and direction can reveal new opportunities for disease management and intervention. Genetic assignment methods are a set of powerful statistical approaches useful for establishing population membership of individuals. Recent theoretical improvements allow these techniques to be used to cost-effectively estimate the magnitude and direction of key movements in infectious disease systems, revealing important ecological and environmental features that facilitate or limit transmission. Here, we review the theory, statistical framework, and molecular markers that underlie assignment methods, and we critically examine recent applications of assignment tests in infectious disease epidemiology. Research directions that capitalize on use of the techniques are discussed, focusing on key parameters needing study for improved understanding of patterns of disease.
NASA Technical Reports Server (NTRS)
Cross, Cynthia D.; Lewis, John F.; Barido, Richard A.; Carrasquillo, Robyn; Rains, George E.
2011-01-01
Recent changes in the overall NASA vision have resulted in further cost and schedule challenges for the Orion program. As a result, additional scrutiny has been focused on the use of new developments for hardware in the environmental control and life support systems. This paper examines the Orion architecture as it is envisioned to support missions to the International Space Station and future exploration missions, and determines what, if any, functions can be satisfied through the use of existing, heritage hardware designs. An initial evaluation of each component is included, and where a heritage component was deemed likely, further details are examined. Key technical parameters (mass, volume, and vibration loads) are a few of the specific items that are evaluated. Where heritage hardware has been identified that may be substituted into the Orion architecture, key requirement changes that may need to be made are discussed and recommendations for further evaluating applicability are noted.
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in the estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
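At its core, Wiener estimation builds a linear map from color measurements to tissue parameters out of training covariances; the numpy sketch below uses synthetic zero-mean data and a hypothetical linear forward mapping rather than the authors' skin-phantom model, and it omits the sequential per-parameter weighting that the paper adds on top.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training set: 4 "tissue parameters" (think CtHb, StO2, alpha, beta) and
# 8-band color measurements generated through a hypothetical linear forward mapping.
n_train, n_param, n_band = 500, 4, 8
X = rng.standard_normal((n_param, n_train))                  # parameters (zero mean)
M = rng.standard_normal((n_band, n_param))                   # hypothetical optics
Y = M @ X + 0.05 * rng.standard_normal((n_band, n_train))    # noisy color measurements

# Wiener matrix: cross-covariance times the inverse auto-covariance of the measurements.
Cxy = X @ Y.T / n_train
Cyy = Y @ Y.T / n_train
W = Cxy @ np.linalg.inv(Cyy)

# Estimate the parameters behind a new, unseen measurement.
x_new = rng.standard_normal((n_param, 1))
y_new = M @ x_new + 0.05 * rng.standard_normal((n_band, 1))
x_hat = W @ y_new
print(np.hstack([x_new, x_hat]))   # column 1: true parameters, column 2: estimates
```

Because W is just a matrix multiply per pixel, this kind of estimator is what makes imaging-scale, real-time extraction of the parameters practical.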
Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.
2016-01-01
The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
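The break-even analysis builds on the jet form of the Breguet range equation; schematically (a simplified statement of the trade, with symbols as usually defined, not the paper's exact notation):

```latex
R \;=\; \frac{V}{\mathrm{TSFC}}\;\frac{L}{D}\;\ln\frac{W_{\mathrm{initial}}}{W_{\mathrm{final}}}
```

Holding range, payload, and initial weight fixed, inserting the electric-drive transmission efficiency into the propulsive chain and adding the drive mass (installed power divided by specific power) to the empty weight yields the minimum specific power and efficiency at which the turboelectric aircraft matches the base aircraft, which is the break-even condition the study evaluates.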
Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements
NASA Technical Reports Server (NTRS)
Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.
2015-01-01
The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.
2015-01-01
The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Solís-Dominguez, Fernando A; White, Scott A; Hutter, Travis Borrillo; Amistadi, Mary Kay; Root, Robert A; Chorover, Jon; Maier, Raina M
2012-01-17
Phytostabilization of mine tailings acts to mitigate both eolian dispersion and water erosion events which can disseminate barren tailings over large distances. This technology uses plants to establish a vegetative cover to permanently immobilize contaminants in the rooting zone, often requiring addition of an amendment to assist plant growth. Here we report the results of a greenhouse study that evaluated the ability of six native plant species to grow in extremely acidic (pH ∼ 2.5) metalliferous (As, Pb, Zn: 2000-3000 mg kg(-1)) mine tailings from Iron King Mine Humboldt Smelter Superfund site when amended with a range of compost concentrations. Results revealed that three of the six plant species tested (buffalo grass, mesquite, and catclaw acacia) are good candidates for phytostabilization at an optimum level of 15% compost (w/w) amendment showing good growth and minimal shoot accumulation of metal(loid)s. A fourth candidate, quailbush, also met all criteria except for exceeding the domestic animal toxicity limit for shoot accumulation of zinc. A key finding of this study was that the plant species that grew most successfully on these tailings significantly influenced key tailings parameters; direct correlations between plant biomass and both increased tailings pH and neutrophilic heterotrophic bacterial counts were observed. We also observed decreased iron oxidizer counts and decreased bioavailability of metal(loid)s mainly as a result of compost amendment. Taken together, these results suggest that the phytostabilization process reduced tailings toxicity as well as the potential for metal(loid) mobilization. This study provides practical information on plant and tailings characteristics that is critically needed for successful implementation of assisted phytostabilization on acidic, metalliferous mine tailings sites.
Solís-Dominguez, Fernando A.; White, Scott A.; Hutter, Travis Borrillo; Amistadi, Mary Kay; Root, Robert A.; Chorover, Jon; Maier, Raina M.
2012-01-01
Phytostabilization of mine tailings acts to mitigate both eolian dispersion and water erosion events which can disseminate barren tailings over large distances. This technology uses plants to establish a vegetative cover to permanently immobilize contaminants in the rooting zone, often requiring addition of an amendment to assist plant growth. Here we report the results of a greenhouse study that evaluated the ability of six native plant species to grow in extremely acidic (pH ~ 2.5) metalliferous (As, Pb, Zn: 2000–3000 mg kg−1) mine tailings from Iron King Mine Humboldt Smelter Superfund site when amended with a range of compost concentrations. Results revealed that three of the six plant species tested (buffalo grass, mesquite, and catclaw acacia) are good candidates for phytostabilization at an optimum level of 15% compost (w/w) amendment showing good growth and minimal shoot accumulation of metal(loid)s. A fourth candidate, quailbush, also met all criteria except for exceeding the domestic animal toxicity limit for shoot accumulation of zinc. A key finding of this study was that the plant species that grew most successfully on these tailings significantly influenced key tailings parameters; direct correlations between plant biomass and both increased tailings pH and neutrophilic heterotrophic bacterial counts were observed. We also observed decreased iron oxidizer counts and decreased bioavailability of metal(loid)s mainly as a result of compost amendment. Taken together, these results suggest that the phytostabilization process reduced tailings toxicity as well as the potential for metal(loid) mobilization. This study provides practical information on plant and tailings characteristics that is critically needed for successful implementation of assisted phytostabilization on acidic, metalliferous mine tailings sites. PMID:22191663
A multi-model assessment of terrestrial biosphere model data needs
NASA Astrophysics Data System (ADS)
Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.
2017-12-01
Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
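In its simplest one-at-a-time form, the "fractional contribution" of a parameter combines the model's local sensitivity to it with its posterior variance; the sketch below is a simplification of PEcAn's actual variance decomposition, with a made-up three-parameter model standing in for the ecosystem models listed above.

```python
import numpy as np

def fractional_contributions(model, p_mean, p_sd):
    """Approximate each parameter's share of output variance:
    var_i ~ (d output / d p_i)^2 * sd_i^2, normalised to sum to one."""
    base = model(p_mean)
    partial_var = np.zeros(len(p_mean))
    for i, sd in enumerate(p_sd):
        p = p_mean.copy()
        p[i] += sd
        sensitivity = (model(p) - base) / sd        # finite-difference sensitivity
        partial_var[i] = (sensitivity * sd) ** 2
    return partial_var / partial_var.sum()

# Hypothetical 3-parameter "ecosystem model" returning an annual NPP-like output.
npp = lambda p: 2.0 * p[0] + 0.5 * p[1] ** 2 + 0.1 * p[2]
p_mean = np.array([1.0, 2.0, 3.0])     # posterior means from the trait meta-analysis
p_sd = np.array([0.3, 0.5, 1.0])       # posterior standard deviations

print(fractional_contributions(npp, p_mean, p_sd))
```

Parameters with a large share are the ones where new trait data or field measurements buy the biggest reduction in predictive uncertainty.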
The impact of temporal sampling resolution on parameter inference for biological transport models.
Harrison, Jonathan U; Baker, Ruth E
2018-06-25
Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
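To make the sampling-resolution question concrete, this sketch simulates a one-dimensional velocity jump process (constant speed, reorientations at exponentially distributed times) and then observes it at a chosen frame interval with measurement noise; the rate, speed, and noise values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_vjp(rate=1.0, speed=1.0, t_end=50.0):
    """1D velocity jump process: run at +/- speed, reorient at Poisson times."""
    t, x, v = 0.0, 0.0, speed
    times, positions = [t], [x]
    while t < t_end:
        dt = rng.exponential(1.0 / rate)       # waiting time to the next reorientation
        t, x = t + dt, x + v * dt
        v = -v if rng.random() < 0.5 else v    # reorientation kernel: reverse with prob. 1/2
        times.append(t)
        positions.append(x)
    return np.array(times), np.array(positions)

def observe(times, positions, frame_dt, noise_sd=0.05):
    """Sample the trajectory at regular frame times with additive measurement noise."""
    frames = np.arange(0.0, times[-1], frame_dt)
    obs = np.interp(frames, times, positions) + noise_sd * rng.standard_normal(frames.size)
    return frames, obs

times, positions = simulate_vjp()
for frame_dt in (0.1, 1.0, 5.0):               # finer vs. coarser temporal resolution
    frames, obs = observe(times, positions, frame_dt)
    print(f"frame interval {frame_dt}: {frames.size} observations")
```

Coarser frame intervals hide reorientations that occur between frames, which is precisely the bias that the hidden-states formulation in the paper is designed to absorb.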
NASA Astrophysics Data System (ADS)
Rybus, Tomasz; Seweryn, Karol
2016-03-01
All devices designed to be used in space must be thoroughly tested in relevant conditions. For several classes of devices, reduced-gravity conditions are the key factor. In the early stages of development, and later for financial reasons, the tests need to be done on Earth. However, in Earth conditions it is impossible to obtain a different gravity field independent of all linear and rotational spatial coordinates. Therefore, various test-bed systems are used, with their design driven by the device's specific needs. One such type of test-bed is the planar air-bearing microgravity simulator. In this approach, the tested objects (e.g., manipulators intended for on-orbit operations or vehicles simulating satellites in a close formation flight) are mounted on planar air-bearings that allow almost frictionless motion on a flat surface, thus simulating microgravity conditions in two dimensions. In this paper we present a comprehensive review of research activities related to planar air-bearing microgravity simulators, demonstrating achievements of the most active research groups and describing the newest trends and ideas, such as tests of landing gears for low-g bodies. Major design parameters of air-bearing test-beds are also reviewed and a list of notable existing test-beds is presented.
Modeling High-Impact Weather and Climate: Lessons From a Tropical Cyclone Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Done, James; Holland, Greg; Bruyere, Cindy
2013-10-19
Although the societal impact of a weather event increases with the rarity of the event, our current ability to assess extreme events and their impacts is limited not only by rarity but also by current model fidelity and a lack of understanding of the underlying physical processes. This challenge is driving fresh approaches to assess high-impact weather and climate. Recent lessons learned in modeling high-impact weather and climate are presented using the case of tropical cyclones as an illustrative example. Through examples using the Nested Regional Climate Model to dynamically downscale large-scale climate data, the need to treat bias in the driving data is illustrated. Domain size, location, and resolution are also shown to be critical and should be guided by the need to include relevant regional climate physical processes, resolve key impact parameters, and accurately simulate the response to changes in external forcing. The notion of sufficient model resolution is introduced together with the added value in combining dynamical and statistical assessments to fill out the parent distribution of high-impact parameters. Finally, through the example of a tropical cyclone damage index, direct impact assessments are presented as powerful tools that distill complex datasets into concise statements on likely impact, and as highly effective communication devices.
Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua
2018-05-01
This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on the proposed algorithms and is applied to low-Earth orbits using a Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed through simulation by comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieve global networks for underwater communications.
Sensitivity of black carbon concentrations and climate impact to aging and scavenging in OsloCTM2-M7
NASA Astrophysics Data System (ADS)
Lund, Marianne T.; Berntsen, Terje K.; Samset, Bjørn H.
2017-05-01
Accurate representation of black carbon (BC) concentrations in climate models is a key prerequisite for understanding its net climate impact. BC aging and scavenging are treated very differently in current models. Here, we examine the sensitivity of three-dimensional (3-D), temporally resolved BC concentrations to perturbations to individual model processes in the chemistry transport model OsloCTM2-M7. The main goals are to identify processes related to aerosol aging and scavenging where additional observational constraints may most effectively improve model performance, in particular for BC vertical profiles, and to give an indication of how model uncertainties in the BC life cycle propagate into uncertainties in climate impacts. Coupling OsloCTM2 with the microphysical aerosol module M7 allows us to investigate aging processes in more detail than is possible with a simpler bulk parameterization. Here we include, for the first time in this model, a treatment of condensation of nitric acid on BC. Using kernels, we also estimate the range of radiative forcing and global surface temperature responses that may result from perturbations to key tunable parameters in the model. We find that BC concentrations in OsloCTM2-M7 are particularly sensitive to convective scavenging and the inclusion of condensation by nitric acid. The largest changes are found at higher altitudes around the Equator and at low altitudes over the Arctic. Convective scavenging of hydrophobic BC, and the amount of sulfate required for BC aging, are found to be key parameters, potentially reducing the bias against HIAPER Pole-to-Pole Observations (HIPPO) flight-based measurements by 60 to 90 %. Even for extensive tuning, however, the total impact on global-mean surface temperature is estimated to be less than 0.04 K. Similar results are found when nitric acid is allowed to condense on the BC aerosols. We conclude, in line with previous studies, that a shorter atmospheric BC lifetime broadly improves the comparison with measurements over the Pacific. However, we also find that the model-measurement discrepancies cannot be uniquely attributed to uncertainties in a single process or parameter. Model development therefore needs to be focused on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning of individual, effective parameters such as the global BC lifetime.
Optical Properties of Black and Brown Carbon Aerosols from Laboratory Combustion of Wildland Fuels
NASA Astrophysics Data System (ADS)
Beres, N. D.; Molzan, J.
2015-12-01
Aerosol light absorption in the solar spectral region (300 nm - 2300 nm) of the atmosphere is key for the direct aerosol radiative forcing, which is determined by the aerosol single scattering albedo (SSA), the asymmetry parameter, and the albedo of the underlying surface. SSA is of key importance for the sign and magnitude of the direct aerosol radiative forcing; that is, does the aerosol make the earth look darker (heating) or whiter (cooling)? In addition, these optical properties are needed for satellite retrievals of aerosol optical depth and properties. During wildland fires, aerosol optical absorption is largely determined by black carbon (BC) and brown carbon (BrC) emissions. BC is strongly absorbing throughout the solar spectrum, while BrC absorption strongly increases toward shorter wavelengths and can be neglected in the red and infrared. Optical properties of BrC emitted from wildland fires are poorly understood and need to be studied as a function of fuel type, moisture content and combustion conditions. While much more is known about BC optical properties, knowledge for the ultraviolet (UV) spectral region is still lacking and critically needed for satellite remote sensing (e.g., TOMS, OMI) and for modeling of tropospheric photochemistry. Here, a project to better characterize biomass burning aerosol optical properties is described. It utilizes a laboratory biomass combustion chamber to generate aerosols through combustion of different wildland fuels of global and regional importance. Combustion aerosol optics is characterized with an integrating nephelometer to measure aerosol light scattering and a photoacoustic instrument to measure aerosol light absorption. These measurements will yield optical properties that are needed to improve qualitative and quantitative understanding of aerosol radiative forcing and satellite retrievals for absorbing carbonaceous aerosols from combustion of wildland fuels.
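As a minimal illustration of how the two measurements mentioned above combine, the sketch below derives the single scattering albedo and an absorption Ångström exponent from co-located scattering (nephelometer) and absorption (photoacoustic) coefficients at two wavelengths; all numerical values are invented placeholders, not results of the described project.

```python
import numpy as np

# Hypothetical co-located measurements at two wavelengths (Mm^-1)
wavelengths_nm = np.array([405.0, 870.0])
scattering = np.array([120.0, 45.0])   # nephelometer, illustrative values
absorption = np.array([60.0, 10.0])    # photoacoustic, illustrative values

# Single scattering albedo: fraction of extinction due to scattering
ssa = scattering / (scattering + absorption)

# Absorption Angstrom exponent between the two wavelengths;
# values well above ~1 are commonly taken as a signature of brown carbon
aae = -np.log(absorption[0] / absorption[1]) / np.log(wavelengths_nm[0] / wavelengths_nm[1])

print(f"SSA at 405 nm: {ssa[0]:.2f}, at 870 nm: {ssa[1]:.2f}")
print(f"Absorption Angstrom exponent: {aae:.2f}")
```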
Assessment of Sensor Technologies for Advanced Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korsah, Kofi; Kisner, R. A.; Britton Jr., C. L.
This paper provides an assessment of sensor technologies and a determination of measurement needs for advanced reactors (AdvRx). It is a summary of a study performed to provide the technical basis for identifying and prioritizing research targets within the instrumentation and control (I&C) Technology Area under the Department of Energy's (DOE's) Advanced Reactor Technology (ART) program. The study covered two broad reactor technology categories: High Temperature Reactors and Fast Reactors. The scope of "High temperature reactors" included Gen IV reactors whose coolant exit temperatures exceed ≈650 °C and that are moderated (as opposed to fast reactors). To bound the scope for fast reactors, this report reviewed relevant operating experience from the US-operated Sodium Fast Reactor (SFR) and relevant test experience from the Fast Flux Test Facility (FFTF). For high temperature reactors, the study showed that in many cases instrumentation has performed reasonably well in research and demonstration reactors. However, even in cases where the technology is "mature" (such as thermocouples), HTGRs can benefit from improved technologies. Current HTGR instrumentation is generally based on decades-old technology, and adapting newer technologies could provide significant advantages. For sodium fast reactors, the study found that several key research needs arise around (1) radiation-tolerant sensor design for in-vessel or in-core applications, including, where possible, non-invasive sensing approaches for key parameters that minimize the need to deploy sensors in-vessel, (2) approaches to exfiltrating data from in-vessel sensors while minimizing penetrations, (3) calibration of sensors in situ, and (4) optimizing sensor placements to maximize the information content while minimizing the number of sensors needed.
Review of Concrete Biodeterioration in Relation to Buried Nuclear Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turick, C; Berry, C.
Long-term storage of low level radioactive material in below-ground concrete disposal units (DUs) (Saltstone Disposal Facility) is a means of depositing wastes generated from nuclear operations of the U.S. Department of Energy. Based on the currently modeled degradation mechanisms, possible microbially induced effects on the structural integrity of buried low level wastes must be addressed. Previous international efforts related to microbial impacts on concrete structures that house low level radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the recent research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete vaults housing stored wastes and the wastes themselves. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Specific parameters, such as pH and osmotic pressure, must be within a certain range to allow for microbial growth, as must the availability and abundance of energy sources such as the components involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The results of this review suggest that microbial activity in Saltstone (grouted low level radioactive waste) is unlikely due to very high pH and osmotic pressure. Biodegradation of the concrete vaults housing the radioactive waste, however, is a possibility. The rate and degree of concrete biodegradation is dependent on numerous physical, chemical and biological parameters. Results from this review point to parameters to focus on for modeling activities and also possible options for mitigation that would minimize concrete biodegradation. In addition, key chemical components that drive microbial activity on concrete surfaces are discussed.
NASA Astrophysics Data System (ADS)
Wirth, E. A.; Frankel, A. D.; Vidale, J. E.; Stone, I.; Nasser, M.; Stephenson, W. J.
2017-12-01
The Cascadia subduction zone has a long history of M8 to M9 earthquakes, inferred from coastal subsidence, tsunami records, and submarine landslides. These megathrust earthquakes occur mostly offshore, and an improved characterization of the megathrust is critical for accurate seismic hazard assessment in the Pacific Northwest. We run numerical simulations of 50 magnitude 9 earthquake rupture scenarios on the Cascadia megathrust, using a 3-D velocity model based on geologic constraints and regional seismicity, as well as active and passive source seismic studies. We identify key parameters that control the intensity of ground shaking and resulting seismic hazard. Variations in the down-dip limit of rupture (e.g., extending rupture to the top of the non-volcanic tremor zone, compared to a completely offshore rupture) result in a 2-3x difference in peak ground acceleration (PGA) for the inland city of Seattle, Washington. Comparisons of our simulations to paleoseismic data suggest that rupture extending to the 1 cm/yr locking contour (i.e., mostly offshore) provides the best fit to estimates of coastal subsidence during previous Cascadia earthquakes, but further constraints on the down-dip limit from microseismicity, offshore geodetics, and paleoseismic evidence are needed. Similarly, our simulations demonstrate that coastal communities experience a four-fold increase in PGA depending upon their proximity to strong-motion-generating areas (i.e., high strength asperities) on the deeper portions of the megathrust. An improved understanding of the structure and rheology of the plate interface and accretionary wedge, and better detection of offshore seismicity, may allow us to forecast locations of these asperities during a future Cascadia earthquake. In addition to these parameters, the seismic velocity and attenuation structure offshore also strongly affects the resulting ground shaking. This work outlines the range of plausible ground motions from an M9 Cascadia earthquake, and highlights the importance of offshore studies for constraining critical parameters and seismic hazard in the Pacific Northwest.
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed at various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.
Parameter as a Switch Between Dynamical States of a Network in Population Decoding.
Yu, Jiali; Mao, Hua; Yi, Zhang
2017-04-01
Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes because of the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables
2008-12-01
into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation ... [flattened table fragment listing commodities with two qualitative ratings and an effect, e.g. Asian pears: high / high / decay; avocados: high / high / decay; bananas: moderate / high / decay; cantaloupe: high / moderate / decay; cherimoya: very high / high; ...] ... ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work
Ba, Kamarel; Thiaw, Modou; Lazar, Najih; Sarr, Alassane; Brochier, Timothée; Ndiaye, Ismaïla; Faye, Alioune; Sadio, Oumar; Panfili, Jacques; Thiaw, Omar Thiom; Brehmer, Patrice
2016-01-01
The stock of the Senegalese flat sardinella, Sardinella maderensis, is highly exploited in Senegal, West Africa. Its growth and reproduction parameters are key biological indicators for improving fisheries management. This study reviewed these parameters using landing data from small-scale fisheries in Senegal and literature information dating back more than 25 years. Age was estimated using length-frequency data to calculate growth parameters and assess the growth performance index. With global climate change there has been an increase in the average sea surface temperature along the Senegalese coast, but the length-weight parameters, sex ratio, size at first sexual maturity, period of reproduction and condition factor of S. maderensis have not changed significantly. These parameters of S. maderensis have hardly changed, despite high exploitation and fluctuations in the environmental conditions that affect the early development phases of small pelagic fish in West Africa. This lack of plasticity of the species with regard to the biological parameters studied should be considered when developing fishery management plans.
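Growth parameters of the kind reviewed here are commonly summarised with the von Bertalanffy growth function and the growth performance index φ′; the sketch below fits that curve to hypothetical age-length pairs (not the study's landing data) to show how L∞, K, t0 and φ′ might be obtained.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, K, t0):
    """Von Bertalanffy growth function: length at age t."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

# Hypothetical age (years) and length (cm) pairs, for illustration only
age = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
length = np.array([12.0, 18.0, 22.0, 25.0, 29.0, 31.0, 32.5])

(L_inf, K, t0), _ = curve_fit(von_bertalanffy, age, length, p0=[35.0, 0.5, 0.0])

# Growth performance index (phi-prime), often used to compare stocks
phi_prime = np.log10(K) + 2.0 * np.log10(L_inf)
print(f"L_inf={L_inf:.1f} cm, K={K:.2f} /yr, t0={t0:.2f} yr, phi'={phi_prime:.2f}")
```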
Modeling diurnal land temperature cycles over Los Angeles using downscaled GOES imagery
NASA Astrophysics Data System (ADS)
Weng, Qihao; Fu, Peng
2014-11-01
Land surface temperature is a key parameter for monitoring urban heat islands, assessing heat-related risks, and estimating building energy consumption. These environmental issues are characterized by high temporal variability. A possible solution from the remote sensing perspective is to utilize geostationary satellite images, for instance images from the Geostationary Operational Environmental Satellite (GOES) system and Meteosat Second Generation (MSG). These satellite systems, however, with coarse spatial but high temporal resolution (sub-hourly imagery at 3-10 km resolution), are often limited to meteorological forecasting and global climate modeling. Therefore, how to develop efficient and effective methods to disaggregate these coarse resolution images to a scale suitable for regional and local studies needs to be explored. In this study, we propose a least squares support vector machine (LSSVM) method to downscale GOES image data to half-hourly 1-km LSTs by fusing it with MODIS data products and Shuttle Radar Topography Mission (SRTM) digital elevation data. The result of downscaling suggests that the proposed method successfully disaggregated GOES images to half-hourly 1-km LSTs with an accuracy of approximately 2.5 K when validated against MODIS LSTs at the same overpass time. The synthetic LST datasets were further explored for monitoring of the surface urban heat island (UHI) in the Los Angeles region by extracting key diurnal temperature cycle (DTC) parameters. It is found that the datasets and DTC-derived parameters were more suitable for monitoring daytime rather than nighttime UHI. With the downscaled GOES 1-km LSTs, the diurnal temperature variations can be well characterized. An accuracy of about 2.5 K was achieved in terms of the fitted results at both 1 km and 5 km resolutions.
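LSSVM is not part of the common scientific Python stacks, so the sketch below uses kernel ridge regression, a closely related kernel method, as a stand-in to illustrate the downscaling idea: learn a mapping from coarse-resolution LST and auxiliary predictors (here elevation and NDVI, chosen as plausible examples) to fine-scale LST. The data are synthetic placeholders, not GOES, MODIS or SRTM products.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data standing in for collocated 1-km samples:
# coarse (GOES-like) LST, elevation, and NDVI -> target fine-scale LST
n = 500
coarse_lst = rng.uniform(290, 320, n)          # K
elevation = rng.uniform(0, 1500, n)            # m
ndvi = rng.uniform(0.0, 0.8, n)
fine_lst = coarse_lst - 0.0065 * elevation - 5.0 * ndvi + rng.normal(0, 1.0, n)

X = np.column_stack([coarse_lst, elevation, ndvi])
model = make_pipeline(StandardScaler(),
                      KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1))
model.fit(X, fine_lst)

# Predict fine-scale LST and report an RMSE-style accuracy on the training set
pred = model.predict(X)
rmse = np.sqrt(np.mean((pred - fine_lst) ** 2))
print(f"training RMSE: {rmse:.2f} K")
```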
Fire regime: history and definition of a key concept in disturbance ecology.
Krebs, Patrik; Pezzatti, Gianni B; Mazzoleni, Stefano; Talbot, Lee M; Conedera, Marco
2010-06-01
"Fire regime" has become, in recent decades, a key concept in many scientific domains. In spite of its wide spread use, the concept still lacks a clear and wide established definition. Many believe that it was first discussed in a famous report on national park management in the United States, and that it may be simply defined as a selection of a few measurable parameters that summarize the fire occurrence patterns in an area. This view has been uncritically perpetuated in the scientific community in the last decades. In this paper we attempt a historical reconstruction of the origin, the evolution and the current meaning of "fire regime" as a concept. Its roots go back to the 19th century in France and to the first half of the 20th century in French African colonies. The "fire regime" concept took time to evolve and pass from French into English usage and thus to the whole scientific community. This coincided with a paradigm shift in the early 1960s in the United States, where a favourable cultural, social and scientific climate led to the natural role of fires as a major disturbance in ecosystem dynamics becoming fully acknowledged. Today the concept of "fire regime" refers to a collection of several fire-related parameters that may be organized, assembled and used in different ways according to the needs of the users. A structure for the most relevant categories of parameters is proposed, aiming to contribute to a unified concept of "fire regime" that can reconcile the physical nature of fire with the socio-ecological context within which it occurs.
Model predictions of ocular injury from 1315-nm laser light
NASA Astrophysics Data System (ADS)
Polhamus, Garrett D.; Zuclich, Joseph A.; Cain, Clarence P.; Thomas, Robert J.; Foltz, Michael
2003-06-01
With the advent of future weapons systems that employ high energy lasers, the 1315 nm wavelength will present a new laser safety hazard to the armed forces. Experiments in non-human primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular and retinal lesions, as a function of pulse duration and spot size at the cornea. To improve our understanding of these phenomena, there is a need for a mathematical model that properly predicts these injuries and their dependence on the appropriate exposure parameters. This paper describes the use of a finite difference model of laser thermal injury in the cornea and retina. The model was originally developed for use with shorter wavelength laser irradiation, and as such, requires estimation of several key parameters used in the computations. The predictions from the model are compared to the experimental data, and conclusions are drawn regarding the ability of the model to properly follow the published observations at this wavelength.
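The cited model's formulation and parameter values are not reproduced here; as a generic illustration of the approach, the sketch below advances a one-dimensional explicit finite-difference heat-conduction model with Beer-Lambert volumetric heating and accumulates an Arrhenius damage integral of the type typically used in laser thermal injury modelling. All tissue and exposure parameters are placeholder values.

```python
import numpy as np

# Placeholder tissue and laser parameters (illustrative only)
L = 1.0e-3            # slab thickness (m)
nx = 101
dx = L / (nx - 1)
alpha = 1.4e-7        # thermal diffusivity (m^2/s)
mu_a = 1.0e3          # absorption coefficient (1/m)
irradiance = 5.0e5    # W/m^2 at the surface
rho_c = 4.0e6         # volumetric heat capacity (J/m^3/K)
t_pulse = 0.5         # exposure duration (s)

dt = 0.4 * dx**2 / alpha          # below the explicit stability limit
steps = int(t_pulse / dt)

x = np.linspace(0.0, L, nx)
T = np.full(nx, 37.0)             # baseline temperature (deg C)
source = irradiance * mu_a * np.exp(-mu_a * x) / rho_c   # Beer-Lambert heating (K/s)

# Arrhenius damage integral parameters (placeholder values)
A, Ea, R = 3.1e98, 6.28e5, 8.314
omega = np.zeros(nx)

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap + source)
    T[0], T[-1] = T[1], 37.0      # insulated front face, fixed deep boundary
    omega += dt * A * np.exp(-Ea / (R * (T + 273.15)))

print(f"peak temperature: {T.max():.1f} C, peak damage integral: {omega.max():.2e}")
```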
Foundations for Measuring Volume Rendering Quality
NASA Technical Reports Server (NTRS)
Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
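A difference-classification suite of the kind outlined above could start from simple per-pixel metrics; the sketch below computes an RMSE and a coarse classification of pixel differences between a benchmark and a test rendering, with arbitrary example thresholds rather than the paper's specific metrics.

```python
import numpy as np

def compare_volume_renderings(img_a, img_b, small=2.0, large=10.0):
    """Compare two volume-rendered images (float arrays, same shape).

    Returns the RMSE plus the fraction of pixels whose absolute difference is
    negligible, small, or large (threshold values are arbitrary examples)."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return {"rmse": rmse,
            "negligible": float(np.mean(diff < small)),
            "small": float(np.mean((diff >= small) & (diff < large))),
            "large": float(np.mean(diff >= large))}

rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, (256, 256))        # stand-in for a benchmark image
test = reference + rng.normal(0, 3, (256, 256))    # stand-in for a test rendering
print(compare_volume_renderings(reference, test))
```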
Disease scoring systems for oral lichen planus; a critical appraisal
Wang, Jing
2015-01-01
The aim of the present study has been to critically review 22 disease scoring systems (DSSs) on oral lichen planus (OLP) that have been reported in the literature during the past decades. Although the presently available DSSs may all have some merit, particularly for research purposes, the diversity of the objective and subjective parameters used in these systems and the lack of acceptance of any single system for uniform use indicate a need for an international, authorized consensus meeting on this subject. Because of the natural course of OLP, characterized by remissions and exacerbations, and also due to the varying distribution pattern and the varying clinical types, e.g. reticular and erosive, the relevance of a DSS based on morphologic parameters is somewhat questionable. Instead, one may consider looking only for a quality of life scoring system adapted for use in OLP patients. Key words: Oral lichen planus, disease scoring system, classification. PMID:25681372
Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.
Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha
2012-11-30
Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors such as the cell density can be kept constant by turbidostatic control, light intensity and iron content, with their strong impact on productivity, can be optimized. Both are key parameters because photosynthetic activity depends strongly on them. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
Economic load dispatch (ELD) is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named dynamic harmony search with polynomial mutation (DHSPM), to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
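The sketch below is a simplified rendering of the general idea, not the exact published DHSPM: a harmony search loop whose HMCR and PAR vary over the iterations, with Deb's polynomial mutation used as the pitch-adjustment step, applied to a toy nonconvex test function standing in for a valve-point ELD cost function. The dynamic schedules and parameter values are illustrative assumptions.

```python
import numpy as np

def polynomial_mutation(x, lo, hi, rng, eta=20.0):
    """Deb's polynomial mutation of a single decision variable."""
    u = rng.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return np.clip(x + delta * (hi - lo), lo, hi)

def dhspm(f, bounds, hms=20, iters=2000, seed=1):
    """Harmony search with dynamically varied HMCR/PAR and polynomial
    mutation (a simplified sketch, not the published algorithm)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    memory = rng.uniform(lo, hi, size=(hms, dim))
    fitness = np.apply_along_axis(f, 1, memory)
    for it in range(iters):
        # key parameters varied dynamically instead of being fixed up front
        hmcr = 0.70 + 0.25 * it / iters          # 0.70 -> 0.95
        par = 0.45 - 0.25 * it / iters           # 0.45 -> 0.20
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = memory[rng.integers(hms), d]   # memory consideration
                if rng.random() < par:
                    new[d] = polynomial_mutation(new[d], lo[d], hi[d], rng)
            else:
                new[d] = rng.uniform(lo[d], hi[d])      # random selection
        val = f(new)
        worst = np.argmax(fitness)
        if val < fitness[worst]:                        # replace the worst harmony
            memory[worst], fitness[worst] = new, val
    best = np.argmin(fitness)
    return memory[best], fitness[best]

# Toy nonconvex cost standing in for a valve-point dispatch cost function
cost = lambda x: float(np.sum(x ** 2) + 10.0 * np.sum(1.0 - np.cos(x)))
x_best, f_best = dhspm(cost, bounds=[(-5.0, 5.0)] * 5)
print(x_best, f_best)
```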
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.
Karthikeyan, M; Raja, T Sree Ranga
2015-01-01
Economic load dispatch (ELD) is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named dynamic harmony search with polynomial mutation (DHSPM), to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plummer, S.E.; Malthus, T.J.; Clark, C.D.
1997-06-01
Seagrass meadows are a key component of shallow coastal environments, acting as a food resource and nursery and contributing to water oxygenation. Given the importance of these meadows and their susceptibility to anthropogenic disturbance, it is vital that the extent and growth of seagrass are monitored. Remote sensing techniques offer the potential to determine biophysical characteristics of seagrass. This paper presents observations on the development and testing of an invertible model of seagrass canopy reflectance. The model is an adaptation of a land surface reflectance model to incorporate the effects of attenuation and scattering of incoming radiative flux in water. Sensitivity analysis reveals that the subsurface reflectance is strongly dependent on the water depth, the vegetation amount (the parameter which we wish to determine), and turbidity, respectively. By contrast, the chlorophyll concentration of the water and gelbstoff are relatively unimportant. Water depth and turbidity need to be known or accommodated in any inversion as free parameters.
NASA Astrophysics Data System (ADS)
Wisniewski, H.; Gourdain, P.-A.
2017-10-01
APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver in which a FastCGI application computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins. FastCGI also speeds up calculations over PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphic user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to using SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
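APOLLO itself is a web service; as a rough indication of the kind of quantities it computes, the sketch below evaluates a few standard plasma parameters (electron plasma frequency, Debye length, thermal speed, cyclotron frequency, particles per Debye sphere) in SI units with temperature in electron-volts, following the convention stated above. The example input values are arbitrary.

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19       # elementary charge (C)
me = 9.1093837015e-31     # electron mass (kg)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def plasma_parameters(n_e, T_e_eV, B=0.0):
    """Compute a few key plasma parameters from electron density (m^-3),
    electron temperature (eV) and magnetic field (T)."""
    T_e_J = T_e_eV * e
    omega_pe = np.sqrt(n_e * e**2 / (eps0 * me))        # plasma frequency (rad/s)
    debye = np.sqrt(eps0 * T_e_J / (n_e * e**2))        # Debye length (m)
    v_th = np.sqrt(T_e_J / me)                          # electron thermal speed (m/s)
    omega_ce = e * B / me                                # electron cyclotron freq (rad/s)
    n_debye = (4.0 / 3.0) * np.pi * n_e * debye**3      # particles per Debye sphere
    return {"omega_pe": omega_pe, "debye_length": debye,
            "v_thermal": v_th, "omega_ce": omega_ce, "N_D": n_debye}

# Example: a laboratory-like plasma (illustrative values)
print(plasma_parameters(n_e=1e18, T_e_eV=10.0, B=0.5))
```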
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
Lapertot, Miléna; Seignez, Chantal; Ebrahimi, Sirous; Delorme, Sandrine; Peringer, Paul
2007-06-01
This study focuses on the mass cultivation of bacteria adapted to the degradation of a mixture composed of toluene, ethylbenzene, and o-, m- and p-xylenes (TEX). For the cultivation process, the Substrate Pulse Batch (SPB) technique was adapted under well-automated conditions. The key parameters to be monitored were handled by LabVIEW software, including temperature, pH, dissolved oxygen and turbidity. Other parameters, such as biomass, ammonium or residual substrate concentrations, required offline measurements. The SPB technique was successfully tested experimentally on TEX. The overall behavior of the mixed bacterial population was observed and discussed along the cultivation process. Carbon and nitrogen limitations were shown to affect the integrity of the bacterial cells as well as their production of exopolymeric substances (EPS). Average productivity and yield values successfully reached the industrial specifications, which were 0.45 kg(DW) m⁻³ d⁻¹ and 0.59 g(DW) g(C)⁻¹, respectively. The accuracy and reproducibility of the results obtained establish the controlled SPB process as a feasible technique.
Towards a complete characterisation of Ganymede's environment
NASA Astrophysics Data System (ADS)
Cessateur, Gaël; Barthélémy, Mathieu; Lilensten, Jean; Dudok de Wit, Thierry; Kretzschmar, Matthieu; Mbemba Kabuiku, Lydie
2013-04-01
In the framework of the JUICE mission to the Jovian system, a complete picture of the interaction between Ganymede's atmosphere and the external forcing is needed. This will allow us to constrain instrument performance according to the mission objectives. The main source of information on the upper atmosphere is the non-LTE UV-visible-near-IR emissions. These emissions are induced both by the incident solar UV flux and by particle precipitation. This work aims at characterizing the impact of this external forcing, and then at deriving key physical parameters that are measurable by an orbiter, for example the oxygen red line at 630 nm or the resonant oxygen line at 130 nm. We will also present the 4S4J instrument, a proposed EUV radiometer, which will provide the local solar EUV flux, an invaluable parameter for the JUICE mission. Based on new technologies and a new design, only two passbands are considered for reconstructing the whole EUV spectrum.
Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multi-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d) usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
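The emulation step can be illustrated with a toy surrogate: the sketch below trains a Gaussian-process emulator on 26 "simulation" outputs of a smooth, made-up scalar function over a three-parameter unit cube and predicts the value (with uncertainty) at a new parameter setting. The function and design are placeholders, not the Mira-Titan suite or its sampling scheme.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy "simulation": a smooth scalar summary (stand-in for, e.g., a power
# spectrum amplitude) over a 3-parameter space scaled to the unit cube
def run_simulation(theta):
    a, b, c = theta
    return b**2 * (1.0 + 0.5 * a) * np.exp(-0.3 * (c + 1.0) ** 2)

# A small design of 26 "runs", echoing the emulator built from 26 models
design = rng.uniform(0.0, 1.0, size=(26, 3))
outputs = np.array([run_simulation(t) for t in design])

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3, 0.3])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(design, outputs)

# Predict at a new parameter setting with an uncertainty estimate
theta_new = np.array([[0.4, 0.6, 0.2]])
mean, std = emulator.predict(theta_new, return_std=True)
print(f"emulated value: {mean[0]:.4f} +/- {std[0]:.4f}")
print(f"true value:     {run_simulation(theta_new[0]):.4f}")
```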
The mira-titan universe. Precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
Scaling Laws of Microactuators and Potential Applications of Electroactive Polymers in MEMS
NASA Technical Reports Server (NTRS)
Liu, Chang; Bar-Cohen, Y.
1999-01-01
Besides the scale factor that distinguishes the various species, biological muscles fundamentally change little between species, indicating a highly optimized system. Electroactive polymer actuators offer the closest resemblance to biological muscles; however, besides their large actuation displacement, these materials fall short with regard to actuation force. As improved materials are emerging, it is becoming necessary to address key issues such as the need for effective electromechanical modeling and guiding parameters in scaling the actuators. In this paper, we review the scaling laws for four major actuation mechanisms that are of relevance to micro electromechanical systems: electrostatic actuation, magnetic actuation, thermal bimetallic actuation, and piezoelectric actuation.
A short review on thermosonic flip chip bonding
NASA Astrophysics Data System (ADS)
Suppiah, Sarveshvaran; Ong, Nestor Rubio; Sauli, Zaliman; Sarukunaselan, Karunavani; Alcain, Jesselyn Barro; Shahimin, Mukhzeer Mohamad; Retnasamy, Vithyacharan
2017-09-01
This review studies the evolution, key findings, critical technical challenges, solutions and bonding equipment of thermosonic flip chip bonding. Based on the review, it was found that ultrasonic power, bonding time and bonding force are the three main critical parameters that need to be optimized in order to achieve sound and reliable bonding between the die and the substrate. Close monitoring of the ultrasonic power helped to prevent over-bonding phenomena on flexible substrates. Gold stud bumping is commonly used in thermosonic bonding instead of solder due to the better reliability obtained in LED and optoelectronic packages. The review also includes short details on the thermosonic bonding equipment available in the semiconductor industry.
Compilation and Review of Supersonic Business Jet Studies from 1963 through 1995
NASA Technical Reports Server (NTRS)
Maglieri, Domenic J.
2011-01-01
This document provides a compilation of all known supersonic business jet studies/activities conducted from 1963 through 1995 by universities, industry and NASA. First, an overview is provided which chronologically displays all known supersonic business jet studies/activities conducted by universities, industry, and NASA, along with the key features of the study vehicles relative to configuration, planform, operational parameters, and the source of the study. This is followed by a brief description of each study along with some comments on the study. Mention is made as to whether the studies addressed cost, market needs, and the environmental issues of airport-community noise, sonic boom, and ozone.
Online analysis and process control in recombinant protein production (review).
Palmer, Shane M; Kunji, Edmund R S
2012-01-01
Online analysis and control is essential for efficient and reproducible bioprocesses. A key factor in real-time control is the ability to measure critical variables rapidly. Online in situ measurements are the preferred option and minimize the potential loss of sterility. The challenge is to provide sensors with a good lifespan that withstand harsh bioprocess conditions, remain stable for the duration of a process without the need for recalibration, and offer a suitable working range. In recent decades, many new techniques have arisen that promise to extend the possibilities of analysis and control, not only by providing new parameters for analysis but also by improving accepted, well-practiced measurements.
Parameters of Technological Growth
ERIC Educational Resources Information Center
Starr, Chauncey; Rudman, Richard
1973-01-01
Examines the factors involved in technological growth and identifies the key parameters as societal resources and societal expectations. Concludes that quality of life can only be maintained by reducing population growth, since this parameter is the product of material levels, overcrowding, food, and pollution. (JR)
[Key content and formulation of national Chinese materia medica resources survey at county level].
Lu, Jian-Wei; Zhang, Xiao-Bo; Li, Hai-Tao; Guo, Lan-Ping; Zhao, Run-Huai; Zhang, Ben-Gang; Sun, Li-Ying; Huang, Lu-Qi
2013-08-01
Drawing on the National Water Census, the National Population Census, the National Land and Resources Survey, and experience from pilot work for the national Chinese materia medica resources (CMMR) survey, the county-level survey is the key component of the whole national survey and comprises three key links: organization and management, field survey, and data sorting. The organization and management work of the national CMMR survey involves four key elements: defining goals and tasks, assembling a practicable crew, preparing the survey directory, and providing security assurance. The field survey work involves five key elements: preparation for the field survey, selection of the key survey areas (samples), completion of the questionnaires, video data collection, and collection of specimens and other physical materials. The data sorting work involves three key elements: data, specimens, and census results.
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
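Without reproducing the instrument's actual modulation scheme, the sketch below shows the generic form such an on-board step can take: a linear least-squares fit of a modulated sample train to a small set of basis functions, from which intensity and two polarimetric parameters (and a degree of linear polarization) are extracted. The modulation model and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed modulation: each sample k sees the signal through an analyzer whose
# phase phi_k varies over the frame (a placeholder model, not the
# instrument's actual modulation scheme)
n_samples = 64
phi = np.linspace(0.0, 4.0 * np.pi, n_samples)

# "True" Stokes-like parameters for a synthetic pixel
I_true, Q_true, U_true = 100.0, 12.0, -7.0
signal = 0.5 * (I_true + Q_true * np.cos(phi) + U_true * np.sin(phi))
signal += rng.normal(0.0, 0.5, n_samples)      # detector noise

# Linear least-squares fit: design matrix columns are the basis functions
A = 0.5 * np.column_stack([np.ones(n_samples), np.cos(phi), np.sin(phi)])
(I_fit, Q_fit, U_fit), *_ = np.linalg.lstsq(A, signal, rcond=None)

dolp = np.hypot(Q_fit, U_fit) / I_fit           # degree of linear polarization
print(f"I={I_fit:.1f}  Q={Q_fit:.1f}  U={U_fit:.1f}  DoLP={dolp:.3f}")
```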
Business model design for a wearable biofeedback system.
Hidefjäll, Patrik; Titkova, Dina
2015-01-01
Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehab care and occupational health, stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs, according to the Business Model Canvas framework. Needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".
Continuous Variable Quantum Key Distribution Using Polarized Coherent States
NASA Astrophysics Data System (ADS)
Vidiella-Barranco, A.; Borelli, L. F. M.
We discuss a continuous-variable method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the variables known as Stokes parameters, rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i = 1, 2, 3), constitute a set of non-commuting operators, with the precision of simultaneous measurements of a pair of them limited by an uncertainty-like relation. Alice transmits a conveniently modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After performing reconciliation and privacy amplification procedures, it is possible to distill a secret common key. We also consider a non-ideal situation, in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
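For reference, the non-commuting structure invoked above is the textbook one for the quantum Stokes operators of a two-mode field, shown below in a common convention; it is standard quantum optics rather than anything specific to this protocol.

```latex
% Stokes operators for modes \hat{a}_x, \hat{a}_y (a common convention)
\hat{S}_1 = \hat{a}_x^\dagger \hat{a}_x - \hat{a}_y^\dagger \hat{a}_y , \qquad
\hat{S}_2 = \hat{a}_x^\dagger \hat{a}_y + \hat{a}_y^\dagger \hat{a}_x , \qquad
\hat{S}_3 = i\,( \hat{a}_y^\dagger \hat{a}_x - \hat{a}_x^\dagger \hat{a}_y )

% Non-commutativity and the resulting uncertainty-like relation
[\hat{S}_1, \hat{S}_2] = 2i\hat{S}_3 \ \text{(and cyclic permutations)}, \qquad
\Delta S_1 \, \Delta S_2 \ \ge\ |\langle \hat{S}_3 \rangle|
```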
User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2
NASA Technical Reports Server (NTRS)
Lee, F. C.
1980-01-01
A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices, for buck, boost, and buck/boost switching regulators which are relevant to SCM design considerations are included to facilitate frequent references.
Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno
2017-01-01
Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change, with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge of the impact of climate change on key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat, with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows it to endure long periods of starvation. Additionally, we applied the DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a varying environment, in food and temperature, on organism condition and energy use. The DEB model developed here for L. elliptica allowed us to improve benchmark knowledge on the ecophysiology of this key species, providing new insights into the role of food availability and temperature in its life cycle and reproduction strategy. PMID:28850607
Agüera, Antonio; Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno
2017-01-01
Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change, with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge of the impact of climate change on key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat, with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows it to endure long periods of starvation. Additionally, we applied the DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a varying environment, in food and temperature, on organism condition and energy use. The DEB model developed here for L. elliptica allowed us to improve benchmark knowledge on the ecophysiology of this key species, providing new insights into the role of food availability and temperature in its life cycle and reproduction strategy.
Agüera, Antonio; Collard, Marie; Jossart, Quentin; Moreau, Camille; Danis, Bruno
2015-01-01
Marine organisms in Antarctica are adapted to an extreme ecosystem, including extremely stable temperatures and strong seasonality due to changes in day length. It is now largely accepted that Southern Ocean organisms are particularly vulnerable to global warming, with some regions already being challenged by a rapid increase in temperature. Climate change affects both the physical and biotic components of marine ecosystems and will have an impact on the distribution and population dynamics of Antarctic marine organisms. To predict and assess the effect of climate change on marine ecosystems, a more comprehensive knowledge of the life history and physiology of key species is urgently needed. In this study we estimate the Dynamic Energy Budget (DEB) model parameters for a key benthic Antarctic species, the sea star Odontaster validus, using available information from the literature and experiments. The DEB theory is unique in capturing the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model allows for the inclusion of the different life history stages, and thus becomes a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. The DEB model presented here includes the estimation of reproduction handling rules for the development of simultaneous oocyte cohorts within the gonad. Additionally, it links the DEB model reserves to the pyloric caeca, an organ whose function has long been ascribed to energy storage. Model parameters describe the slowed-down metabolism of long-living animals that mature slowly. O. validus has a large reserve that, matching its low maintenance costs, allows it to withstand long periods of starvation. Gonad development is continuous, and individual cohorts developed within the gonads grow in biomass following a power function of the age of the cohort. The DEB model developed here for O. validus allowed us to increase our knowledge of the ecophysiology of this species, providing new insights into the role of food availability and temperature in its life cycle and reproduction strategy.
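As a minimal illustration of the model class used in the three studies above, the sketch below forward-integrates a drastically simplified standard-DEB reserve/structure system with an Arrhenius temperature correction and a constant scaled food level. Every parameter value is a placeholder chosen for illustration; none are the published estimates for L. elliptica or O. validus.

```python
import numpy as np

def deb_growth(days=3650, dt=1.0, f=0.8, T=273.15 + 1.0):
    """Forward-integrate a minimal standard-DEB reserve/structure model.
    All parameter values are placeholders, not published estimates."""
    # Placeholder primary parameters (units: J, cm, d)
    p_Am = 30.0      # max surface-specific assimilation rate (J/cm^2/d)
    v_dot = 0.02     # energy conductance (cm/d)
    kappa = 0.8      # fraction of mobilised reserve allocated to soma
    p_M = 5.0        # volume-specific somatic maintenance (J/cm^3/d)
    E_G = 2800.0     # cost of structure (J/cm^3)
    T_A, T_ref = 8000.0, 293.15           # Arrhenius temperature, reference (K)

    c_T = np.exp(T_A / T_ref - T_A / T)   # temperature correction factor
    p_Am, v_dot, p_M = c_T * p_Am, c_T * v_dot, c_T * p_M

    V, E = 1e-6, 1e-4                     # initial structure (cm^3) and reserve (J)
    for _ in range(int(days / dt)):
        E_dens = E / V
        p_A = f * p_Am * V ** (2.0 / 3.0)                        # assimilation
        p_C = E_dens * (E_G * v_dot * V ** (2.0 / 3.0) + p_M * V) \
              / (E_G + kappa * E_dens)                            # mobilisation
        E += dt * (p_A - p_C)
        V += dt * max(kappa * p_C - p_M * V, 0.0) / E_G           # growth (no shrinking)
    return V, E

V_final, E_final = deb_growth()
print(f"structural volume after 10 years: {V_final:.3f} cm^3, reserve: {E_final:.1f} J")
```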
Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael
2007-01-01
Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141
Eight Key Facets of Small Business Management.
ERIC Educational Resources Information Center
Scott, James Calvert
1980-01-01
Identifies eight key facets of small business management and suggests activities that may be used to assist in their development. The key facets are (1) product or service, (2) competition, (3) marketing strategies, (4) personnel needs, (5) equipment and facility needs, (6) finances, (7) planning, and (8) entrepreneurship. (JOW)
Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited
NASA Astrophysics Data System (ADS)
Ghergut, Julia; Behrens, Horst; Sauter, Martin
2017-04-01
Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty in design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, with shorter metering duration required and higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion, nor with material fluxes through (fluid-rock, or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives enabling dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 2001), enabling a dispersion-independent characterization of multiple-scale diffusion; (iii) attempt to determine both advective and non-advective transport parameters from one and the same conservative-tracer signal (relying on 'third-party' knowledge), or from twin signals of a so-called 'dual' tracer pair, e.g.: using tracers with contrasting reactivity and partitioning behavior to determine residual saturation in depleted oilfields (Tomich et al. 1973), or to determine advective parameters (Ghergut et al. 2014); using early-time signals of conservative and sorptive tracers for propped-fracture characterization (Karmakar et al. 2015); using mid-time signals of conservative tracers for reservoir-borne inflow profiling in multi-frac systems (Ghergut et al. 2016), etc. The poster describes new uses of type-(iii) techniques for the specific purposes of shale-gas reservoir characterization, productivity monitoring, and diagnostics and engineering of 're-frac' treatments, based on parameter sensitivity findings from the German BMWi research project "TRENDS" (Federal Ministry for Economic Affairs and Energy, FKZ 0325515) and from the EU-H2020 project "FracRisk" (grant no. 640979).
Mangenah, Collin; Mavhu, Webster; Hatzold, Karin; Biddle, Andrea K; Madidi, Ngonidzashe; Ncube, Getrude; Mugurungi, Owen; Ticklay, Ismail; Cowan, Frances M; Thirumurthy, Harsha
2015-08-15
Safe and cost-effective programs for implementing early infant male circumcision (EIMC) in Africa need to be piloted. We present results of a relative cost analysis within a randomized noninferiority trial of EIMC comparing the AccuCirc device with the Mogen clamp in Zimbabwe. Between January and June 2013, male infants who met inclusion criteria were randomized, using a 2:1 allocation ratio, to EIMC by either AccuCirc or Mogen clamp conducted by a doctor. We evaluated the overall unit cost and the key cost drivers of EIMC using both AccuCirc and Mogen clamp. Direct costs included consumable and nonconsumable supplies, device, personnel, associated staff training, and environmental costs. Indirect costs comprised capital and support personnel costs. In 1-way sensitivity analyses, we assessed potential changes in unit costs due to variations in the main parameters, one at a time, holding all other values constant. The unit costs of EIMC using AccuCirc and Mogen clamp were $49.53 and $55.93, respectively. Key cost drivers were consumable supplies, capacity utilization, personnel costs, and device price. Unit prices are likely to be lowest at full capacity utilization and to increase as capacity utilization decreases. Unit prices also fall with lower personnel salaries and increase with higher device prices. EIMC has a lower unit cost when using AccuCirc compared with the Mogen clamp. To minimize unit costs, countries planning to scale up EIMC using AccuCirc need to control the costs of consumables and personnel. There is also a need to negotiate a reasonable device price and to maximize capacity utilization.
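A one-way sensitivity analysis of a unit-cost model like the one described above can be sketched in a few lines. The cost categories and base-case values below are illustrative placeholders, not the study's data; the structure simply shows how each parameter is varied in turn while all others are held constant.

```python
# Illustrative one-way sensitivity analysis for an EIMC unit-cost model.
# Category names and base-case values are placeholders, not the study's data.

BASE = {
    "consumables": 18.0,      # USD per procedure
    "device": 10.0,           # USD per procedure
    "personnel": 12.0,        # USD per procedure (salary / procedures performed)
    "capital_support": 9.5,   # indirect costs allocated per procedure
}

def unit_cost(params):
    """Unit cost = sum of per-procedure direct and indirect cost components."""
    return sum(params.values())

def one_way_sensitivity(base, rel_change=0.25):
    """Vary each parameter by +/- rel_change, holding all others constant."""
    results = {}
    for name in base:
        lo, hi = dict(base), dict(base)
        lo[name] *= (1 - rel_change)
        hi[name] *= (1 + rel_change)
        results[name] = (unit_cost(lo), unit_cost(hi))
    return results

if __name__ == "__main__":
    print("base unit cost:", unit_cost(BASE))
    for name, (lo, hi) in one_way_sensitivity(BASE).items():
        print(f"{name:16s} {lo:6.2f} .. {hi:6.2f}")
```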
NASA Astrophysics Data System (ADS)
Ramanan, Natarajan; Kozman, Austin; Sims, James B.
2000-06-01
As the lithography industry moves toward finer features, specifications on the temperature uniformity of bake plates are expected to become more stringent. Consequently, aggressive improvements to conventional bake station designs are needed to make them perform significantly better than current market requirements. To this end, we have conducted a rigorous study that combines state-of-the-art simulation tools and experimental methods to predict the impact of the parameters that influence the temperature uniformity of the wafer in proximity bake. The key observation from this detailed study is that the temperature uniformity of the wafer in proximity mode depends on a number of parameters in addition to the uniformity of the bake plate itself. These parameters include the lid design, the air flow distribution around the bake chamber, the bake plate design, and the flatness of the bake plate and wafer. By performing careful experimental studies guided by extensive numerical simulations, we were able to understand the relative importance of each of these parameters. In an orderly fashion, we made appropriate design changes to curtail or eliminate the nonuniformity caused by each of these parameters. After implementing all these changes, we are now able to match or improve the temperature uniformity of a wafer in proximity mode relative to a contact measurement on the bake plate. The wafer temperature uniformity is also very close to the theoretically predicted uniformity of the wafer.
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te, Ti, with consecutive 2-3 minute windshield-wiper scans of 1000x500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.
Experimental study designs to improve the evaluation of road mitigation measures for wildlife.
Rytwinski, Trina; van der Ree, Rodney; Cunnington, Glenn M; Fahrig, Lenore; Findlay, C Scott; Houlahan, Jeff; Jaeger, Jochen A G; Soanes, Kylie; van der Grift, Edgar A
2015-05-01
An experimental approach to road mitigation that maximizes inferential power is essential to ensure that mitigation is both ecologically-effective and cost-effective. Here, we set out the need for and standards of using an experimental approach to road mitigation, in order to improve knowledge of the influence of mitigation measures on wildlife populations. We point out two key areas that need to be considered when conducting mitigation experiments. First, researchers need to get involved at the earliest stage of the road or mitigation project to ensure the necessary planning and funds are available for conducting a high quality experiment. Second, experimentation will generate new knowledge about the parameters that influence mitigation effectiveness, which ultimately allows better prediction for future road mitigation projects. We identify seven key questions about mitigation structures (i.e., wildlife crossing structures and fencing) that remain largely or entirely unanswered at the population-level: (1) Does a given crossing structure work? What type and size of crossing structures should we use? (2) How many crossing structures should we build? (3) Is it more effective to install a small number of large-sized crossing structures or a large number of small-sized crossing structures? (4) How much barrier fencing is needed for a given length of road? (5) Do we need funnel fencing to lead animals to crossing structures, and how long does such fencing have to be? (6) How should we manage/manipulate the environment in the area around the crossing structures and fencing? (7) Where should we place crossing structures and barrier fencing? We provide experimental approaches to answering each of them using example Before-After-Control-Impact (BACI) study designs for two stages in the road/mitigation project where researchers may become involved: (1) at the beginning of a road/mitigation project, and (2) after the mitigation has been constructed; highlighting real case studies when available. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Sabaa, S.M.
1992-01-01
This study is concerned with the efficiency of World Bank projects in Egypt. The study seeks improvements in the methods of evaluating public sector projects in Egypt. Two approaches are employed: (1) project identification, to optimally allocate Egypt's and the World Bank's resources; (2) project appraisal, to assess the economic viability and efficiency of investments. The electricity sector is compared with the agriculture sector as a means of employing project identification for priority ordering of investment for development in Egypt. The key criteria for evaluation are the impacts of developments in each sector upon Egypt's national objectives and needs. These include employment opportunities, growth, alleviation of poverty, cross comparison of per capita consumption in each sector, economic rate of return, national security, balance of payments, and foreign debt. The allocation of scarce investments would have been more efficient in agriculture than in electricity in meeting Egypt's national objectives and needs. World Bank lending programs in Egypt reveal a priority ordering of electricity over agriculture and rural development. World Bank development projects in Egypt have not been optimally identified, and its programs have not followed an efficient allocation of the World Bank's and Egypt's resources. The key parameters in evaluating the economic viability and efficiency of development projects are: (1) the discount rate (the opportunity cost of public funds); (2) the exchange rate; and (3) the cost of major inputs, as approximated by shadow prices of labor, water, electricity, and transportation for development projects. Alternative approaches to estimating the opportunity cost of public funds are presented. The parameters used in evaluating the efficiency of projects were not accurately estimated in the appraisal stage of the World Bank projects in Egypt, resulting in false or misleading information concerning the economic viability and efficiency of the projects.
Leung, Leanne; de Lemos, Mário L; Kovacic, Laurel
2017-01-01
Background: With the rising cost of new oncology treatments, it is no longer sustainable to base initial drug funding decisions primarily on prospective clinical trials, as their performance in real-life populations is often difficult to determine. In British Columbia, an approach to evidence building is to retrospectively analyse patient outcomes using observational research on an ad hoc basis. Methods: The deliberative framework was constructed in three stages: framework design, framework validation and treatment programme characterization, and key informant interviews. Framework design was informed by a literature review and analyses of provincial and national decision-making processes. Treatment programmes funded between 2010 and 2013 were used for framework validation. A selection concordance rate of 80% amongst three reviewers was considered to validate the framework. Key informant interviews were conducted to determine the utility of this deliberative framework. Results: A multi-domain deliberative framework with 15 assessment parameters was developed. A selection concordance rate of 84.2% was achieved for content validation of the framework. Nine treatment programmes from five different tumour groups were selected for retrospective outcomes analysis. Five contributory factors to funding uncertainties were identified. Key informants agreed that the framework is a comprehensive tool that targets the key areas involved in the funding decision-making process. Conclusions: The oncology-based deliberative framework can be routinely used to assess treatment programmes from the major tumour sites for retrospective outcomes analysis. Key informants indicate this is a value-added tool that will provide insight into the current prospective funding model.
Analysis of Critical Earth Observation Priorities for Societal Benefit
NASA Astrophysics Data System (ADS)
Zell, E. R.; Huff, A. K.; Carpenter, A. T.; Friedl, L.
2011-12-01
To ensure that appropriate near real-time (NRT) and historical Earth observation data are available to benefit society and meet end-user needs, the Group on Earth Observations (GEO) sponsored a multi-disciplinary study to identify a set of critical and common Earth observations associated with 9 Societal Benefit Areas (SBAs): Agriculture, Biodiversity, Climate, Disasters, Ecosystems, Energy, Health, Water, and Weather. GEO is an intergovernmental organization working to improve the availability, access, and use of Earth observations to benefit society through a Global Earth Observation System of Systems (GEOSS). The study, overseen by the GEO User Interface Committee, focused on the "demand" side of Earth observation needs: which users need what types of data, and when? The methodology for the study was a meta-analysis of over 1,700 publicly available documents addressing Earth observation user priorities, under the guidance of expert advisors from around the world. The result was a ranking of 146 Earth observation parameters that are critical and common to multiple SBAs, based on an ensemble of 4 statistically robust methods. Within the results, key details emerged on NRT observations needed to serve a broad community of users. The NRT observation priorities include meteorological parameters, vegetation indices, land cover and soil property observations, water body and snow cover properties, and atmospheric composition. The results of the study and examples of NRT applications will be presented. The applications are as diverse as the list of priority parameters. For example, NRT meteorological and soil moisture information can support monitoring and forecasting for more than 25 infectious diseases, including epidemic diseases, such as malaria, and diseases of major concern in the U.S., such as Lyme disease. Quickly evolving events that impact forests, such as fires and insect outbreaks, can be monitored and forecasted with a combination of vegetation indices, fuel moisture content, burn scars, and meteorological parameters. Impacts to public health and livelihoods due to food insecurity, algal blooms, and air pollution can be addressed through NRT monitoring of specific events utilizing land cover, atmospheric composition, water quality, and meteorological observations. More broadly, the assessment of water availability for drinking and agriculture and the development of floods and storms rely on continuous feeds of NRT meteorological and atmospheric composition observations. Overall, this multi-disciplinary study of user needs for NRT data and products can inform the design and operation of NRT data systems. Follow-on work for this study will also be presented, focusing on the availability of current and future satellite measurements (including NRT) of the 30 most critical Earth observation priorities, as well as a detailed analysis of users' needs for precipitation data. The results of this study summarize the priorities for critical Earth observations utilized globally for societal benefit.
Quantitative evaluation of 3D images produced from computer-generated holograms
NASA Astrophysics Data System (ADS)
Sheerin, David T.; Mason, Ian R.; Cameron, Colin D.; Payne, Douglas A.; Slinger, Christopher W.
1999-08-01
Advances in computing and optical modulation techniques now make it possible to anticipate the generation of near real-time, reconfigurable, high quality, three-dimensional images using holographic methods. Computer generated holography (CGH) is the only technique which holds promise of producing synthetic images having the full range of visual depth cues. These realistic images will be viewable by several users simultaneously, without the need for headtracking or special glasses. Such a data visualization tool will be key to speeding up the manufacture of new commercial and military equipment by negating the need for the production of physical 3D models in the design phase. DERA Malvern has been involved in designing and testing fixed CGH in order to understand the connection between the complexity of the CGH, the algorithms used to design them, the processes employed in their implementation, and the quality of the images produced. This poster describes results from CGH containing up to 10^8 pixels. The methods used to evaluate the reconstructed images are discussed and quantitative measures of image fidelity are made. An understanding of the effect of the various system parameters upon final image quality enables a study of the possible system trade-offs to be carried out. Such an understanding of CGH production and resulting image quality is key to effective implementation of a reconfigurable CGH system currently under development at DERA.
Magnetic Field Response Measurement Acquisition System
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Taylor, Bryant D.; Shams, Qamar A.; Fox, Robert L.
2007-01-01
This paper presents a measurement acquisition method that alleviates many shortcomings of traditional measurement systems. The shortcomings are a finite number of measurement channels, the weight penalty associated with measurements, electrical arcing, wire degradation due to wear or chemical decay, and the logistics needed to add new sensors. Wire degradation has resulted in aircraft fatalities and critical space launches being delayed. The key to this method is the use of sensors designed as passive inductor-capacitor circuits that produce magnetic field responses. The response attributes correspond to the states of the physical properties that the sensors measure. Power is provided wirelessly to the sensing element by using Faraday induction. A radio frequency antenna produces a time-varying magnetic field used to power the sensor and to receive the magnetic field response of the sensor. An interrogation system for discerning changes in the sensor response frequency, resistance and amplitude has been developed and is presented herein. Multiple sensors can be interrogated using this method. The method eliminates the need for a data acquisition channel dedicated to each sensor. The method does not require the sensors to be near the acquisition hardware. Methods of developing magnetic field response sensors and the influence of key parameters on measurement acquisition are discussed. Examples of magnetic field response sensors and the respective measurement characterizations are presented. Implementation of this method on an aerospace system is discussed.
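The sensing principle lends itself to a small worked example. Assuming a passive LC element whose capacitance varies with the measured quantity, the response frequency follows f = 1/(2π√(LC)), and the interrogation system can invert a measured frequency back to capacitance. The inductance and capacitance values below are hypothetical.

```python
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of a passive LC sensing element: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def capacitance_from_frequency(L_henry, f_hz):
    """Invert the response frequency read by the interrogating antenna back to capacitance."""
    return 1.0 / (L_henry * (2.0 * math.pi * f_hz) ** 2)

# Hypothetical sensor: 10 uH inductor; capacitance varies with the measured quantity.
L = 10e-6
for C in (50e-12, 75e-12, 100e-12):
    f = resonant_frequency(L, C)
    print(f"C = {C*1e12:5.1f} pF  ->  f = {f/1e6:6.3f} MHz "
          f"(recovered C = {capacitance_from_frequency(L, f)*1e12:5.1f} pF)")
```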
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After a space robotic system is launched to perform its tasks, its inertia parameters may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, these parameters need to be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. The parameter identification problems are then transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification. The excitation motion (orbit maneuvering and joint motion) is also easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on orbit.
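A minimal sketch of the identification step is given below, assuming a generic PSO and a quadratic surrogate objective in place of the momentum-based objective functions described in the abstract; the "true" parameter vector and all algorithm settings are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" parameters of one equivalent body: [mass, Izz, cx, cy].
TRUE = np.array([120.0, 0.8, 0.05, -0.02])

def objective(params):
    """Toy objective: squared residual between predicted and observed quantities.
    In the paper the residual is built from the acceleration/velocity or the linear and
    angular momentum equations of the equivalent systems; a quadratic surrogate stands in."""
    return float(np.sum((params - TRUE) ** 2))

def pso(objective, dim, n_particles=30, iters=300, lo=-10.0, hi=200.0,
        w=0.72, c1=1.49, c2=1.49):
    """Plain global-best PSO with inertia weight w and acceleration constants c1, c2."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

print("identified parameters:", np.round(pso(objective, dim=4), 3))
```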
Estimation of end point foot clearance points from inertial sensor data.
Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu
2011-01-01
Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters using inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from the raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the Minimum Toe Clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined and a leave-one-subject-out (LOSO) method was used to select the best model. The best average Root Mean Square Error (RMSE) across all subjects, obtained using all sensor features at the maximum speed, was 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique resulted in a 0.54-21.93% improvement in RMSE while requiring fewer input features. The results demonstrated that raw inertial sensor data combined with regression models and feature selection can accurately estimate key foot clearance parameters.
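A GRNN of the kind used here reduces to a kernel-weighted average of the training targets with one smoothing parameter. The sketch below uses synthetic features and targets in place of the fifteen inertial-sensor features and measured clearances; only the estimator structure is intended to be representative.

```python
import numpy as np

rng = np.random.default_rng(1)

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network (Specht): a Gaussian-kernel-weighted average of
    training targets, controlled by a single smoothing parameter sigma."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic stand-in for inertial-sensor features and a foot-clearance target (mm).
# The 15 real accelerometer/gyroscope features are replaced by 3 random ones.
X = rng.normal(size=(200, 3))
y = 20.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=200)

X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]
pred = grnn_predict(X_tr, y_tr, X_te, sigma=0.7)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"RMSE on held-out samples: {rmse:.2f} mm")
```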
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology for both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of fundamental matrix estimation. Finally, we conduct an experiment computing the relationship of a pair of stereo cameras to demonstrate the accurate performance of the algorithm.
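The external-parameter recovery pipeline (fundamental matrix, then essential matrix, then rotation and translation) can be illustrated with a standard SVD-based decomposition; the paper's regional weighted normalization step for estimating F itself is not reproduced here. The sketch below builds a synthetic essential matrix from a known pose and recovers the four candidate (R, t) pairs, assuming calibrated intrinsics.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == cross(t, v)."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def essential_from_fundamental(F, K1, K2):
    """Step (ii) of the pipeline described in the abstract: E = K2^T F K1."""
    return K2.T @ F @ K1

def decompose_essential(E):
    """Step (iii): factor E into the four candidate (R, t) pairs via SVD."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2, t = U @ W @ Vt, U @ W.T @ Vt, U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic check: build E from a known small rotation and a unit translation, then recover.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([1.0, 0.05, 0.02])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true

for R, t in decompose_essential(E):
    rot_err = np.degrees(np.arccos(np.clip((np.trace(R.T @ R_true) - 1.0) / 2.0, -1.0, 1.0)))
    print(f"rotation error: {rot_err:6.2f} deg   |t . t_true| = {abs(float(t @ t_true)):.3f}")
```

The correct candidate is normally selected afterwards by a cheirality check (triangulated points must lie in front of both cameras), which is omitted here.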
Public Key-Based Need-to-Know Authorization Engine Final Report CRADA No. TSB-1553-98
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark, R.; Williams, R.
The goals of this project were to develop a public key-based authentication service plug-in based on LLNL's requirements; to integrate the public key-based authentication with the Intra Verse authorization service and the LLNL NTK server by developing a full-featured version of the prototyped Intra Verse need-to-know plug-in; and to test the authorization and need-to-know plug-in in a secured extranet prototype among selected national laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bignan, G.; Gonnier, C.; Lyoussi, A.
2015-07-01
Research and development on fuel and material behaviour under irradiation is a key issue for sustainable nuclear energy in order to meet specific needs while keeping the best level of safety. These needs mainly concern constant improvement of performance and safety in order to optimize the fuel cycle and hence reach the objectives of sustainable nuclear energy. Sustainable nuclear energy requires a high level of performance in order to meet specific needs such as: - Pursuing improvement of the performance and safety of present and coming water-cooled reactor technologies. This will require continuous R&D support following a long-term trend driven by plant life management, safety demonstration, flexibility and economics improvement. Experimental irradiations of structural materials are necessary to anticipate these material behaviours and will contribute to their optimisation. - Continuously upgrading nuclear fuel technology in present and future nuclear power plants to achieve better performance and to optimise the fuel cycle while keeping the best level of safety. Fuel evolution for generations II, III and III+ is a key stake requiring developments, qualification tests and safety experiments to ensure competitiveness and safety: experimental tests exploring the full range of fuel behaviour determine fuel stability limits and safety margins, as a major input for fuel reliability analysis. To achieve such accurate and innovative progress and developments, specific and ad hoc instrumentation, irradiation devices and measurement methods need to be set up inside or beside the material testing reactor (MTR) core. These experiments require beforehand in situ and on-line sophisticated measurements to accurately determine different key parameters, such as thermal and fast neutron fluxes and nuclear heating, in order to precisely monitor and control the conducted assays. The new Material Testing Reactor JHR (Jules Horowitz Reactor), currently under construction at the CEA Cadarache research centre in the south of France, will represent a major research infrastructure for scientific studies regarding material and fuel behaviour under irradiation. It will also be devoted to medical isotope production. Hence JHR will offer a real opportunity to perform R&D programs addressing the needs above and will thus crucially contribute to the selection, optimization and qualification of these innovative materials and fuels. The JHR reactor objectives, principles and main characteristics, the associated specific experimental devices, the measurement techniques and methodology, their performances, their limitations and their fields of application will be presented and discussed. (authors)
NASA Astrophysics Data System (ADS)
Ohtsuka, N.; Shindo, Y.; Makita, A.
2010-06-01
Instrumented Charpy tests were conducted on small-sized specimens of 2¼Cr-1Mo steel. In the tests, the single-specimen key curve method was applied to determine the fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, a parameter describing resistance to crack extension, was also determined. The role of these parameters was discussed at an upper-shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thus shown to be usable for evaluating not only temper embrittlement but also hydrogen embrittlement.
Du, Liuliu; Batterman, Stuart; Godwin, Christopher; Chin, Jo-Yu; Parker, Edith; Breen, Michael; Brakefield, Wilma; Robins, Thomas; Lewis, Toby
2012-12-12
Air change rates (ACRs) and interzonal flows are key determinants of indoor air quality (IAQ) and building energy use. This paper characterizes ACRs and interzonal flows in 126 houses, and evaluates effects of these parameters on IAQ. ACRs measured using weeklong tracer measurements in several seasons averaged 0.73 ± 0.76 h(-1) (median = 0.57 h(-1), n = 263) in the general living area, and much higher, 1.66 ± 1.50 h(-1) (median = 1.23 h(-1), n = 253) in bedrooms. Living area ACRs were highest in winter and lowest in spring; bedroom ACRs were highest in summer and lowest in spring. Bedrooms received an average of 55 ± 18% of air from elsewhere in the house; the living area received only 26 ± 20% from the bedroom. Interzonal flows did not depend on season, indoor smoking or the presence of air conditioners. A two-zone IAQ model calibrated for the field study showed large differences in pollutant levels between the living area and bedroom, and the key parameters affecting IAQ were emission rates, emission source locations, air filter use, ACRs, interzonal flows, outdoor concentrations, and PM penetration factors. The single-zone models that are commonly used for residences have substantial limitations and may inadequately represent pollutant concentrations and exposures in bedrooms and potentially in other environments where people spend a substantial fraction of time.
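A two-zone model of the kind calibrated in this study amounts to a pair of coupled well-mixed mass balances. The sketch below uses the reported mean air change rates but otherwise placeholder volumes, a single symmetric interzonal exchange flow, and an arbitrary emission rate, so the numbers are illustrative rather than the study's results.

```python
import numpy as np

# Illustrative two-zone (living area / bedroom) mass balance. Volumes, flows and the
# emission rate are placeholders; a single symmetric interzonal exchange flow Q_x is
# used to keep the airflow balance simple.
V = np.array([250.0, 40.0])      # zone volumes, m^3
ACH = np.array([0.73, 1.66])     # air change rates, 1/h (study means)
Q_oa = ACH * V                   # outdoor air into (and out of) each zone, m^3/h
Q_x = 30.0                       # symmetric interzonal exchange flow, m^3/h
C_out = 10.0                     # outdoor concentration, ug/m^3
E = np.array([500.0, 0.0])       # indoor emission rates, ug/h (source in living area)

def dCdt(C):
    """Well-mixed two-zone balance: outdoor exchange + interzonal exchange + emissions."""
    c1, c2 = C
    return np.array([
        (Q_oa[0] * (C_out - c1) + Q_x * (c2 - c1) + E[0]) / V[0],
        (Q_oa[1] * (C_out - c2) + Q_x * (c1 - c2) + E[1]) / V[1],
    ])

C, dt = np.array([C_out, C_out]), 0.01      # start at outdoor level, 0.01 h time steps
for _ in range(int(48 / dt)):               # integrate 48 h to approach steady state
    C = C + dt * dCdt(C)
print(f"living area: {C[0]:.1f} ug/m3   bedroom: {C[1]:.1f} ug/m3")
```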
NASA Astrophysics Data System (ADS)
Destefanis, Stefano; Tracino, Emanuele; Giraudo, Martina
2014-06-01
During a mission involving a spacecraft using nuclear power sources (NPS), the consequences to the population induced by an accident have to be taken into account carefully. Part of the study (led by AREVA, with TAS-I as one of the involved parties) was devoted to "Worst Case Scenario Consolidation". In particular, one of the activities carried out by TAS-I had the aim of characterizing the accidental environment (explosion on the launch pad or during launch) and consolidating the requirements given as input to the study. The resulting requirements became inputs for the design of the Nuclear Power Source container. To do so, TAS-I first reviewed the available technical literature (mostly developed in the frame of the NASA Mercury/Apollo programs) to identify the key parameters to be used for analytical assessment (blast pressure wave; fragment size, speed and distribution; TNT equivalent of liquid propellant). Then, a simplified Radioss model was set up to verify both the cards needed for blast/fragment impact analysis and the consistency between preliminary results and the available technical literature (Radioss is commonly used to design mine-resistant vehicles by simulating the effect of blasts on structural elements, and it is used at TAS-I for several types of analysis, including land impact, water impact and fluid-structure interaction). The obtained results (albeit produced by a very simplified model) are encouraging, showing that the analytical tool and the selected key parameters represent a step in the right direction.
NASA Technical Reports Server (NTRS)
Abney, Morgan B.; Perry, Jay L.
2016-01-01
Over the last 55 years, NASA has evolved life support for crewed space exploration vehicles from simple resupply during Project Mercury to the complex and highly integrated system of systems aboard the International Space Station. As NASA targets exploration destinations farther from low Earth orbit and mission durations of 500 to 1000 days, life support systems must evolve to meet new requirements. In addition to having more robust, reliable, and maintainable hardware, limiting resupply becomes critical for managing mission logistics and cost. Supplying a crew with the basics of food, water, and oxygen becomes more challenging as the destination ventures farther from Earth. Aboard the ISS, the Atmosphere Revitalization Subsystem (ARS) supplies the crew's oxygen demand by electrolyzing water. This approach makes water a primary logistics commodity that must be managed carefully. Chemical reduction of metabolic carbon dioxide (CO2) provides a method of recycling oxygen, thereby reducing the net ARS water demand and therefore minimizing logistics needs. Multiple methods have been proposed to achieve this recovery and have been reported in the literature. However, depending on the architecture and the technology approach, "oxygen recovery" can be defined in various ways. This inconsistency makes it difficult to compare technologies directly. In an effort to clarify community discussions of Oxygen Recovery, we propose specific definitions and describe the methodology used to arrive at those definitions. Additionally, we discuss key performance parameters for Oxygen Recovery technology development, including challenges with comparisons to the state of the art.
Temporal variation of velocity and turbulence characteristics at a tidal energy site
NASA Astrophysics Data System (ADS)
Gunawan, B.; Neary, V. S.; Colby, J.
2013-12-01
This study examines the temporal variability, frequency, direction and magnitude of the mean current, turbulence, hydrodynamic force and tidal power availability at a proposed tidal energy site in a tidal channel located in the East River, NY, USA. The channel has a width of 190 m, a mean water level of 9.8 m and a mean tidal range of 1.3 m. A two-month velocity measurement was conducted at the design hub height of a tidal turbine using an acoustic Doppler velocimeter (ADV). The site has semi-diurnal tidal characteristics, with a tidal current pattern resembling a sinusoidal function. The five-minute mean currents at the site varied between 0 and 2.4 m s-1. Flood current magnitudes were typically higher than the ebb current magnitudes, which skewed the tidal energy production towards the flood period. The effect of small-scale turbulence on the computed velocity, hydrodynamic load and power density time series was investigated. Excluding the small-scale turbulence may lead to a significant underestimation of the mean and maximum values of the analyzed variable. Comparison of hydrodynamic conditions with other tidal energy sites indicates that the key parameters for tidal energy site development are likely to be site-specific, which highlights the need to develop a classification system for tidal energy sites. Such a classification system would enable a direct comparison of key parameters between potential project locations and ultimately help investors in the decision-making process.
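The effect of excluding small-scale turbulence can be illustrated by comparing the power density computed from the full velocity record with that computed from the mean current alone, using P = ½ρU³. The record below is synthetic (a semi-diurnal sinusoid peaking at 2.4 m/s plus roughly 10% turbulence intensity) and is meant only to show the direction and rough size of the bias.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 1025.0                                             # seawater density, kg/m^3

# Synthetic hub-height current record: semi-diurnal sinusoid plus small-scale turbulence.
t = np.arange(0, 14 * 24 * 3600, 300.0)                  # two weeks at 5-min samples, s
U_mean = 2.4 * np.abs(np.sin(2 * np.pi * t / (12.42 * 3600)))
U = U_mean + rng.normal(scale=0.1 * np.clip(U_mean, 0.05, None))

# Mean power density with and without the turbulent fluctuations.
P_with = 0.5 * rho * np.mean(np.abs(U) ** 3)
P_without = 0.5 * rho * np.mean(U_mean ** 3)
print(f"mean power density with turbulence:    {P_with:8.1f} W/m^2")
print(f"mean power density, mean current only: {P_without:8.1f} W/m^2")
print(f"underestimation if turbulence excluded: {100 * (1 - P_without / P_with):.1f} %")
```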
Smythe, Gayle M; White, Jason D
2011-12-18
Voluntary wheel running can potentially be used to exacerbate the disease phenotype in dystrophin-deficient mdx mice. While it has been established that voluntary wheel running is highly variable between individuals, the key parameters of wheel running that impact the most on muscle pathology have not been examined in detail. We conducted a 2-week test of voluntary wheel running by mdx mice and the impact of wheel running on disease pathology. There was significant individual variation in the average daily distance (ranging from 0.003 ± 0.005 km to 4.48 ± 0.96 km), culminating in a wide range (0.040 km to 67.24 km) of total cumulative distances run by individuals. There was also variation in the number and length of run/rest cycles per night, and the average running rate. Correlation analyses demonstrated that in the quadriceps muscle, a low number of high distance run/rest cycles was the most consistent indicator for increased tissue damage. The amount of rest time between running bouts was a key factor associated with gastrocnemius damage. These data emphasize the need for detailed analysis of individual running performance, consideration of the length of wheel exposure time, and the selection of appropriate muscle groups for analysis, when applying the use of voluntary wheel running to disease exacerbation and/or pre-clinical testing of the efficacy of therapeutic agents in the mdx mouse.
Device-independent secret-key-rate analysis for quantum repeaters
NASA Astrophysics Data System (ADS)
Holz, Timo; Kampermann, Hermann; Bruß, Dagmar
2018-01-01
The device-independent approach to quantum key distribution (QKD) aims to establish a secret key between two or more parties with untrusted devices, potentially under full control of a quantum adversary. The performance of a QKD protocol can be quantified by the secret key rate, which can be lower bounded via the violation of an appropriate Bell inequality in a setup with untrusted devices. We study secret key rates in the device-independent scenario for different quantum repeater setups and compare them to their device-dependent counterparts. The quantum repeater setups under consideration are the original protocol by Briegel et al. [Phys. Rev. Lett. 81, 5932 (1998), 10.1103/PhysRevLett.81.5932] and the hybrid quantum repeater protocol by van Loock et al. [Phys. Rev. Lett. 96, 240501 (2006), 10.1103/PhysRevLett.96.240501]. For a given repeater scheme and a given QKD protocol, the secret key rate depends on a variety of parameters, such as the gate quality or the detector efficiency. We systematically analyze the impact of these parameters and suggest optimized strategies.
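For orientation, a commonly quoted asymptotic lower bound on the device-independent key rate of a CHSH-based protocol against collective attacks is r >= 1 - h(Q) - h((1 + sqrt(S²/4 - 1))/2), with S the CHSH value and Q the quantum bit error rate; this is not necessarily the exact bound used in the paper, and the sketch below only evaluates this formula for a few illustrative values.

```python
from math import log2, sqrt

def h(p):
    """Binary entropy."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def di_key_rate(S, Q):
    """Asymptotic device-independent key-rate lower bound for a CHSH-based protocol
    against collective attacks: r >= 1 - h(Q) - h((1 + sqrt(S**2/4 - 1)) / 2).
    Returns 0 when there is no CHSH violation or the bound is non-positive."""
    if S <= 2.0:
        return 0.0
    return max(0.0, 1.0 - h(Q) - h((1.0 + sqrt(S ** 2 / 4.0 - 1.0)) / 2.0))

# Example: key rate as the CHSH violation degrades (e.g. with imperfect repeater gates).
for S in (2.828, 2.6, 2.4, 2.2):
    print(f"S = {S:5.3f}, Q = 2%  ->  r >= {di_key_rate(S, 0.02):.3f}")
```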
Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep
2014-08-01
The security of image data is currently a major issue, so we propose a novel technique for securing color image data with a public-key (asymmetric) cryptosystem. In this technique, we secure color image data using the RSA (Rivest-Shamir-Adleman) cryptosystem together with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, whereas this approach protects color images through both the keys and the correct arrangement of the RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also given to demonstrate the robustness of the cryptosystem.
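A toy version of the RSA step is sketched below with textbook-sized primes, applied to byte values standing in for quantized wavelet coefficients; the 2D-DWT stage and the specific arrangement of RSA parameters described in the abstract are not reproduced.

```python
# Toy RSA sketch applied to 8-bit values (e.g. wavelet coefficients quantized to bytes).
# The primes are tiny and for illustration only; real use requires large random primes.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p=61, q=53, e=17):
    """Build a textbook RSA key pair from two small primes and a public exponent."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = egcd(e, phi)[1] % phi          # modular inverse of e mod phi
    return (e, n), (d, n)              # public key, private key

def rsa(value, key):
    exp, n = key
    return pow(value, exp, n)

public, private = make_keys()
plaintext = [12, 200, 37, 255]         # sample byte values
cipher = [rsa(v, public) for v in plaintext]
recovered = [rsa(c, private) for c in cipher]
print(cipher, recovered)
```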
Heavy doping effects in high efficiency silicon solar cells
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1984-01-01
Several of the key parameters describing the heavily doped regions of silicon solar cells are examined. The experimentally determined energy gap narrowing and minority carrier diffusivity and mobility are key factors in the investigation.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output; this reactor model was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Yoon, Yeo Hun; Kim, Seung Jae; Kim, Dong Hwan
2015-12-01
The scanning electron microscope is used in various fields because it goes beyond the diffraction limit of the optical microscope. However, the electron path must be kept in a vacuum so that the electrons are not scattered, and samples therefore require pretreatment for use in the vacuum. To directly observe large and fully hydrophilic samples without pretreatment, an atmospheric scanning electron microscope (ASEM) is needed. We developed an electron filter unit and an electron detector unit for implementation of the ASEM. The key feature of the electron filter unit is that electrons are transmitted through it while air molecules are not. The electron detector unit collects the backscattered electrons. We conducted experiments using selected materials: Havar foil, carbon film and SiN film. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Measuring Information Security: Guidelines to Build Metrics
NASA Astrophysics Data System (ADS)
von Faber, Eberhard
Measuring information security is a genuine interest of security managers. With metrics they can develop their security organization's visibility and standing within the enterprise or public authority as a whole. Organizations using information technology need to use security metrics. Despite the clear demands and advantages, security metrics are often poorly developed, or ineffective parameters are collected and analysed. This paper describes best practices for the development of security metrics. Attention is first drawn to motivation, showing both requirements and benefits. The main body of this paper lists things which need to be observed (characteristics of metrics), things which can be measured (how measurements can be conducted) and steps for the development and implementation of metrics (procedures and planning). Analysis and communication are also key when using security metrics. Examples are given in order to develop a better understanding. The author aims to resume, continue and develop the discussion about a topic which is, or increasingly will be, a critical success factor for security managers in larger organizations.
NASA Technical Reports Server (NTRS)
Howard, David; Perry, Jay; Sargusingh, Miriam; Toomarian, Nikzad
2016-01-01
NASA's technology development roadmaps provide guidance to focus technological development on areas that enable crewed exploration missions beyond low-Earth orbit. Specifically, the technology area roadmap on human health, life support and habitation systems describes the need for life support system (LSS) technologies that can improve reliability and in-situ maintainability within a minimally-sized package while enabling a high degree of mission autonomy. To address the needs outlined by the guiding technology area roadmap, NASA's Advanced Exploration Systems (AES) Program has commissioned the Life Support Systems (LSS) Project to lead technology development in the areas of water recovery and management, atmosphere revitalization, and environmental monitoring. A notional exploration LSS architecture derived from the International Space Station has been developed and serves as the developmental basis for these efforts. Functional requirements and key performance parameters that guide the exploration LSS technology development efforts are presented and discussed. Areas where LSS flight operations aboard the ISS afford lessons learned that are relevant to exploration missions are highlighted.
Adaptive Multichannel Radiation Sensors for Plant Parameter Monitoring
NASA Astrophysics Data System (ADS)
Mollenhauer, Hannes; Remmler, Paul; Schuhmann, Gudrun; Lausch, Angela; Merbach, Ines; Assing, Martin; Mollenhauer, Olaf; Dietrich, Peter; Bumberger, Jan
2016-04-01
Nutrients such as nitrogen play a key role in the plant life cycle. They are needed for the production of chlorophyll and other plant cell components. The crop yield is therefore strongly affected by plant nutrient status. Due to the spatial and temporal variability of soil characteristics and varying agricultural inputs, plant development varies within a field. Determining these fluctuations in plant development is thus valuable for detecting stress conditions and optimizing fertilisation, given its high environmental and economic impact. Plant parameters play crucial roles in plant growth estimation and prediction since they are used as indicators of plant performance. Indices derived from remote sensing techniques, in particular, provide quantitative information about agricultural crops instantaneously and, above all, non-destructively. Due to the specific absorption of certain plant pigments, a characteristic spectral signature can be seen in the visible and IR parts of the electromagnetic spectrum, known as narrow-band peaks. In an analogous manner, the presence and concentration of different nutrients cause a characteristic spectral signature. To this end, an adequate remote sensing monitoring concept is needed that considers the heterogeneity and dynamics of the plant population as well as economic aspects. This work presents the development and field investigation of an inexpensive multichannel radiation sensor that observes the incoming and reflected radiation at distinct wavelengths of the solar spectrum over the crop and facilitates the determination of different plant indices. Based on the selected sensor wavelengths, the sensing device allows the detection of specific parameters, e.g. plant vitality, chlorophyll content or nitrogen content. Besides improving the sensor characteristics, simple wavelength adaptation, and the price-performance ratio, achieving appropriate energy efficiency as well as suitable protection against disturbances and environmental influences are key challenges of this work. The multichannel sensors were tested in a mobile wireless sensor network within the framework of the Static Fertilisation Experiment in Bad Lauchstädt, Germany. The sensor nodes were permanently installed for one crop cycle on three different spring barley plots with different nitrogen fertilisation levels. In addition, weekly surveys with field spectrometer and chlorophyll meter measurements as well as tissue analyses of plant samples were carried out. The results of this experiment show a strong correlation of the chlorophyll and nitrogen content indices with those of the simultaneously running commercial radiation transmittance and reflectance sensors.
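As an example of an index such a two-channel (red/near-infrared) sensor can deliver, the sketch below computes NDVI from incoming and reflected readings; all sensor counts and plot labels are hypothetical.

```python
import numpy as np

def ndvi(red_reflected, nir_reflected, red_incoming, nir_incoming):
    """Normalized Difference Vegetation Index from a two-channel (red / near-infrared)
    radiation sensor that measures both incoming and reflected radiation."""
    rho_red = red_reflected / red_incoming      # canopy reflectance, red channel
    rho_nir = nir_reflected / nir_incoming      # canopy reflectance, NIR channel
    return (rho_nir - rho_red) / (rho_nir + rho_red)

# Hypothetical readings (arbitrary sensor counts) for three plots with different
# nitrogen fertilisation levels; the values are illustrative only.
incoming_red, incoming_nir = 1000.0, 900.0
plots = {
    "0 kg N/ha": (120.0, 380.0),
    "50 kg N/ha": (90.0, 430.0),
    "100 kg N/ha": (70.0, 480.0),
}
for name, (refl_red, refl_nir) in plots.items():
    print(f"{name:12s} NDVI = {ndvi(refl_red, refl_nir, incoming_red, incoming_nir):.3f}")
```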
Hassler, Christel S.; Sinoir, Marie; Clementson, Lesley A.; Butler, Edward C. V.
2012-01-01
Bottle assays and large-scale fertilization experiments have demonstrated that, in the Southern Ocean, iron often controls the biomass and the biodiversity of primary producers. To grow, phytoplankton need numerous other trace metals (micronutrients) required for the activity of key enzymes and other intracellular functions. However, little is known of the potential these other trace elements have to limit the growth of phytoplankton in the Southern Ocean. This study investigates whether micronutrients other than iron (Zn, Co, Cu, Cd, Ni) need to be considered as parameters controlling phytoplankton growth from the Australian Subantarctic to the Polar Frontal Zones during the austral summer of 2007. Analysis of nutrient disappearance ratios suggested distinct zones of phytoplankton growth control in the study region, with the most intense phytoplankton growth limitation between 49 and 50°S. Comparison of micronutrient disappearance ratios, metal distribution, and biomarker pigments used to identify the dominant phytoplankton groups demonstrated that a complex interaction between Fe, Zn, and Co might exist in the study region. Although iron remains the pivotal micronutrient for phytoplankton growth and community structure, Zn and Co are also important for the nutrition and the growth of most of the dominant phytoplankton groups in the Subantarctic Zone region. Understanding the parameters controlling phytoplankton is paramount, as they affect the functioning of the Southern Ocean, its marine resources and ultimately the global carbon cycle. PMID:22787456
Neural integrators for decision making: a favorable tradeoff between robustness and sensitivity
Cain, Nicholas; Barreiro, Andrea K.; Shadlen, Michael
2013-01-01
A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance the decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be exceedingly precise. The need for fine tuning can be overcome via a “robust integrator” mechanism in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this limiting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. Here, we analyze the consequences of this tradeoff for decision-making performance. For concreteness, we focus on the well-studied random dot motion discrimination task and constrain stimulus parameters by experimental data. We show that mistuning feedback in an integrator circuit decreases decision performance but that the robust integrator mechanism can limit this loss. Intriguingly, even for perfectly tuned circuits with no immediate need for a robustness mechanism, including one often does not impose a substantial penalty on decision-making performance. The implication is that robust integrators may be well suited to subserve the basic function of evidence integration in many cognitive tasks. We develop these ideas using simulations of coupled neural units and the mathematics of sequential analysis. PMID:23446688
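A caricature of the sensitivity/robustness tradeoff can be simulated directly. In the sketch below, mistuning is modeled as a leak, the robust integrator holds its state in discrete attractor wells and registers only momentary inputs above a preset limit, and all parameter values are illustrative rather than fitted to random-dot task data or to the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_trial(drift=0.05, noise=0.5, mistuning=0.0, robust=False,
              step=1.0, bound=15.0, dt=1.0, max_steps=2000):
    """Caricature of evidence integration to a bound. Mistuning makes the state decay;
    a robust integrator holds the state in discrete wells (spacing `step`) and only
    registers momentary inputs large enough to hop between wells."""
    x = 0.0
    for _ in range(max_steps):
        s = drift * dt + noise * np.sqrt(dt) * rng.normal()
        if robust:
            if abs(s) >= step / 2:                 # suprathreshold input hops one well
                x += np.sign(s) * step
            # mistuning pulls the state toward the nearest well (no drift once in a well)
            x += -mistuning * (x - round(x / step) * step) * dt
        else:
            x += s - mistuning * x * dt            # line attractor: decay toward zero
        if abs(x) >= bound:
            return x > 0                           # True = choice matching positive drift
    return x > 0

def accuracy(**kw):
    return float(np.mean([run_trial(**kw) for _ in range(500)]))

for mist in (0.0, 0.05):
    print(f"mistuning={mist:4.2f}  line attractor: {accuracy(mistuning=mist, robust=False):.3f}"
          f"   robust integrator: {accuracy(mistuning=mist, robust=True):.3f}")
```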
Biogas Production: Microbiology and Technology.
Schnürer, Anna
Biogas, containing energy-rich methane, is produced by microbial decomposition of organic material under anaerobic conditions. Under controlled conditions, this process can be used for the production of energy and a nutrient-rich residue suitable for use as a fertilising agent. The biogas can be used for production of heat, electricity or vehicle fuel. Different substrates can be used in the process and, depending on substrate character, various reactor technologies are available. The microbiological process leading to methane production is complex and involves many different types of microorganisms, often operating in close relationships because of the limited amount of energy available for growth. The microbial community structure is shaped by the incoming material, but also by operating parameters such as process temperature. Factors leading to an imbalance in the microbial community can result in process instability or even complete process failure. To ensure stable operation, different key parameters, such as levels of degradation intermediates and gas quality, are often monitored. Despite the fact that the anaerobic digestion process has long been used for industrial production of biogas, many questions still need to be resolved to achieve optimal management and gas yields and to exploit the great energy and nutrient potential available in waste material. This chapter discusses the different aspects that need to be taken into consideration to achieve optimal degradation and gas production, with particular focus on operation management and microbiology.
NASA Astrophysics Data System (ADS)
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
The use of flow cytometry to examine calcium signalling by TRPV1 in mixed cell populations.
Assas, Bakri M; Abdulaal, Wesam H; Wakid, Majed H; Zakai, Haytham A; Miyan, J; Pennock, J L
2017-06-15
Flow cytometric analysis of calcium mobilisation has been in use for many years in the study of specific receptor engagement or isolated cell:cell communication. However, calcium mobilisation/signaling is key to many cell functions including apoptosis, mobility and immune responses. Here we combine multiplex surface staining of whole spleen with Indo-1 AM to visualise calcium mobilisation and examine calcium signaling in a mixed immune cell culture over time. We demonstrate responses to a TRPV1 agonist in distinct cell subtypes without the need for cell separation. Multi-parameter staining alongside Indo-1 AM to demonstrate calcium mobilization allows the study of real-time calcium signaling in a complex environment. Copyright © 2017. Published by Elsevier Inc.
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques for breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that, for accurate recovery by a numerical brute-force attack, an attacker needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE combined with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR-code) or color images are used as the source.
WiseView: Visualizing motion and variability of faint WISE sources
NASA Astrophysics Data System (ADS)
Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume
2018-06-01
WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
A review of failure models for unidirectional ceramic matrix composites under monotonic loads
NASA Technical Reports Server (NTRS)
Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.
1989-01-01
Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past most components using structural ceramic matrix composites were designed by trial and error since the emphasis of feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.
Thermoelectric Energy Conversion: Future Directions and Technology Development Needs
NASA Technical Reports Server (NTRS)
Fleurial, Jean-Pierre
2007-01-01
This viewgraph presentation reviews the process of thermoelectric energy conversion along with key technology needs and challenges. The topics include: 1) The Case for Thermoelectrics; 2) Advances in Thermoelectrics: Investment Needed; 3) Current U.S. Investment (FY07); 4) Increasing Thermoelectric Materials Conversion Efficiency Key Science Needs and Challenges; 5) Developing Advanced TE Components & Systems Key Technology Needs and Challenges; 6) Thermoelectrics; 7) 200W Class Lightweight Portable Thermoelectric Generator; 8) Hybrid Absorption Cooling/TE Power Cogeneration System; 9) Major Opportunities in Energy Industry; 10) Automobile Waste Heat Recovery; 11) Thermoelectrics at JPL; 12) Recent Advances at JPL in Thermoelectric Converter Component Technologies; 13) Thermoelectrics Background on Power Generation and Cooling Operational Modes; 14) Thermoelectric Power Generation; and 15) Thermoelectric Cooling.
Hospital influenza pandemic stockpiling needs: A computer simulation.
Abramovich, Mark N; Hershey, John C; Callies, Byron; Adalja, Amesh A; Tosh, Pritish K; Toner, Eric S
2017-03-01
A severe influenza pandemic could overwhelm hospitals but planning guidance that accounts for the dynamic interrelationships between planning elements is lacking. We developed a methodology to calculate pandemic supply needs based on operational considerations in hospitals and then tested the methodology at Mayo Clinic in Rochester, MN. We upgraded a previously designed computer modeling tool and input carefully researched resource data from the hospital to run 10,000 Monte Carlo simulations using various combinations of variables to determine resource needs across a spectrum of scenarios. Of 10,000 iterations, 1,315 fell within the parameters defined by our simulation design and logical constraints. From these valid iterations, we projected supply requirements by percentile for key supplies, pharmaceuticals, and personal protective equipment needed in a severe pandemic. We projected supply needs for a range of scenarios that use up to 100% of Mayo Clinic-Rochester's surge capacity of beds and ventilators. The results indicate that there are diminishing patient care benefits for stockpiling on the high side of the range, but that having some stockpile of critical resources, even if it is relatively modest, is most important. We were able to display the probabilities of needing various supply levels across a spectrum of scenarios. The tool could be used to model many other hospital preparedness issues, but validation in other settings is needed. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
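The general shape of such a Monte Carlo calculation can be sketched as follows; the distributions, catchment population, capacity figures, and the mask-per-bed-day factor are invented for illustration and are not Mayo Clinic data or the authors' tool.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000

# Hypothetical input distributions for a severe-pandemic scenario
attack_rate = rng.uniform(0.20, 0.35, n_iter)     # fraction of population infected
hosp_rate = rng.uniform(0.02, 0.08, n_iter)       # fraction of infected hospitalized
los_days = rng.triangular(4, 7, 14, n_iter)       # hospital length of stay (days)
population = 120_000                               # hypothetical catchment population
beds_capacity = 400                                # hypothetical surge bed capacity

admissions = attack_rate * hosp_rate * population
bed_days = admissions * los_days

# Keep only iterations that satisfy a logical constraint on surge capacity
# (here: average daily census over a 12-week wave must fit within surge beds)
census = bed_days / 84.0
valid = census <= beds_capacity
masks_needed = bed_days[valid] * 25                # hypothetical: 25 N95 masks per bed-day

for q in (50, 75, 90, 95):
    print(f"{q}th percentile masks: {np.percentile(masks_needed, q):,.0f}")
```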
A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems
NASA Astrophysics Data System (ADS)
Singh, A.
2009-12-01
A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.
Effect of fuel stratification on detonation wave propagation
NASA Astrophysics Data System (ADS)
Masselot, Damien; Fievet, Romain; Raman, Venkat
2016-11-01
Rotating detonation engines (RDEs) form a class of pressure-gain combustion systems of higher efficiency compared to conventional gas turbine engines. One of the key features of the design is the injection system, as reactants need to be continuously provided to the detonation wave to sustain its propagation speed. As inhomogeneities in the reactant mixture can perturb the detonation wave front, premixed fuel jet injectors might seem like the most stable solution. However, this introduces the risk of the detonation wave propagating through the injector, causing catastrophic failure. On the other hand, non-premixed fuel injection will tend to quench the detonation wave near the injectors, reducing the likelihood of such failure. Still, the effects of such non-premixing and flow inhomogeneities ahead of a detonation wave have yet to be fully understood and are the object of this study. A 3D channel filled with O2 diluted in an inert gas with circular H2 injectors is simulated as a detonation wave propagates through the system. The impact of key parameters such as injector spacing, injector size, mixture composition and time variations will be discussed.
Health policy--why research it and how: health political science.
de Leeuw, Evelyne; Clavier, Carole; Breton, Eric
2014-09-23
The establishment of policy is key to the implementation of actions for health. We review the nature of policy and the definition and directions of health policy. In doing so, we explicitly cast a health political science gaze on setting parameters for researching policy change for health. A brief overview of core theories of the policy process for health promotion is presented, and illustrated with empirical evidence. The key arguments are that (a) policy is not an intervention, but drives intervention development and implementation; (b) understanding policy processes and their pertinent theories is pivotal for the potential to influence policy change; (c) those theories and associated empirical work need to recognise the wicked, multi-level, and incremental nature of elements in the process; and, therefore, (d) the public health, health promotion, and education research toolbox should more explicitly embrace health political science insights. The rigorous application of insights from and theories of the policy process will enhance our understanding of not just how, but also why health policy is structured and implemented the way it is.
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by the primary fluctuation from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the change in statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method enables accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
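A schematic illustration of the separation idea, assuming three synchronized per-phase voltage magnitude series: in a symmetric substation, primary-side fluctuations appear as a common mode that principal component analysis captures in the first component, while a metering drift in one EVT remains in the residual. The data, drift, and diagnostic below are synthetic and simplified, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000

# Synthetic three-phase secondary voltage magnitudes (per unit)
primary = 1.0 + 0.01 * np.cumsum(rng.normal(0, 0.02, T))   # common primary fluctuation
noise = rng.normal(0, 0.001, (T, 3))
V = np.column_stack([primary, primary, primary]) + noise
V[T // 2:, 1] += np.linspace(0, 0.004, T - T // 2)          # slow drift in the phase-B EVT

# PCA via SVD on mean-centered data: PC1 captures the common-mode fluctuation
X = V - V.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
common_mode = np.outer(U[:, 0] * s[0], Vt[0])               # rank-1 reconstruction
residual = X - common_mode                                   # EVT-specific deviations

# Simple statistic: mean residual of each phase over a recent window flags the drifting unit
print(np.abs(residual[-200:].mean(axis=0)))                  # phase B stands out
```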
Motor-Skill Learning in an Insect Inspired Neuro-Computational Control System
Arena, Eleonora; Arena, Paolo; Strauss, Roland; Patané, Luca
2017-01-01
In nature, insects show impressive adaptation and learning capabilities. The proposed computational model takes inspiration from specific structures of the insect brain: after proposing key hypotheses on the direct involvement of the mushroom bodies (MBs) and on their neural organization, we developed a new architecture for motor learning to be applied in insect-like walking robots. The proposed model is a nonlinear control system based on spiking neurons. MBs are modeled as a nonlinear recurrent spiking neural network (SNN) with novel characteristics, able to memorize time evolutions of key parameters of the neural motor controller, so that existing motor primitives can be improved. The adopted control scheme enables the structure to efficiently cope with goal-oriented behavioral motor tasks. Here, a six-legged structure, showing a steady-state exponentially stable locomotion pattern, is exposed to the need to learn new motor skills: moving through the environment, the structure is able to modulate motor commands and implements an obstacle climbing procedure. Experimental results on a simulated hexapod robot are reported; they are obtained in a dynamic simulation environment and the robot mimics the structures of Drosophila melanogaster. PMID:28337138
Sweetapple, Christine; Fu, Guangtao; Butler, David
2013-09-01
This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
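The variance-based step can be illustrated with the SALib package on a stand-in model; the parameter names, bounds, and the toy response function below are hypothetical and unrelated to the plant model used in the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in for a plant model returning, e.g., total greenhouse gas emissions
def model(x):
    k_nitr, k_denit, b_h = x
    return k_nitr ** 2 + 0.5 * k_denit + 2.0 * k_nitr * b_h   # includes an interaction term

problem = {
    "num_vars": 3,
    "names": ["k_nitr", "k_denit", "b_h"],            # hypothetical parameter names
    "bounds": [[0.5, 1.5], [0.1, 0.9], [0.2, 0.6]],
}

X = saltelli.sample(problem, 1024)                     # Saltelli sampling scheme
Y = np.array([model(x) for x in X])
Si = sobol.analyze(problem, Y)

print("First-order indices :", Si["S1"])               # individual effects (like OAT screening)
print("Total-order indices :", Si["ST"])               # includes interaction effects
print("Second-order S2[0,2]:", Si["S2"][0, 2])         # k_nitr x b_h interaction
```

A parameter whose total-order index greatly exceeds its first-order index is involved in interactions, which is exactly the kind of effect one-factor-at-a-time screening cannot reveal.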
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion simulation of groundwater has been widely applied in groundwater studies. Compared to traditional forward modeling, inverse modeling offers more scope for investigation. Zonation and cell-by-cell inversion are the conventional methods, and the pilot point method lies between them. The traditional inverse modeling approach often uses software to divide the model into several zones so that only a few parameters need to be inverted; however, the resulting parameter distribution is usually too simple and the simulation deviates from reality. Cell-by-cell inversion yields, in theory, the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. Compared to these methods, the pilot point approach distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by Kriging to preserve the heterogeneity of the parameters within geological units. This reduces the requirements on geostatistical characterization of the simulation area and bridges the gap between the above methods. Pilot points can not only save calculation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field where the structural formation is heterogeneous and the hydraulic parameters are unknown, and we compare the inversion results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, Kriging is defined to obtain the values of the field functions (hydraulic conductivity) over the model domain on the basis of their values at measurement and pilot point locations; pilot points are then assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, after inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modeling, the following major conclusions can be drawn: (1) In a field whose structural formation is heterogeneous, the results of the pilot point method are more realistic, with better fitting of parameters and more stable numerical simulation (stable residual distribution); compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters, thus guaranteeing the relative independence and authenticity of the parameter estimation results; however, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
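A self-contained sketch of the core pilot-point interpolation step, using a simple kriging interpolator with a Gaussian covariance, is shown below; the coordinates, log-conductivity values, and correlation length are invented, and the full Groundwater Vistas/PEST workflow described above is not reproduced.

```python
import numpy as np

def simple_krige(pp_xy, pp_vals, grid_xy, corr_len=200.0, sigma2=1.0):
    """Simple kriging of log-K residuals with a Gaussian covariance and a constant mean."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sigma2 * np.exp(-(d / corr_len) ** 2)

    C_pp = cov(pp_xy, pp_xy) + 1e-10 * np.eye(len(pp_xy))   # pilot-point covariance
    C_gp = cov(grid_xy, pp_xy)                               # grid-to-pilot covariance
    weights = np.linalg.solve(C_pp, pp_vals - pp_vals.mean())
    return pp_vals.mean() + C_gp @ weights                   # interpolated log-K field

# Hypothetical pilot points (x, y in metres) and their log10 hydraulic conductivity
pp_xy = np.array([[100, 100], [400, 150], [250, 400], [450, 450]], float)
pp_vals = np.array([-4.2, -3.1, -5.0, -3.6])

# Regular model grid
gx, gy = np.meshgrid(np.linspace(0, 500, 50), np.linspace(0, 500, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

logK = simple_krige(pp_xy, pp_vals, grid_xy).reshape(gx.shape)
print(logK.min(), logK.max())   # a smooth heterogeneous field honoring the pilot points
```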
2012-09-01
Services; FSD, Federated Services Daemon; I&A, Identification and Authentication; IKE, Internet Key Exchange; KPI, Key Performance Indicator; LAN, Local Area... spection takes place in different processes in the server architecture. Key Performance Indicator (KPI)s associated with the system need to be... application and risk analysis of security controls. Thus, measurement of the KPIs is needed before an informed tradeoff between the performance penalties
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e. moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e. moisture and temperature) with respect to observations and on the parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. Uncertainty of the estimated fluxes is obtained by propagating the uncertainty of linear and nonlinear functions of the key parameters through the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. Accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
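A minimal sketch of the first stage, assuming a two-state continuous-time Markov model whose failure rate depends on a key performance parameter of the embedded expert system; the states, rates, and numbers below are hypothetical, not those of the referenced FDI application.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical key performance parameters of the embedded expert system
p_correct_advice = 0.98      # probability the expert system gives correct advice per demand
demand_rate = 0.5            # demands on the expert system per hour
repair_rate = 0.1            # recoveries from the failed state per hour

# Two-state continuous-time Markov model: state 0 = operational, state 1 = failed
fail_rate = demand_rate * (1.0 - p_correct_advice)
Q = np.array([[-fail_rate,  fail_rate],
              [repair_rate, -repair_rate]])

# Probability of being in the failed state over a mission, from the matrix exponential
p0 = np.array([1.0, 0.0])
for t in (1, 10, 100):
    pt = p0 @ expm(Q * t)
    print(f"t = {t:4d} h  P(failed) = {pt[1]:.4f}")
```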
Predictive modeling of transient storage and nutrient uptake: Implications for stream restoration
O'Connor, Ben L.; Hondzo, Miki; Harvey, Judson W.
2010-01-01
This study examined two key aspects of reactive transport modeling for stream restoration purposes: the accuracy of the nutrient spiraling and transient storage models for quantifying reach-scale nutrient uptake, and the ability to quantify transport parameters using measurements and scaling techniques in order to improve upon traditional conservative tracer fitting methods. Nitrate (NO3–) uptake rates inferred using the nutrient spiraling model underestimated the total NO3– mass loss by 82%, which was attributed to the exclusion of dispersion and transient storage. The transient storage model was more accurate with respect to the NO3– mass loss (±20%) and also demonstrated that uptake in the main channel was more significant than in storage zones. Conservative tracer fitting was unable to produce transport parameter estimates for a riffle-pool transition of the study reach, while forward modeling of solute transport using measured/scaled transport parameters matched conservative tracer breakthrough curves for all reaches. Additionally, solute exchange between the main channel and embayment surface storage zones was quantified using first-order theory. These results demonstrate that it is vital to account for transient storage in quantifying nutrient uptake, and the continued development of measurement/scaling techniques is needed for reactive transport modeling of streams with complex hydraulic and geomorphic conditions.
NASA Astrophysics Data System (ADS)
Bertrand, Lionel; Jusseaume, Jessie; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément; Navelot, Vivien; Haffen, Sébastien
2018-03-01
In fractured reservoirs in the basement of extensional basins, fault and fracture parameters such as density, spacing and length distribution are key properties for modelling and prediction of reservoir properties and fluid flow. As only large faults are detectable using basin-scale geophysical investigations, these fine-scale parameters need to be inferred from faults and fractures in analogous rocks at the outcrop. In this study, we use the western shoulder of the Upper Rhine Graben as an outcropping analogue of several deep borehole projects in the basement of the graben. Regional geological data, DTM (Digital Terrain Model) mapping and outcrop studies with scanlines are used to determine the spatial arrangement of the faults from the regional to the reservoir scale. The data show that: 1) The fault network can be hierarchized into three different orders of scale and structural blocks with a characteristic structuration. This is consistent with basement rock studies in other rifting systems, allowing extrapolation of the important parameters for modelling. 2) In the structural blocks, the fracture network associated with the faults reflects the interplay between rock facies variations, related to the rock emplacement, and the rifting event.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where a full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
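A simplified sketch of the comparison step follows: data are simulated under a candidate MNAR selection model for several values of the sensitivity parameter, and each value is scored by the nearest-neighbour distance between the simulated and observed data. The selection model, parameter grid, and scoring rule are illustrative rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def simulate(delta, n=2000):
    """Observed part of a sample whose missingness probability depends on y (MNAR)."""
    y = rng.normal(0, 1, n)
    p_miss = 1 / (1 + np.exp(-(-0.5 + delta * y)))      # logistic selection model
    observed = y[rng.random(n) > p_miss]
    return observed.reshape(-1, 1)

# "Observed" incomplete data generated with a true sensitivity parameter of 1.0
observed = simulate(delta=1.0)

def knn_score(sim, obs, k=5):
    """Mean distance from observed points to their k nearest simulated neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(sim)
    dist, _ = nn.kneighbors(obs)
    return dist.mean()

# Plausibility evaluation over a grid of sensitivity-parameter values
for delta in (0.0, 0.5, 1.0, 1.5):
    scores = [knn_score(simulate(delta), observed) for _ in range(10)]
    print(f"delta = {delta:3.1f}  mean kNN distance = {np.mean(scores):.4f}")
```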
Żak, Arkadiusz
2014-01-01
One of the side effects of the operation of any electrical device is the electromagnetic field generated near its place of use. All organisms, including humans, are exposed daily to the influence of different types of this field, characterized by various physical parameters. Therefore, it is important to accurately determine the effects of an electromagnetic field on the physiological and pathological processes occurring in cells, tissues, and organs. Numerous epidemiological and experimental data suggest that the extremely low frequency magnetic field generated by electrical transmission lines and electrically powered devices and the high-frequency electromagnetic radiation emitted by electronic devices have a potentially negative impact on the circadian system. On the other hand, several studies have found no influence of these fields on chronobiological parameters. According to the current state of knowledge, some previously proposed hypotheses, including one concerning the key role of melatonin secretion disruption in the pathogenesis of electromagnetic field-induced diseases, need to be revised. This paper reviews the data on the effect of electric, magnetic, and electromagnetic fields on melatonin and cortisol rhythms (two major markers of the circadian system) as well as on sleep. It also provides basic information about the nature, classification, parameters, and sources of these fields. PMID:25136557
Klinzing, Gerard R; Zavaliangos, Antonios
2016-08-01
This work establishes a predictive model that explicitly recognizes microstructural parameters in the description of the overall mass uptake and local gradients of moisture into tablets. Model equations were formulated based on local tablet geometry to describe the transient uptake of moisture. An analytical solution to a simplified set of model equations was solved to predict the overall mass uptake and moisture gradients with the tablets. The analytical solution takes into account individual diffusion mechanisms in different scales of porosity and diffusion into the solid phase. The time constant of mass uptake was found to be a function of several key material properties, such as tablet relative density, pore tortuosity, and equilibrium moisture content of the material. The predictions of the model are in excellent agreement with experimental results for microcrystalline cellulose tablets without the need for parameter fitting. The model presented provides a new method to analyze the transient uptake of moisture into hydrophilic materials with the knowledge of only a few fundamental material and microstructural parameters. In addition, the model allows for quick and insightful predictions of moisture diffusion for a variety of practical applications including pharmaceutical tablets, porous polymer systems, or cementitious materials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cazzulani, Gabriele; Resta, Ferruccio; Ripamonti, Francesco
2012-04-01
In recent years, more and more mechanical applications have seen the introduction of active control strategies. In particular, the need to improve performance and/or system health is very often associated with vibration suppression. This goal can be achieved with both passive and active solutions. In this sense, many active control strategies have been developed, such as Independent Modal Space Control (IMSC) or resonant controllers (PPF, IRC, . . .). In all these cases, in order to tune and optimize the control strategy, knowledge of the system's dynamic behaviour is very important, and it can be obtained either from a numerical model of the system or through an experimental identification process. However, when dealing with non-linear or time-varying systems, a tool able to identify the system parameters online becomes a key point for control logic synthesis. The aim of the present work is the definition of a real-time technique, based on ARMAX models, that estimates the system parameters from the measurements of piezoelectric sensors. These parameters are returned to the control logic, which automatically adapts itself to the system dynamics. The problem is numerically investigated considering a carbon-fiber plate model forced through a piezoelectric patch.
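A compact sketch of an online estimator of this kind, using recursive least squares on an ARX structure (a simplification of the ARMAX models mentioned) with a synthetic second-order system; the model orders, forgetting factor, and simulated plant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2nd-order ARX system: y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + noise
a1, a2, b1 = 1.5, -0.7, 0.5
N = 2000
u = rng.normal(0, 1, N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] + rng.normal(0, 0.05)

# Recursive least squares with a forgetting factor (enables tracking of time-varying systems)
lam = 0.995
theta = np.zeros(3)                  # running estimates of [a1, a2, b1]
P = 1e3 * np.eye(3)
for t in range(2, N):
    phi = np.array([y[t - 1], y[t - 2], u[t - 1]])
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y[t] - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

print("true:", [a1, a2, b1], " estimated:", np.round(theta, 3))
```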
Analysis of the variation of range parameters of thermal cameras
NASA Astrophysics Data System (ADS)
Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał
2016-10-01
Measured range characteristics may vary considerably (up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process somehow lacks repeatability or the commonly used measurement procedures themselves need improvement. The presented paper attempts to deal with the aforementioned question. The measurement method has been thoroughly analyzed as well as the measurement test bed. Camera components (such as detector and optics) have also been analyzed and their key parameters have been measured, including noise figures of the entire system. Laboratory measurements are the most precise method used to determine range parameters of a thermal camera. However, in order to obtain reliable results several important conditions have to be fulfilled. One must have the test equipment capable of measurement accuracy (uncertainty) significantly better than the magnitudes of measured quantities. The measurements must be performed in a controlled environment thus excluding the influence of varying environmental conditions. The personnel must be well-trained, experienced in testing the thermal imaging devices and familiar with the applied measurement procedures. The measurement data recorded for several dozen of cooled thermal cameras (from one of leading camera manufacturers) have been the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of Institute of Optoelectronics (Military University of Technology).
Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries
Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John
2015-01-01
Abstract We investigate parameter heterogeneity in breast cancer 1‐year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD‐9 codes 174 and ICD‐10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnostic specific variables (e.g. age and ICD‐10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to look beyond an average effect, given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866
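The contrast between conditional-mean and conditional-quantile estimates can be sketched with statsmodels on synthetic cost data; the covariate, data-generating process, and coefficients are invented and are not the EuroHOPE registry data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000
age = rng.uniform(30, 90, n)
# Synthetic 1-year hospital costs with a heteroscedastic (heterogeneous) age effect
cost = 5000 + 40 * age + rng.gamma(shape=2, scale=50 + 3 * age, size=n)
df = pd.DataFrame({"cost": cost, "age": age})

ols = smf.ols("cost ~ age", df).fit()
print("OLS (conditional mean) age effect:", round(ols.params["age"], 1))

for q in (0.1, 0.5, 0.9):
    qr = smf.quantreg("cost ~ age", df).fit(q=q)
    print(f"quantile {q:0.1f} age effect:", round(qr.params["age"], 1))
# Slopes that differ across quantiles indicate parameter heterogeneity hidden by the mean model
```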
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
John R. Squires; Patricia L. Kennedy
2006-01-01
The contentious and litigious history associated with managing Northern Goshawks (Accipiter gentilis) has focused much research attention toward understanding this species' life history. Results from these studies address many key information needs that are useful to managers and decision makers, but many pressing information needs exist to address key...
Mavhu, Webster; Hatzold, Karin; Biddle, Andrea K.; Madidi, Ngonidzashe; Ncube, Getrude; Mugurungi, Owen; Ticklay, Ismail; Cowan, Frances M.; Thirumurthy, Harsha
2015-01-01
Background: Safe and cost-effective programs for implementing early infant male circumcision (EIMC) in Africa need to be piloted. We present results on a relative cost analysis within a randomized noninferiority trial of EIMC comparing the AccuCirc device with Mogen clamp in Zimbabwe. Methods: Between January and June 2013, male infants who met inclusion criteria were randomized to EIMC through either AccuCirc or Mogen clamp conducted by a doctor, using a 2:1 allocation ratio. We evaluated the overall unit cost plus the key cost drivers of EIMC using both AccuCirc and Mogen clamp. Direct costs included consumable and nonconsumable supplies, device, personnel, associated staff training, and environmental costs. Indirect costs comprised capital and support personnel costs. In 1-way sensitivity analyses, we assessed potential changes in unit costs due to variations in main parameters, one at a time, holding all other values constant. Results: The unit costs of EIMC using AccuCirc and Mogen clamp were $49.53 and $55.93, respectively. Key cost drivers were consumable supplies, capacity utilization, personnel costs, and device price. Unit prices are likely to be lowest at full capacity utilization and increase as capacity utilization decreases. Unit prices also fall with lower personnel salaries and increase with higher device prices. Conclusions: EIMC has a lower unit cost when using AccuCirc compared with Mogen clamp. To minimize unit costs, countries planning to scale-up EIMC using AccuCirc need to control costs of consumables and personnel. There is also need to negotiate a reasonable device price and maximize capacity utilization. PMID:26017658
Vieillard-Baron, Antoine; Naeije, R; Haddad, F; Bogaard, H J; Bull, T M; Fletcher, N; Lahm, T; Magder, S; Orde, S; Schmidt, G; Pinsky, M R
2018-05-09
This is a state-of-the-art article on the diagnostic process, etiologies and management of acute right ventricular (RV) failure in critically ill patients. It is based on a large review of previously published articles in the field, as well as the expertise of the authors. The authors propose ten key points and directions for future research in the field. RV failure (RVF) is frequent in the ICU, magnified by the frequent need for positive pressure ventilation. While no universal definition of RVF is accepted, we propose that RVF may be defined as a state in which the right ventricle is unable to meet the demands for blood flow without excessive use of the Frank-Starling mechanism (i.e. increase in stroke volume associated with increased preload). Both echocardiography and hemodynamic monitoring play a central role in the evaluation of RVF in the ICU. Management of RVF includes treatment of the causes, respiratory optimization and hemodynamic support. The administration of fluids is potentially deleterious and unlikely to lead to improvement in cardiac output in the majority of cases. Vasopressors are needed in the setting of shock to restore the systemic pressure and avoid RV ischemia; inotropic drug or inodilator therapies may also be needed. In the most severe cases, recent mechanical circulatory support devices are proposed to unload the RV and improve organ perfusion. CONCLUSION: RV function evaluation is key in critically ill patients for hemodynamic management, such as fluid optimization, vasopressor strategy and respiratory support. RV failure may be diagnosed by combining different devices and parameters, while echocardiography is crucial.
ERIC Educational Resources Information Center
Mengoni, Silvana; Bardsley, Janet; Oates, John
2015-01-01
Key working is a way of supporting children and young people with special educational needs and disabilities (SEND) and their families, and is highly regarded by families and practitioners. However, there is a lack of up-to-date research exploring key working in the current context of policy reforms in England. This article reports an evaluation…
Lightweight Provenance Service for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Chen, Yong; Carns, Philip
Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
Enabling Broadband as Commodity within Access Networks: A QoS Recipe
NASA Astrophysics Data System (ADS)
Areizaga, Enrique; Foglar, Andreas; Elizondo, Antonio J.; Geilhardt, Frank
This paper describes the QoS features that will transform the access networks landscape in order to bring “Broadband” as a commodity while setting up the pillars of the “Future Media Internet”. Quality of Experience is obviously key for emerging and future services. Broadcasting services will first need to equal the QoE of their counterparts in the open-air market (for IP-TV, examples would be artifact-free pictures, no picture freezing, and fast zapping times) and offer new features often using interactivity (time-shifted TV, access to more content, 3DTV with feeling of presence). The huge variety of communications alternatives will lead to different requirements per customer, whose needs will also be dependent on parameters like where the connection is made, the time of the day/day of the week/period of the year or even his/her mood. Today’s networks, designed for providing just broadband connectivity, will not be enough to satisfy customers’ needs, nor to support the introduction of new and innovative services. The networks of the future should learn from the way the users are communicating, what services they are using, where, when, and how, and adapt accordingly.
NASA Astrophysics Data System (ADS)
Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin
2016-12-01
Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture which can realize wideband receiving with a small amount of equipment. Linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low probability interception signal with wide bandwidth. The NYFR is an effective architecture to intercept the LFM/BPSK signal and the LFM/BPSK signal intercepted by the NYFR will add the local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. According to the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, matching component function is designed to estimate Nyquist zone (NZ) index. Finally, matching code and subspace method are employed to estimate the phase change points and code length. Compared with the existing methods, the proposed algorithm has a better performance. It also has no need to construct a multi-channel structure, which means the computational complexity for the NZ index estimation is small. The simulation results demonstrate the efficacy of the proposed algorithm.
Avila, Manuel; Graterol, Eduardo; Alezones, Jesús; Criollo, Beisy; Castillo, Dámaso; Kuri, Victoria; Oviedo, Norman; Moquete, Cesar; Romero, Marbella; Hanley, Zaida; Taylor, Margie
2012-06-01
The appearance of rice grain is a key aspect in quality determination. Mainly, this analysis is performed by expert analysts through visual observation; however, due to the subjective nature of the analysis, the results may vary among analysts. In order to evaluate the concordance between analysts from Latin-American rice quality laboratories for rice grain appearance through digital images, an inter-laboratory test was performed with ten analysts and images of 90 grains captured with a high resolution scanner. Rice grains were classified into four categories: translucent, chalky, white belly, and damaged grain. Data were categorized using statistical parameters such as the mode and its frequency, the relative concordance, and the reproducibility parameter kappa. Additionally, a referential image gallery of typical grains for each category was constructed based on mode frequency. Results showed a kappa value of 0.49, corresponding to moderate reproducibility, attributable to subjectivity in the visual analysis of grain images. These results reveal the need to standardize the evaluation criteria among analysts to improve confidence in the determination of rice grain appearance.
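The reproducibility statistic used here can be computed, for example, with statsmodels' implementation of Fleiss' kappa; the ratings below are invented (10 analysts, 90 grains, 4 categories) purely to show the calculation.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
n_grains, n_analysts, n_categories = 90, 10, 4   # translucent, chalky, white belly, damaged

# Invented ratings: each grain has a "true" class, and analysts agree with it ~70% of the time
true_class = rng.integers(0, n_categories, n_grains)
ratings = np.where(
    rng.random((n_grains, n_analysts)) < 0.7,
    true_class[:, None],
    rng.integers(0, n_categories, (n_grains, n_analysts)),
)

table, _ = aggregate_raters(ratings)       # grains x categories count table
kappa = fleiss_kappa(table)
print(f"Fleiss' kappa = {kappa:.2f}")      # mid-range values indicate moderate reproducibility
```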
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Thomas, M.; Pascal, K.; Karl, S.
2012-04-01
Geophysical datasets are essential to guide particularly short-term forecasting of volcanic activity. Key parameters are derived from these datasets and interpreted in different ways; however, the biggest impact on the interpretation is determined not by the range of parameters but by the parameterisation and the underlying conceptual model of the volcanic process. On the other hand, the increasing number of sophisticated geophysical models needs to be constrained by monitoring data to transform a merely numerical exercise into a useful forecasting tool. We utilise datasets from the "big three", seismology, deformation and gas emissions, to gain insight into the mutual relationship between conceptual models and constraining data. We show that, e.g., the same seismic dataset can be interpreted with respect to a wide variety of different models with very different implications for forecasting. In turn, different data processing procedures lead to different outcomes even though they are based on the same conceptual model. Unsurprisingly, the most reliable interpretation will be achieved by employing multi-disciplinary models with overlapping constraints.
Mutation rates among RNA viruses
Drake, John W.; Holland, John J.
1999-01-01
The rate of spontaneous mutation is a key parameter in modeling the genetic structure and evolution of populations. The impact of the accumulated load of mutations and the consequences of increasing the mutation rate are important in assessing the genetic health of populations. Mutation frequencies are among the more directly measurable population parameters, although the information needed to convert them into mutation rates is often lacking. A previous analysis of mutation rates in RNA viruses (specifically in riboviruses rather than retroviruses) was constrained by the quality and quantity of available measurements and by the lack of a specific theoretical framework for converting mutation frequencies into mutation rates in this group of organisms. Here, we describe a simple relation between ribovirus mutation frequencies and mutation rates, apply it to the best (albeit far from satisfactory) available data, and observe a central value for the mutation rate per genome per replication of μg ≈ 0.76. (The rate per round of cell infection is twice this value or about 1.5.) This value is so large, and ribovirus genomes are so informationally dense, that even a modest increase extinguishes the population. PMID:10570172
Issues in stinging insect allergy immunotherapy: a review.
Finegold, Ira
2008-08-01
The treatment of insect allergy by desensitization still presents some unanswered questions. This review will focus mainly on articles that have dealt with these issues in the past 2 years. With the publication in 2007 of Allergen Immunotherapy: a practice parameter second update, many of the key issues were reviewed and summarized. Other recent studies deal with omalizumab pretreatment of patients with systemic mastocytosis and very severe allergic reactions to immunotherapy. It would appear that venom immunotherapy is somewhat unique compared to inhalant allergen immunotherapy in that premedication prior to rush protocols may not be necessary and that intervals of therapy may be longer than with allergen immunotherapy. The use of concomitant medications such as beta-blockers may be indicated in special situations. Angiotensin-converting enzyme inhibitors can be stopped temporarily before venom injections to prevent reactions. The issue of when to discontinue immunotherapy remains unsettled and should be individualized to patient requirements. The newest revision of the Immunotherapy Parameters provides much needed information concerning successful treatment with immunotherapy of Hymenoptera-sensitive patients.
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.
Shared environmental influences on personality: A combined twin and adoption approach
Matteson, Lindsay K.; McGue, Matt; Iacono, William G.
2013-01-01
In the past, shared environmental influences on personality traits have been found to be negligible in behavior genetic studies (e.g., Bouchard & McGue, 2003). However, most studies have been based on biometrical modeling of twins only. Failure to meet key assumptions of the classical twin design could lead to biased estimates of shared environmental effects. Alternative approaches to the etiology of personality are needed. In the current study we estimated the impact of shared environmental factors on adolescent personality by simultaneously modeling both twin and adoption data. We found evidence for significant shared environmental influences on Multidimensional Personality Questionnaire (MPQ) Absorption (15% variance explained), Alienation (10%), Harm Avoidance (14%), and Traditionalism (26%) scales. Additionally, we found that in most cases biometrical models constraining parameter estimates to be equal across study type (twins versus adoptees) fit no worse than models allowing these parameters to vary; this suggests that results converge across study design despite the potential (sometimes opposite) biases of twin and adoption studies. Thus, we can be more confident that our findings represent the true contribution of shared environmental variance to personality development. PMID:24065564
Chahal, Manjit; Celler, George K; Jaluria, Yogesh; Jiang, Wei
2012-02-13
Employing a semi-analytic approach, we study the influence of key structural and optical parameters on the thermo-optic characteristics of photonic crystal waveguide (PCW) structures on a silicon-on-insulator (SOI) platform. The power consumption and spatial temperature profile of such structures are given as explicit functions of various structural, thermal and optical parameters, offering physical insight not available in finite-element simulations. Agreement with finite-element simulations and experiments is demonstrated. Thermal enhancement of the air-bridge structure is analyzed. The practical limit of thermo-optic switching power in slow light PCWs is discussed, and the scaling with key parameters is analyzed. Optical switching with sub-milliwatt power is shown viable.
Washington state short line rail inventory and needs assessment.
DOT National Transportation Integrated Search
2015-06-01
The recently completed State Rail Plan for the state of Washington identified several key issues facing the state's rail system. Among these key issues are abandonment, port access and competitive needs of the ports and local production regions...
Fast adaptive optical system for the high-power laser beam correction in atmosphere
NASA Astrophysics Data System (ADS)
Kudryashov, Alexis; Lylova, Anna; Samarkin, Vadim; Sheldakova, Julia; Alexandrov, Alexander
2017-09-01
Key elements of the fast adaptive optical system (AOS), having correction frequency of 1400 Hz, for atmospheric turbulence compensation, are described in this paper. A water-cooled bimorph deformable mirror with 46 electrodes, as well as stacked actuator deformable mirror with 81 piezoactuators and 2000 Hz Shack-Hartmann wavefront sensor were considered to be used to control the light beam. The parameters of the turbulence at the 1.2 km path of the light propagation were measured and analyzed. The key parameters for such an adaptive system were worked out.
Key aspects of cost effective collector and solar field design
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Nicodemo, Dario; Keck, Thomas; Weinrebe, Gerhard; Balz, Markus
2016-05-01
A study has been performed where different key parameters influencing solar field cost are varied. By using levelised cost of energy as figure of merit it is shown that parameters like GoToStow wind speed, heliostat stiffness or tower height should be adapted to respective site conditions from an economical point of view. The benchmark site Redstone (Northern Cape Province, South Africa) has been compared to an alternate site close to Phoenix (AZ, USA) regarding site conditions and their effect on cost-effective collector and solar field design.
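A small worked sketch of the figure of merit follows: levelised cost of energy computed with a capital recovery factor, used to compare two hypothetical design variants; all cost and yield figures are invented.

```python
def lcoe(capex, opex_per_year, annual_energy_mwh, discount_rate=0.08, lifetime_years=25):
    """Levelised cost of energy using a capital recovery factor (simple real-terms form)."""
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / \
          ((1 + discount_rate) ** lifetime_years - 1)
    return (capex * crf + opex_per_year) / annual_energy_mwh   # currency units per MWh

# Hypothetical comparison: baseline field vs taller tower (more capex, better optical yield)
base = lcoe(capex=600e6, opex_per_year=15e6, annual_energy_mwh=450_000)
tall = lcoe(capex=620e6, opex_per_year=15e6, annual_energy_mwh=470_000)
print(f"baseline LCOE {base:6.1f} per MWh   taller tower {tall:6.1f} per MWh")
```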
The PROCARE consortium: toward an improved allocation strategy for kidney allografts.
Otten, H G; Joosten, I; Allebes, W A; van der Meer, A; Hilbrands, L B; Baas, M; Spierings, E; Hack, C E; van Reekum, F; van Zuilen, A D; Verhaar, M C; Bots, M L; Seelen, M A J; Sanders, J S F; Hepkema, B G; Lambeck, A J; Bungener, L B; Roozendaal, C; Tilanus, M G J; Vanderlocht, J; Voorter, C E; Wieten, L; van Duijnhoven, E; Gelens, M; Christiaans, M; van Ittersum, F; Nurmohamed, A; Lardy, N M; Swelsen, W T; van Donselaar-van der Pant, K A M I; van der Weerd, N C; Ten Berge, I J M; Bemelman, F J; Hoitsma, A J; de Fijter, J W; Betjes, M G H; Roelen, D L; Claas, F H J
2014-10-01
Kidney transplantation is the best treatment option for patients with end-stage renal failure. At present, approximately 800 Dutch patients are registered on the active waiting list of Eurotransplant. The waiting time in the Netherlands for a kidney from a deceased donor is on average between 3 and 4 years. During this period, patients are fully dependent on dialysis, which replaces only partly the renal function, whereas the quality of life is limited. Mortality among patients on the waiting list is high. In order to increase the number of kidney donors, several initiatives have been undertaken by the Dutch Kidney Foundation including national calls for donor registration and providing information on organ donation and kidney transplantation. The aim of the national PROCARE consortium is to develop improved matching algorithms that will lead to a prolonged survival of transplanted donor kidneys and a reduced HLA immunization. The latter will positively affect the waiting time for a retransplantation. The present algorithm for allocation is among others based on matching for HLA antigens, which were originally defined by antibodies using serological typing techniques. However, several studies suggest that this algorithm needs adaptation and that other immune parameters which are currently not included may assist in improving graft survival rates. We will employ a multicenter-based evaluation on 5429 patients transplanted between 1995 and 2005 in the Netherlands. The association between key clinical endpoints and selected laboratory defined parameters will be examined, including Luminex-defined HLA antibody specificities, T and B cell epitopes recognized on the mismatched HLA antigens, non-HLA antibodies, and also polymorphisms in complement and Fc receptors functionally associated with effector functions of anti-graft antibodies. From these data, key parameters determining the success of kidney transplantation will be identified which will lead to the identification of additional parameters to be included in future matching algorithms aiming to extend survival of transplanted kidneys and to diminish HLA immunization. Computer simulation studies will reveal the number of patients having a direct benefit from improved matching, the effect on shortening of the waiting list, and the decrease in waiting time. Copyright © 2014. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A; Cole, Wesley J; Sun, Yinong
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
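To make the chronological idea concrete, the sketch below (a minimal illustration, not the ReEDS implementation) computes two of the key parameters from synthetic hourly profiles: a capacity-credit estimate based on variable-generation output during the highest load and net-load hours, and a curtailment fraction once a hypothetical must-run thermal floor is imposed. The profiles, the number of top hours, and the must-run level are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760

# Hypothetical hourly profiles (MW); a real CEM study would use measured data.
load = 800 + 200 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 40, hours)
vg = np.clip(300 * np.sin(2 * np.pi * (np.arange(hours) % 24 - 6) / 24), 0, None)  # solar-like

net_load = load - vg

# Capacity credit: average VG output during the 100 highest load and net-load hours.
top = 100
cc_load = vg[np.argsort(load)[-top:]].mean() / vg.max()
cc_net = vg[np.argsort(net_load)[-top:]].mean() / vg.max()

# Curtailment: VG above (load - must-run floor) cannot be absorbed.
must_run = 400.0  # hypothetical inflexible thermal minimum (MW)
usable = np.minimum(vg, np.maximum(load - must_run, 0.0))
curtailment_frac = 1.0 - usable.sum() / vg.sum()

print(f"capacity credit (top load hours):     {cc_load:.2f}")
print(f"capacity credit (top net-load hours): {cc_net:.2f}")
print(f"curtailment fraction:                 {curtailment_frac:.2%}")
```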
Songhurst, Anna; Coulson, Tim
2014-03-01
Few universal trends in spatial patterns of wildlife crop-raiding have been found. Variations in wildlife ecology and movements, and human spatial use have been identified as causes of this apparent unpredictability. However, varying spatial patterns of spatial autocorrelation (SA) in human-wildlife conflict (HWC) data could also contribute. We explicitly explore the effects of SA on wildlife crop-raiding data in order to facilitate the design of future HWC studies. We conducted a comparative survey of raided and nonraided fields to determine key drivers of crop-raiding. Data were subsampled at different spatial scales to select independent raiding data points. The model derived from all data was fitted to subsample data sets. Model parameters from these models were compared to determine the effect of SA. Most methods used to account for SA in data attempt to correct for the change in P-values; yet, by subsampling data at broader spatial scales, we identified changes in regression estimates. We consequently advocate reporting both model parameters across a range of spatial scales to help biological interpretation. Patterns of SA vary spatially in our crop-raiding data. Spatial distribution of fields should therefore be considered when choosing the spatial scale for analyses of HWC studies. Robust key drivers of elephant crop-raiding included raiding history of a field and distance of field to a main elephant pathway. Understanding spatial patterns and determining reliable socio-ecological drivers of wildlife crop-raiding is paramount for designing mitigation and land-use planning strategies to reduce HWC. Spatial patterns of HWC are complex, determined by multiple factors acting at more than one scale; therefore, studies need to be designed with an understanding of the effects of SA. Our methods are accessible to a variety of practitioners to assess the effects of SA, thereby improving the reliability of conservation management actions.
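A minimal sketch of the subsampling idea described above, using synthetic field data rather than the study's survey: a raided/non-raided logistic regression is fitted to all points and then to spatially thinned subsets with increasing minimum spacing, and the regression estimates are compared across scales. The covariates, coordinates, and spacing thresholds are illustrative assumptions; the authors' actual model and scales are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Synthetic fields: coordinates (km), distance to an elephant pathway, raiding history.
xy = rng.uniform(0, 50, size=(n, 2))
dist_path = rng.exponential(2.0, n)
history = rng.binomial(1, 0.3, n)
logit = 1.5 * history - 0.8 * dist_path + rng.normal(0, 0.5, n)
raided = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([history, dist_path])

def thin(xy, min_dist, rng):
    """Greedy spatial thinning: keep points at least min_dist apart."""
    order = rng.permutation(len(xy))
    kept = []
    for i in order:
        if all(np.linalg.norm(xy[i] - xy[j]) >= min_dist for j in kept):
            kept.append(i)
    return np.array(kept)

for min_dist in [0.0, 2.0, 5.0, 10.0]:
    idx = thin(xy, min_dist, rng) if min_dist > 0 else np.arange(n)
    coef = LogisticRegression().fit(X[idx], raided[idx]).coef_[0]
    print(f"min spacing {min_dist:4.1f} km  n={len(idx):3d}  "
          f"history={coef[0]:+.2f}  dist_to_path={coef[1]:+.2f}")
```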
Lewis, Thomas L; Fothergill, Rachael T; Karthikesalingam, Alan
2016-10-24
Rupture of an abdominal aortic aneurysm (rAAA) carries a considerable mortality rate and is often fatal. rAAA can be treated through open or endovascular surgical intervention, and more rapid access to definitive intervention might be a key aspect of improving mortality for rAAA. Diagnosis is not always straightforward, with up to 42% of rAAA initially misdiagnosed, introducing potentially harmful delay. There is a need for an effective clinical decision support tool for accurate prehospital diagnosis and triage to enable transfer to an appropriate centre. Prospective multicentre observational study assessing the diagnostic accuracy of a prehospital smartphone triage tool for detection of rAAA. The study will be conducted across London in conjunction with London Ambulance Service (LAS). A logistic score predicting the risk of rAAA by assessing ten key parameters was developed and retrospectively validated through logistic regression analysis of ambulance records and Hospital Episode Statistics data for 2200 patients from 2005 to 2010. The triage tool is integrated into a secure mobile app for major smartphone platforms. Key parameters collected from the app will be retrospectively matched with the final hospital discharge diagnosis for each patient encounter. The primary outcome is to assess the sensitivity, specificity and positive predictive value of the rAAA triage tool logistic score in prospective use as a mobile app by prehospital ambulance clinicians. Data collection started in November 2014 and the study will recruit a minimum of 1150 non-consecutive patients over a period of 2 years. Full ethical approval has been gained for this study. The results of this study will be disseminated in peer-reviewed publications and international/national presentations. CPMS 16459; pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Security of Y-00 and Similar Quantum Cryptographic Protocols
2004-11-16
The security of Y-00 type protocols is clarified. Key words: Quantum cryptography. PACS: 03.67.Dd. A new approach to quantum cryptography called KCQ (keyed ...classical-noise key generation [2] or the well-known BB84 quantum protocol [3]. A special case called αη (or Y-00 in Japan) has been experimentally in... quantum noise for typical operating parameters. It weakens both the data and key security, possibly information-theoretically and certainly
NASA Astrophysics Data System (ADS)
Tagaris, Efthimios; -Eleni Sotiropoulou, Rafaella; Sotiropoulos, Andreas; Spanos, Ioannis; Milonas, Panayiotis; Michaelakis, Antonios
2017-04-01
The establishment and seasonal abundance of Invasive Mosquito Species (IMS) in a region are related to climatic parameters such as temperature and precipitation. In this work the current state is assessed using data from the European Climate Assessment and Dataset (ECA&D) project over Greece and Italy for the development of current spatial risk databases of IMS. Results are validated against a prototype IMS monitoring device that has been designed and developed in the framework of the LIFE CONOPS project and installed at key points across the two countries. Since climate models suggest changes in future temperature and precipitation rates, the future potential for IMS establishment and spread over Greece and Italy is assessed using the climatic parameters for the 2050s provided by the NASA GISS GCM ModelE under the IPCC A1B emissions scenario. The need for regional climate projections at a finer grid size is assessed using the Weather Research and Forecasting (WRF) model to dynamically downscale the GCM simulations. The estimated changes in the future meteorological parameters are combined with the observation data in order to estimate the future levels of the climatic parameters of interest. The final product includes spatial distribution maps presenting the future suitability of a region for the establishment and seasonal abundance of IMS over Greece and Italy. Acknowledgement: LIFE CONOPS project "Development & demonstration of management plans against - the climate change enhanced - invasive mosquitoes in S. Europe" (LIFE12 ENV/GR/000466).
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
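The tail-raising effect described above can be illustrated with a small Monte Carlo sketch: an energy-detector-like statistic is simulated once with fixed mean signal and noise power and once with log-normally distributed values of both, and the empirical ROC is read off at low false-alarm probabilities. The statistic, the distributions, and the uncertainty levels are assumptions chosen for illustration, not the paper's scattering model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

def roc(noise_stat, signal_stat, pfa_grid):
    # Thresholds set from the empirical noise-only distribution at each Pfa.
    thr = np.quantile(noise_stat, 1 - pfa_grid)
    return np.array([(signal_stat > t).mean() for t in thr])

pfa = np.logspace(-4, -0.3, 30)

# Case 1: known parameters (fixed signal mean and noise power).
noise = rng.normal(0, 1.0, n_trials) ** 2
signal = rng.normal(2.0, 1.0, n_trials) ** 2
pd_known = roc(noise, signal, pfa)

# Case 2: uncertain parameters; noise power and mean signal vary trial to trial.
sigma = rng.lognormal(mean=0.0, sigma=0.3, size=n_trials)          # uncertain noise std
amp = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n_trials)    # uncertain mean amplitude
noise_u = (rng.normal(0, 1.0, n_trials) * sigma) ** 2
signal_u = (rng.normal(0, 1.0, n_trials) * sigma + amp) ** 2
pd_uncertain = roc(noise_u, signal_u, pfa)

for p, a, b in zip(pfa[::6], pd_known[::6], pd_uncertain[::6]):
    print(f"Pfa={p:8.1e}  Pd(known)={a:.3f}  Pd(uncertain)={b:.3f}")
```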
Research on Product Conceptual Design Based on Integrated of TRIZ and HOQ
NASA Astrophysics Data System (ADS)
Xie, Jianmin; Tang, Xiaowo; Shao, Yunfei
Conceptual design determines the quality of the final product and its competitiveness in the market. Determining the design parameters and finding an effective method to resolve contradictions among them are the keys to success. In this paper, House of Quality (HOQ) is used to determine the product design parameters, and the TRIZ contradiction matrix and inventive principles are then applied to resolve the contradictions among these parameters. The approach proves to be an effective method for obtaining conceptual design parameters and resolving the contradictions among them.
System Engineering Analysis For Improved Scout Business Information Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Slyke, D. A.
The project uses system engineering principles to address the need of Boy Scout leaders for an integrated system to facilitate advancement and awards records, leader training, and planning for meetings and activities. Existing products to address the needs of Scout leaders and relevant stakeholders support record keeping and some communication functions, but opportunity exists for a better system to fully integrate these functions with training delivery and recording, activity planning, and feedback and information gathering from stakeholders. Key stakeholders for the system include Scouts and their families, leaders, training providers, sellers of supplies and awards, content generators, and facilities that serve Scout activities. Key performance parameters for the system are protection of personal information, availability of current information, information accuracy, and information content that has depth. Implementation concepts considered for the system include (1) owned and operated by Boy Scouts of America, (2) contracted out to a vendor, and (3) a distributed system that functions with BSA-managed interfaces. The selected concept is to contract out to a vendor to maximize the likelihood of successful integration and take advantage of the best technology. Development of requirements considers three key use cases: (1) the system facilitates planning a hike, with needed training satisfied in advance and advancement recorded in real time; (2) scheduling and documenting in-person training; and (3) a family interested in Scouting receives information and can request follow-up. Non-functional requirements are analyzed with the Quality Function Deployment tool. Requirements addressing frequency of backup, compatibility with legacy and new technology, language support, and software updates are developed to address system reliability and an intuitive interface. System functions analyzed include update of the activity database, maintenance of advancement status, archiving of documents, and monitoring of accessible content. The study examines risks associated with information security, technological change, and the continued popularity of Scouting. Mitigation is based on the system functions that are defined. The approach to developing an improved system for facilitating Boy Scout leader functions was iterative, with insights into capabilities coming in the course of working through the use cases and sequence diagrams.
StratBAM: A Discrete-Event Simulation Model to Support Strategic Hospital Bed Capacity Decisions.
Devapriya, Priyantha; Strömblad, Christopher T B; Bailey, Matthew D; Frazier, Seth; Bulger, John; Kemberling, Sharon T; Wood, Kenneth E
2015-10-01
The ability to accurately measure and assess current and potential health care system capacities is an issue of local and national significance. Recent joint statements by the Institute of Medicine and the Agency for Healthcare Research and Quality have emphasized the need to apply industrial and systems engineering principles to improving health care quality and patient safety outcomes. To address this need, a decision support tool was developed for planning and budgeting of current and future bed capacity, and evaluating potential process improvement efforts. The Strategic Bed Analysis Model (StratBAM) is a discrete-event simulation model created after a thorough analysis of patient flow and data from Geisinger Health System's (GHS) electronic health records. Key inputs include: timing, quantity and category of patient arrivals and discharges; unit-level length of care; patient paths; and projected patient volume and length of stay. Key outputs include: admission wait time by arrival source and receiving unit, and occupancy rates. Electronic health records were used to estimate parameters for probability distributions and to build empirical distributions for unit-level length of care and for patient paths. Validation of the simulation model against GHS operational data confirmed its ability to model real-world data consistently and accurately. StratBAM was successfully used to evaluate the system impact of forecasted patient volumes and length of stay in terms of patient wait times, occupancy rates, and cost. The model is generalizable and can be appropriately scaled for larger and smaller health care settings.
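A toy discrete-event sketch in the spirit of the model described above, written with the SimPy library: patients from two hypothetical arrival streams wait for a bed, occupy it for a sampled length of stay, and the run reports admission waits and approximate occupancy. The bed count, arrival rates, and lengths of stay are placeholders, not Geisinger data, and the real model's patient paths and unit structure are not reproduced.

```python
import random
import simpy

RANDOM_SEED, SIM_DAYS, N_BEDS = 42, 60, 25
ARRIVALS_PER_DAY = {"ED": 6.0, "elective": 3.0}   # hypothetical arrival rates
MEAN_LOS_DAYS = {"ED": 3.5, "elective": 2.0}      # hypothetical lengths of stay

waits, busy_bed_days = {"ED": [], "elective": []}, 0.0

def patient(env, beds, source):
    global busy_bed_days
    arrival = env.now
    with beds.request() as req:
        yield req                                  # wait for a free bed
        waits[source].append(env.now - arrival)
        los = random.expovariate(1.0 / MEAN_LOS_DAYS[source])
        busy_bed_days += los
        yield env.timeout(los)                     # occupy the bed

def arrivals(env, beds, source):
    while True:
        yield env.timeout(random.expovariate(ARRIVALS_PER_DAY[source]))
        env.process(patient(env, beds, source))

random.seed(RANDOM_SEED)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
for source in ARRIVALS_PER_DAY:
    env.process(arrivals(env, beds, source))
env.run(until=SIM_DAYS)

for source, w in waits.items():
    print(f"{source:9s} mean wait = {24 * sum(w) / max(len(w), 1):.1f} h over {len(w)} admissions")
print(f"average occupancy ≈ {100 * busy_bed_days / (N_BEDS * SIM_DAYS):.0f}%")
```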
Steil, Garry M; Hipszer, Brian; Reifman, Jaques
2010-05-01
One year after its initial meeting, the Glycemia Modeling Working Group reconvened during the 2009 Diabetes Technology Meeting in San Francisco, CA. The discussion, involving 39 scientists, again focused on the need for individual investigators to have access to the clinical data required to develop and refine models of glucose metabolism, the need to understand the differences among the distinct models and control algorithms, and the significance of day-to-day subject variability. The key conclusion was that model-based comparisons of different control algorithms, or the models themselves, are limited by the inability to access individual model-patient parameters. It was widely agreed that these parameters, as opposed to the average parameters that are typically reported, are necessary to perform such comparisons. However, the prevailing view was that, if investigators were to make the parameters available, it would limit their ability (and that of their institution) to benefit from the invested work in developing their models. A general agreement was reached regarding the importance of each model having an insulin pharmacokinetic/pharmacodynamic profile that is not different from profiles reported in the literature (88% of the respondents agreed that the model should have similar curves or be analyzed separately) and the importance of capturing intraday variance in insulin sensitivity (91% of the respondents indicated that this could result in changes in fasting glucose of ≥15%, with 52% of the respondents believing that the variability could effect changes of ≥30%). Seventy-six percent of the participants indicated that high-fat meals were thought to effect changes in other model parameters in addition to gastric emptying. There was also widespread consensus as to how a closed-loop controller should respond to day-to-day changes in model parameters (with 76% of the participants indicating that fasting glucose should be within 15% of target, with 30% of the participants believing that it should be at target). The group was evenly divided as to whether the glucose sensor per se continues to be the major obstacle in achieving closed-loop control. Finally, virtually all participants agreed that a future two-day workshop should be organized to compare, contrast, and understand the differences among the different models and control algorithms. © 2010 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Almehmadi, Fares S.; Chatterjee, Monish R.
2014-12-01
Using intensity feedback, the closed-loop behavior of an acousto-optic hybrid device under profiled beam propagation has been recently shown to exhibit wider chaotic bands potentially leading to an increase in both the dynamic range and sensitivity to key parameters that characterize the encryption. In this work, a detailed examination is carried out vis-à-vis the robustness of the encryption/decryption process relative to parameter mismatch for both analog and pulse code modulation signals, and bit error rate (BER) curves are used to examine the impact of additive white noise. The simulations with profiled input beams are shown to produce a stronger encryption key (i.e., much lower parametric tolerance thresholds) relative to simulations with uniform plane wave input beams. In each case, it is shown that the tolerance for key parameters drops by factors ranging from 10 to 20 times below those for uniform plane wave propagation. Results are shown to be at consistently lower tolerances for secure transmission of analog and digital signals using parameter tolerance measures, as well as BER performance measures for digital signals. These results hold out the promise for considerably greater information transmission security for such a system.
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
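The last two steps can be illustrated with a toy sketch: estimate the quantum bit error rate (QBER) from a disclosed random sample of the sifted key, then compress the remaining bits with a hash whose output length is reduced according to that estimate (privacy amplification). Error correction is skipped (the sketch assumes the reconciled keys already agree), and the SHAKE-based hash and the key-length rule are simplified stand-ins for the universal hashing and finite-key bounds an industrial system would use.

```python
import hashlib
import math
import secrets

def h2(p):
    """Binary entropy function."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Toy sifted keys for Alice and Bob (Bob's copy differs in roughly 2% of positions).
n = 4096
alice = [secrets.randbits(1) for _ in range(n)]
bob = [b ^ (1 if secrets.randbelow(100) < 2 else 0) for b in alice]

# Parameter estimation: disclose a random sample of positions to estimate the QBER.
sample = set(secrets.SystemRandom().sample(range(n), 512))
qber = sum(alice[i] != bob[i] for i in sample) / len(sample)

# Privacy amplification (illustrative): keep the undisclosed bits and hash them down
# to a length reduced by the eavesdropper's estimated information plus a crude margin.
kept = [alice[i] for i in range(n) if i not in sample]
secure_len_bits = max(int(len(kept) * (1.0 - h2(qber))) - 128, 0)
final_key = hashlib.shake_256(bytes(kept)).digest(secure_len_bits // 8)

print(f"estimated QBER = {qber:.3f}, final key length = {len(final_key)} bytes")
```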
Mitochondria are key regulators of cellular energy homeostasis and may play a key role in the mechanisms of neurodegenerative disorders and chemical induced neurotoxicity. However, mitochondrial bioenergetic parameters have not been systematically evaluated within multiple brain ...
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes based on weak coherent sources or heralded single-photon sources, after implementing full parameter optimization our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.
Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B
2018-01-01
We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
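A minimal sketch of the Latin hypercube parameter-search step, applied to a stand-in one-compartment antibody-decay model rather than the paper's immune model: parameter sets are drawn with scipy's Latin hypercube sampler, scored against synthetic "clinical" time-series data by sum of squared residuals, and the best-fitting sets are retained. The bounds, the model form, and the data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Stand-in "clinical" data: antibody level decaying after vaccination (arbitrary units).
t = np.array([0, 7, 14, 28, 56, 84], dtype=float)
observed = 10.0 * np.exp(-0.03 * t) + rng.normal(0, 0.3, t.size)

def model(theta, t):
    a0, decay = theta
    return a0 * np.exp(-decay * t)

# Latin hypercube sample of the parameter space (bounds are assumptions).
bounds_lo, bounds_hi = np.array([1.0, 0.001]), np.array([20.0, 0.2])
sampler = qmc.LatinHypercube(d=2, seed=1)
theta_samples = qmc.scale(sampler.random(n=2000), bounds_lo, bounds_hi)

# Score every sampled parameter set by sum of squared residuals and keep the best fits.
sse = np.array([np.sum((model(th, t) - observed) ** 2) for th in theta_samples])
best = theta_samples[np.argsort(sse)[:10]]

print("best-fit parameter sets (a0, decay rate):")
print(np.round(best, 3))
```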
Lu, Y; Zhang, M
2016-08-20
Objective: To study the applicability, the most frequently used content, the feasibility, and the issues that need to be solved of the standard GBZ 1-2010, aiming to provide technical evidence for the revision of GBZ 1. Methods: Data were collected from the literature database and by questionnaire from June 2013 to June 2015. Two surveys were carried out in the study, using questionnaire surveys and specific interviews. The investigation methods included a paper questionnaire sent by mail, an electronic questionnaire sent by e-mail, and an online survey; 111 questionnaires were collected. Results: In total, the applicability survey (the first survey) received 156 suggestions covering 76 items from 23 facilities, and 13 key technical issues were summarized to be solved as priorities. In the application survey (the second survey), the leading three tasks using GBZ 1-2010 were occupational hazards evaluation for construction projects (82.0%), lecturing/training (65.8%), and occupational hazards monitoring (64.9%). The most frequently used contents of GBZ 1-2010 were the sixth part "the basic hygienic requirements for workplace" (90.1%), the fifth part "site selection, overall layout and workshop design" (87.4%), and the seventh part "the basic hygienic requirements for welfare room" (85.6%). In the feasibility results, the scores of the fourth part "the general rules", the fifth part "site selection, overall layout and workshop design", the sixth part "the basic hygienic requirements for workplace", the seventh part "the basic hygienic requirements for welfare room", the eighth part "emergency rescue", annex A "the correct use instructions", and annex B "buffer zone standards for industrial enterprises" were 2.6, 3.1, 3.5, 3.8, 3.2, 3.3, and 2.6, respectively. Among the 111 questionnaires, the parts that need to be modified as priorities were the fifth part "site selection, overall layout and workshop design" (51.4%) and the sixth part "the basic hygienic requirements for workplace" (51.4%). Regarding the key technical issues of GBZ 1-2010 that need to be modified, the contents to be added as priorities were the occupational prevention and control requirements for biological factors (51.4%), technical parameters of dust in the workplace (48.7%), technical parameters of hazardous agents in the workplace (46.9%), the quality and quantity requirements of fresh air (46.0%), the setting conditions of the emergency rescue station (46.0%), the hygienic design requirements of joint workshops and the supporting evidence (45.1%), and requirements for medical emergency rescue personnel to be equipped and qualified (45.1%). Conclusion: GBZ 1-2010 is feasible and practical, and is mainly used by occupational health technical service organizations in occupational hazards evaluation for construction projects, lecturing/training, and occupational hazards monitoring. GBZ 1 plays a directive role in government decision-making, control of construction projects from the beginning, training and capacity building of occupational health professionals, and the prevention and treatment of occupational diseases in enterprises; its implementation needs to be strengthened. On the basis of the above key technical issues to be revised, international cooperation and exchanges should be strengthened so that the standard is adapted to the development of the modern enterprise system.
Møller, Jacob; Boldrin, Alessio; Christensen, Thomas H
2009-11-01
Anaerobic digestion (AD) of source-separated municipal solid waste (MSW) and use of the digestate is presented from a global warming (GW) point of view by providing ranges of greenhouse gas (GHG) emissions that are useful for calculation of global warming factors (GWFs), i.e. the contribution to GW measured in CO2-equivalents per tonne of wet waste. The GHG accounting was done by distinguishing between direct contributions at the AD facility and indirect upstream or downstream contributions. GHG accounting for a generic AD facility with either biogas utilization at the facility or upgrading of the gas for vehicle fuel resulted in a GWF from -375 (a saving) to 111 (a load) kg CO2-eq. per tonne of wet waste. In both cases the digestate was used for fertilizer substitution. This large range was a result of the variation found for a number of key parameters: energy substitution by biogas, N2O emission from digestate in soil, fugitive emission of CH4, unburned CH4, carbon bound in soil, and fertilizer substitution. The GWF for a specific type of AD facility was in the range -95 to -4 kg CO2-eq. per tonne of wet waste. The ranges of uncertainty, especially of fugitive losses of CH4 and carbon sequestration, highly influenced the result. In comparison with the few published GWFs for AD, the range of our data was much larger, demonstrating the need to use a consistent and robust approach to GHG accounting and simultaneously accept that some key parameters are highly uncertain.
Detailed assessment of global transport-energy models’ structures and projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Sonia; Mishra, Gouri Shankar; Fulton, Lew
This paper focuses on comparing the frameworks and projections from four major global transportation models with considerable transportation technology and behavioral detail. We analyze and compare the modeling frameworks, underlying data, assumptions, intermediate parameters, and projections to identify the sources of divergence or consistency, as well as key knowledge gaps. We find that there are significant differences in the base-year data and key parameters for future projections, especially for developing countries. These include passenger and freight activity, mode shares, vehicle ownership rates, and even energy consumption by mode, particularly for shipping, aviation and trucking. This may be due in part to a lack of previous efforts to do such consistency-checking and “bench-marking.” We find that the four models differ in terms of the relative roles of various mitigation strategies to achieve a 2°C / 450 ppm CO2e target: the economics-based integrated assessment models favor the use of low carbon fuels as the primary mitigation option followed by efficiency improvements, whereas transport-only and expert-based models favor efficiency improvements of vehicles followed by mode shifts. We offer recommendations for future modeling improvements focusing on (1) reducing data gaps; (2) translating the findings from this study into relevant policy implications such as feasibility of current policy goals, additional policy targets needed, regional vs. global reductions, etc.; (3) modeling strata of demographic groups to improve understanding of vehicle ownership levels, travel behavior, and urban vs. rural considerations; and (4) conducting coordinated efforts in aligning input assumptions and historical data, policy analysis, and modeling insights.
NASA Astrophysics Data System (ADS)
Forsythe, Nathan; Kilsby, Chris G.; Fowler, Hayley J.; Archer, David R.
2010-05-01
The water resources of the Upper Indus Basin (UIB) are of the utmost importance to the economic wellbeing of Pakistan. The irrigated agriculture made possible by Indus river runoff underpins the food security of Pakistan's nearly 200 million people. Contributions from hydropower account for more than one fifth of peak installed electrical generating capacity in a country where widespread, prolonged load-shedding handicaps business activity and industrial development. Pakistan's further socio-economic development thus depends largely on optimisation of its precious water resources. Confident, accurate projections of future water resource availability and variability are urgently needed by development planners and infrastructure managers at all levels. Correctly projecting future hydrological conditions depends first and foremost on a thorough understanding of the underlying mechanisms and processes of the present hydroclimatology. The vertical and horizontal spatial variations in key climate parameters (temperature, precipitation) govern the contributions of the various elevation zones and subcatchments comprising the UIB. Trends in this complex mountainous region are highly varied by season and parameter. Observed changes here often do not match general global trends or even necessarily those found in neighbouring regions. This study considers data from a variety of sources in order to compose the most complete picture possible of the vertical hydroclimatology of the UIB. The study presents the observed climatology and trends for precipitation and temperature from local observations at long-record meteorological stations (Pakistan Meteorological Department). These data are compared to characterisations of additional water cycle parameters (humidity, cloud, snow cover and snow-water-equivalent) derived from local short-record automatic weather stations, the ECMWF 'ERA' reanalysis projects and satellite-based observations (AVHRR, MODIS, etc.). The potential implications of the vertical (hypsometric) distribution of these parameters are considered. Interlinkages between observed changes in these parameters and the evolution of large-scale circulation indices (ENSO, NAO, local vorticity) are also investigated. In parallel to these climatological considerations, the study presents the typology of the observed UIB hydrological regimes -- glacial, nival and pluvial -- including interannual variability as quantified from the available river gauging record. In order to begin to assess potential implications of future climate change on UIB hydrology, key modes of variability in the climate parameters are identified. The study then analyses in detail the corresponding observed anomalies in UIB discharge for years exemplifying these modes. In conclusion, this work postulates potential impacts on hydrological variability stemming from a continuation of the estimated present local climatic trends.
[Are non-invasive tests going to replace liver biopsy for diagnosis of liver fibrosis?].
Restellini, Sophie; Spahr, Laurent
2012-06-27
Liver fibrosis is associated with chronic liver diseases and may evolve into cirrhosis, which may be complicated by liver failure and portal hypertension. Detection and quantification of liver fibrosis is a key point in the follow-up of patients with chronic liver diseases. Liver biopsy is the gold standard method to assess and quantify fibrosis, but its invasiveness is a limiting factor in everyday clinical practice. Non-invasive markers using either biological or radiological parameters have been developed and may decrease the need for liver biopsy in some cases. However, the information is limited to fibrosis, and cut-off values and diagnostic accuracies for significant fibrosis may vary according to the etiology of the liver disease. Liver biopsy allows the assessment of intermediate stages of fibrosis and describes accompanying lesions.
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
Stream temperature and stage monitoring using fisherman looking for fish.
NASA Astrophysics Data System (ADS)
Hut, Rolf; Tyler, Scott
2015-04-01
Fly fishing is a popular pastime in large parts of the world. Two key facts that fly fishermen need to know to find the ideal fishing spot are water depth and water temperature. These are also two parameters of interest to hydrologists, especially those interested in the hyporheic zone. We present a device that serves both fishermen and hydrologists: sensor-waders. A classic pair of waders is equipped with temperature and water height sensors. Measurement values are communicated to an app on the smartphone of the fisherman. This app provides the fisherman with real-time information on local conditions. By using the geolocation of the smartphone, the measurement values are also sent to a remote server for use in hydrological research. We will present a first proof of concept of the sensor-waders.
Simulation reduction using the Taguchi method
NASA Technical Reports Server (NTRS)
Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.
1993-01-01
A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
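A minimal sketch of the orthogonal-array and signal-to-noise-ratio machinery, using an L4 array for three two-level factors and a made-up dispersion response in place of the LifeSat trajectory simulation: each run is replicated with noise, a smaller-is-better S/N ratio is computed, and factor effects on the S/N ratio are compared. The factor names, the response function, and the noise level are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# L4 orthogonal array: 3 two-level factors in 4 runs (levels coded -1 / +1).
L4 = np.array([[-1, -1, -1],
               [-1, +1, +1],
               [+1, -1, +1],
               [+1, +1, -1]])

def dispersion(x):
    """Stand-in response: landing-point dispersion for factor settings x, with noise (5 replicates)."""
    a, b, c = x
    return 5.0 + 1.5 * a - 0.8 * b + 0.3 * c + rng.normal(0, 0.2, size=5)

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio."""
    return -10.0 * np.log10(np.mean(np.asarray(y) ** 2))

sn = np.array([sn_smaller_is_better(dispersion(row)) for row in L4])

# Main effect of each factor on the S/N ratio (mean at +1 minus mean at -1).
for j, name in enumerate(["factor A", "factor B", "factor C"]):
    effect = sn[L4[:, j] == +1].mean() - sn[L4[:, j] == -1].mean()
    print(f"{name}: effect on S/N = {effect:+.2f} dB")
```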
Assuring NASA's Safety and Mission Critical Software
NASA Technical Reports Server (NTRS)
Deadrick, Wesley
2015-01-01
What is IV&V? Independent Verification and Validation (IV&V) is an objective examination of safety and mission critical software processes and products. Independence rests on three key parameters: technical independence, managerial independence, and financial independence. NASA IV&V perspectives: will the system's software do what it is supposed to do? Not do what it is not supposed to do? Respond as expected under adverse conditions? Systems Engineering: determines whether the right system has been built and that it has been built correctly. IV&V Technical Approaches: aligned with IEEE 1012; captured in a Catalog of Methods; spans the full project lifecycle. IV&V Assurance Strategy: the IV&V Project's strategy for providing mission assurance; the Assurance Strategy is driven by the specific needs of an individual project, implemented via an Assurance Design, and communicated via Assurance Statements.
How to choose the therapeutic goals to improve tissue perfusion in septic shock
de Assuncao, Murillo Santucci Cesar; Corrêa, Thiago Domingos; Bravim, Bruno de Arruda; Silva, Eliézer
2015-01-01
The early recognition and treatment of severe sepsis and septic shock are key to a successful outcome. The longer the delay in starting treatment, the worse the prognosis, due to persistent tissue hypoperfusion and the consequent development and worsening of organ dysfunction. One of the main mechanisms responsible for the development of cellular dysfunction is tissue hypoxia. The adjustments needed to match tissue blood flow, and therefore oxygen supply, to metabolic demand, guided by assessment of the cardiac index and the oxygen extraction rate, should be performed during the resuscitation period, especially in highly complex patients. New technologies that are easily handled at the bedside, and new studies that directly assess the impact of macro-hemodynamic parameter optimization on the microcirculation and on the clinical outcome of septic patients, are needed. PMID:26313438
An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate
NASA Astrophysics Data System (ADS)
Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud
2018-03-01
In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters and the geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution) in the form of a combined parameter (P^α V^β F^γ). The results indicated that the clad width depended linearly on P·V^(-1/3) and that the powder feeding rate had no effect on it. The dilution was controlled by the combined parameter V·F^(-1/2), and laser power was a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P·V^(-1)·F^(1/4), P·V·F^(-1/8), and P^(3/4)·V^(-1)·F^(-1/4), respectively. Based on the results of the correlation coefficient (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of the single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks from the key parameters.
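Combined-parameter relations of this form can be recovered by ordinary least squares on log-transformed data, since fitting ln(y) = c + α·ln(P) + β·ln(V) + γ·ln(F) yields the exponents directly. The sketch below does this on synthetic measurements whose true behaviour mimics the clad-width relation (width proportional to P·V^(-1/3)); the process ranges and noise level are assumptions, not the experimental dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Hypothetical process settings: laser power P (W), scan rate V (mm/s), feed rate F (g/min).
P = rng.uniform(200, 600, n)
V = rng.uniform(2, 12, n)
F = rng.uniform(1, 6, n)

# Synthetic clad width following width ∝ P * V**(-1/3), with multiplicative noise.
width = 0.01 * P * V ** (-1 / 3) * np.exp(rng.normal(0, 0.05, n))

# Least-squares fit of ln(width) = c + a*ln(P) + b*ln(V) + g*ln(F).
X = np.column_stack([np.ones(n), np.log(P), np.log(V), np.log(F)])
coef, *_ = np.linalg.lstsq(X, np.log(width), rcond=None)
c, a, b, g = coef
print(f"fitted exponents: alpha={a:.2f} (true 1), beta={b:.2f} (true -0.33), gamma={g:.2f} (true 0)")
```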
Urich, Christian; Rauch, Wolfgang
2014-12-01
Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections, which, if a projection stays unfulfilled, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions for urban water management does not exist yet. This paper presents a first step towards a new generation of such strategic planning tools, by combining innovative modelling tools, which coevolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is evolved spatially explicitly 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper are that: (1) such an approach can be used to successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, which is an important indicator for the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy. Copyright © 2014 Elsevier Ltd. All rights reserved.
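A minimal sketch of the exploratory step described above: a toy performance indicator is evaluated over many sampled futures of two hypothetical key drivers, and the scenarios in which the criterion fails are reported, which is the information used to judge the robustness of a strategy. The drivers, the capacity model, and the threshold are placeholders, not the Innsbruck case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 1000

# Sampled futures for two hypothetical key drivers over a 20-year horizon.
pop_growth = rng.uniform(0.0, 0.03, n_scenarios)     # annual population growth rate
rain_change = rng.uniform(-0.3, 0.2, n_scenarios)    # relative change in design rainfall

# Toy performance model: utilisation of the infrastructure capacity in year 20.
base_capacity = 1.3                                   # capacity relative to today's demand
demand = (1 + pop_growth) ** 20 * (1 + np.maximum(rain_change, 0))
utilisation = demand / base_capacity

fail = utilisation > 1.0                              # performance criterion not met
print(f"criterion fails in {fail.mean():.0%} of scenarios")
print(f"failing scenarios have population growth >= {pop_growth[fail].min():.3f}"
      if fail.any() else "no failures")
```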
Reproducibility of geochemical and climatic signals in the Atlantic coral Montastraea faveolata
Smith, Joseph M.; Quinn, T.M.; Helmle, K.P.; Halley, R.B.
2006-01-01
Monthly resolved, 41-year-long stable isotopic and elemental ratio time series were generated from two separate heads of Montastraea faveolata from Looe Key, Florida, to assess the fidelity of using geochemical variations in Montastraea, the dominant reef-building coral of the Atlantic, to reconstruct sea surface environmental conditions at this site. The stable isotope time series of the two corals replicate well; mean values of δ18O and δ13C are indistinguishable between cores (compare 0.70‰ versus 0.68‰ for δ13C and -3.90‰ versus -3.94‰ for δ18O). Mean values from the Sr/Ca time series differ by 0.037 mmol/mol, which is outside of analytical error and indicates that nonenvironmental factors are influencing the coral Sr/Ca records at Looe Key. We have generated significant δ18O-sea surface temperature (SST) (R = -0.84) and Sr/Ca-SST (R = -0.86) calibration equations at Looe Key; however, these equations are different from previously published equations for Montastraea. Variations in growth parameters or kinetic effects are not sufficient to explain either the observed differences in the mean offset between Sr/Ca time series or the disagreement between previous calibrations and our calculated δ18O-SST and Sr/Ca-SST relationships. Calibration differences are most likely due to variations in seawater chemistry in the continentally influenced waters at Looe Key. Additional geochemical replication studies of Montastraea are needed and should include multiple coral heads from open ocean localities complemented whenever possible by seawater chemistry determinations. Copyright 2006 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimation, and improvement of the three key parameters of ACO were carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of the VRP, that the values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
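The three key parameters tuned in such studies are commonly the pheromone weight α, the heuristic (distance) weight β, and the evaporation rate ρ; the sketch below shows where they enter a minimal ant-colony optimizer for a small single-vehicle routing (TSP-like) instance. The instance, parameter values, and pheromone update rule are illustrative and do not reproduce the paper's VRP model or its improvements.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_ants, n_iters = 12, 20, 100
alpha, beta, rho = 1.0, 3.0, 0.5            # pheromone weight, heuristic weight, evaporation

pts = rng.uniform(0, 100, (n_cities, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n_cities)
eta = 1.0 / dist                             # heuristic desirability (inverse distance)
tau = np.ones((n_cities, n_cities))          # pheromone matrix

def build_tour():
    tour, unvisited = [0], set(range(1, n_cities))
    while unvisited:
        i, cand = tour[-1], list(unvisited)
        w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
        nxt = int(rng.choice(cand, p=w / w.sum()))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    return sum(dist[tour[k], tour[(k + 1) % n_cities]] for k in range(n_cities))

best_tour, best_len = None, np.inf
for _ in range(n_iters):
    tours = [build_tour() for _ in range(n_ants)]
    lengths = [tour_length(t) for t in tours]
    if min(lengths) < best_len:
        best_len, best_tour = min(lengths), tours[int(np.argmin(lengths))]
    tau *= (1 - rho)                         # evaporation
    for t, L in zip(tours, lengths):         # deposit pheromone proportional to tour quality
        for k in range(n_cities):
            a, b = t[k], t[(k + 1) % n_cities]
            tau[a, b] += 1.0 / L
            tau[b, a] += 1.0 / L

print(f"best route length found: {best_len:.1f}")
```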
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant parameters and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash value is constructed from multiple low 8-bit outputs of the hyperchaotic iterations after rounding down, and security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with an avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
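The general idea of keyed chaotic hashing, absorbing message bytes into the parameters and state of a chaotic map and squeezing low-order bits of the iterates out as the digest, can be illustrated with a deliberately simplified 1D logistic-map toy. This is not the paper's 8D hyperchaotic construction and has no security guarantees; the constants and iteration counts below are arbitrary assumptions.

```python
def chaotic_hash(message: bytes, key: float = 0.654321, digest_len: int = 32) -> bytes:
    """Toy keyed hash from a 1D logistic map (illustration only, not secure)."""
    r_base, x = 3.99, key          # control parameter and key-dependent initial state
    # Absorb: each message byte slightly perturbs the control parameter, then mix.
    for b in message:
        r = r_base - (b / 255.0) * 0.01
        for _ in range(8):
            x = r * x * (1.0 - x)
    # Squeeze: keep the low 8 bits of scaled iterates as digest bytes.
    out = bytearray()
    while len(out) < digest_len:
        x = r_base * x * (1.0 - x)
        out.append(int(x * 1e6) & 0xFF)
    return bytes(out)

for msg in [b"key parameters", b"key parameterT"]:   # small change in input
    print(msg, chaotic_hash(msg).hex())
```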
Constant False Alarm Rate (CFAR) Autotrend Evaluation Report
2011-12-01
represent a level of uncertainty in the performance analysis. The performance analysis produced the following Key Performance Indicators (KPIs) as... [Abbreviations: KPI, Key Performance Indicator; MooN, M-out-of-N; MSPU, Modernized Signal Processor Unit; NFF, No Fault Found; PAT, Parameter Allocation Table; PD, ...]
Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol
NASA Astrophysics Data System (ADS)
Molotkov, S. N.
2008-07-01
In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ~ 10^-7).
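A back-of-envelope sketch of why such an upper bound on channel length appears: with the idealized single-photon bound r = 1 - 2·H2(QBER), detector efficiency, fiber loss, and dark counts set the click and error probabilities, and the key fraction collapses once dark counts dominate. The loss coefficient, optical error, and detector figures below are illustrative assumptions, and this simple model ignores multiphoton pulses and other imperfections treated in the full analysis.

```python
import math

def h2(p):
    """Binary entropy function."""
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

eta_det, p_dark, alpha_db, e_opt = 0.20, 1e-7, 0.2, 0.01   # illustrative assumptions

print(" L(km)   QBER    key fraction per pulse")
for L in range(0, 401, 50):
    t = eta_det * 10 ** (-alpha_db * L / 10)       # overall transmittance
    p_click = t + 2 * p_dark * (1 - t)             # signal or accidental detection
    qber = (e_opt * t + p_dark * (1 - t)) / p_click
    r = max(p_click * (1.0 - 2.0 * h2(qber)), 0.0) # idealized single-photon bound
    print(f"{L:5d}   {qber:.3f}   {r:.2e}")
```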
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that more tailored approaches that use domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
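The framework itself is not reproduced here, but the kind of problem it targets can be shown with a small generic baseline: a two-gene cascade ODE is simulated, noisy observations are generated, and the rate constants are recovered with scipy's least_squares. The circuit, true parameter values, bounds, and noise level are synthetic assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def circuit(t, y, k1, k2, d1, d2):
    """Two-gene cascade: gene 1 is constitutively produced and activates gene 2."""
    g1, g2 = y
    return [k1 - d1 * g1, k2 * g1 / (1.0 + g1) - d2 * g2]

def simulate(theta, t_eval):
    sol = solve_ivp(circuit, (0, t_eval[-1]), [0.0, 0.0], args=tuple(theta),
                    t_eval=t_eval, rtol=1e-8)
    return sol.y

# Synthetic "measurements" generated from known parameters plus noise.
rng = np.random.default_rng(0)
true_theta = [2.0, 3.0, 0.5, 0.3]
t_obs = np.linspace(0, 20, 15)
data = simulate(true_theta, t_obs) + rng.normal(0, 0.05, (2, t_obs.size))

def residuals(theta):
    return (simulate(theta, t_obs) - data).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0, 0.1, 0.1], bounds=(1e-3, 10.0))
print("true parameters:  ", np.round(true_theta, 3))
print("fitted parameters:", np.round(fit.x, 3))
```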
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
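A minimal sketch of the data-generating core of such a simulation study is shown below: trajectories are drawn from a dual change score model in which each latent change is the sum of a constant-change (slope) component and an autoproportion term. All population values are assumed for illustration and are not those used in the study.

```python
# A minimal sketch (assumed values, not the study's settings): simulate trajectories
# from a dual change score model, the data-generating core of the Monte Carlo study.
import numpy as np

rng = np.random.default_rng(1)
n, waves = 500, 6
alpha, beta = 1.0, -0.15          # constant-change loading and autoproportion coefficient
y0 = rng.normal(10.0, 2.0, n)     # initial latent level
s = rng.normal(3.0, 1.0, n)       # latent slope (constant change) factor

latent = np.empty((n, waves))
latent[:, 0] = y0
for t in range(1, waves):
    delta = alpha * s + beta * latent[:, t - 1]   # latent change score
    latent[:, t] = latent[:, t - 1] + delta

observed = latent + rng.normal(0.0, 1.0, latent.shape)  # add measurement error
print(observed.mean(axis=0))      # mean observed trajectory across waves
```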
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
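The classification step can be sketched as follows: means and standard deviations of psychoacoustical parameters form a feature vector that is fed to a small neural network. The feature values below are synthetic placeholders, not the paper's measured data.

```python
# A minimal sketch (synthetic features, not the paper's data): classify sound
# categories from means and standard deviations of psychoacoustical parameters.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: mean/std of fluctuation strength, loudness, sharpness (placeholder values).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 6)) for c in (0.0, 1.0, 2.0, 3.0)])
y = np.repeat(["water", "wind", "birdsong", "urban"], 40)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # rough classification accuracy
```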
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show impacts to engine mass and overall vehicle mass.
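The optimization loop can be sketched with an evolutionary solver searching over the three propulsion parameters. The objective function below is a toy surface standing in for the paper's thermo-chemical, regression, and cost models, so only the workflow, not the numbers, is meaningful.

```python
# A minimal sketch (toy objective, not the paper's thermochemical or cost models):
# search chamber pressure, area ratio, and O/F ratio with an evolutionary solver.
from scipy.optimize import differential_evolution

def toy_vehicle_mass(x):
    pc, eps, of = x                       # chamber pressure [bar], area ratio, O/F ratio
    # Illustrative penalty surface with a minimum in the interior of the bounds.
    return (120.0 / pc) + 0.02 * (eps - 60.0) ** 2 + 5.0 * (of - 6.0) ** 2

bounds = [(50.0, 250.0), (20.0, 120.0), (4.0, 8.0)]
result = differential_evolution(toy_vehicle_mass, bounds, seed=1)
print(result.x, result.fun)               # "optimal" parameters for the toy surface
```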
[Cardio-circulatory care].
Cogo, P E
2010-06-01
Mortality in pediatric cardiovascular failure has markedly improved with the advent of neonatal and pediatric intensive care and with the implementation of treatment guidelines. In 2002 the American College of Critical Care Medicine Clinical Practice Parameters for Hemodynamic Support of Pediatric and Neonatal Shock reported mortality rates of 0%-5% in previously healthy and 10% in chronically ill children with septic shock associated with implementation of "best clinical practices". Early recognition of shock is the key to successful resuscitation in critically ill children. Often, shock results in or co-exists with myocardial dysfunction or acute lung injury. Recognition and appropriate management of these insults is crucial for successful outcomes. Resuscitation should be directed to restoration of tissue perfusion and normalization of cardiac and respiratory function. The underlying cause of shock should also be addressed urgently. The physiological response of individual children to shock resuscitation varies and is often unpredictable. Therefore, repeated assessments of vital parameters are needed for taking appropriate decisions. Global indices of tissue oxygen delivery help in targeting therapies more accurately. Isotonic fluids form the cornerstone of treatment and the amount required for resuscitation is based on etiologies and therapeutic response. After resuscitation has been initiated, targeted history and clinical evaluation must be performed to ascertain the cause of shock and management of co-morbidities should be implemented simultaneously. While the management of shock can be protocol based, the treatment needs to be individualized depending on the suspected etiology and therapeutic response particularly for children who suffer from congenital heart disease.
Applications of bioenergetics models to fish ecology and management: where do we go from here?
Hansen, Michael J.; Boisclair, Daniel; Brandt, Stephen B.; Hewett, Steven W.; Kitchell, James F.; Lucas, Martyn C.; Ney, John J.
1993-01-01
Papers and panel discussions given during a 1992 symposium on bioenergetics models are summarized. Bioenergetics models have been applied to a variety of research and management questions related to fish stocks, populations, food webs, and ecosystems. Applications include estimates of the intensity and dynamics of predator-prey interactions, nutrient cycling within aquatic food webs of varying trophic structure, and food requirements of single animals, whole populations, and communities of fishes. As tools in food web and ecosystem applications, bioenergetics models have been used to compare forage consumption by salmonid predators across the Laurentian Great Lakes for single populations and whole communities, and to estimate the growth potential of pelagic predators in Chesapeake Bay and Lake Ontario. Some critics say that bioenergetics models lack sufficient detail to produce reliable results in such field applications, whereas others say that the models are too complex to be useful tools for fishery managers. Nevertheless, bioenergetics models have achieved notable predictive successes. Improved estimates are needed for model parameters such as metabolic costs of activity, and more complete studies are needed of the bioenergetics of larval and juvenile fishes. Future research on bioenergetics should include laboratory and field measurements of key model parameters such as weight-dependent maximum consumption, respiration and activity, and thermal habitats actually occupied by fish. Future applications of bioenergetics models to fish populations also depend on accurate estimates of population sizes and survival rates.
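The mass-balance core common to these models can be sketched in a few lines: energy consumed is partitioned into respiration, waste losses, and growth. The loss fractions below are illustrative constants, whereas operational models express them as functions of fish weight, temperature, and ration.

```python
# A minimal sketch (illustrative coefficients, not a calibrated model): the mass-balance
# core of a fish bioenergetics model, growth = consumption - (respiration + wastes).
def daily_growth(consumption_j, resp_frac=0.3, egestion_frac=0.15,
                 excretion_frac=0.07, sda_frac=0.1):
    """Energy (J/day) allocated to growth after metabolic and waste losses."""
    losses = consumption_j * (resp_frac + egestion_frac + excretion_frac + sda_frac)
    return consumption_j - losses

print(daily_growth(10000.0))   # J/day available for growth at an assumed ration
```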
Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.
Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing
2009-06-01
Methods of image encryption based on the fractional Fourier transform have an inherent flaw in security. We show that these schemes have the deficiency that, for several reasons, one group of encryption keys corresponds to many groups of keys that decrypt the encrypted image correctly. In some schemes, such as the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)], many factors contribute to these deficiencies. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method using an extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
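A minimal sketch of the key-management idea, using RSA-OAEP from the Python cryptography package: the parameters that define the random phase masks (represented here by stand-in seeds and transform orders) are serialized, encrypted with the receiver's public key, and recovered with the private key. The DRPE image encryption itself is not shown.

```python
# A minimal sketch (illustrative only): protect the random-phase key parameters of a
# DRPE-style scheme by encrypting them with RSA-OAEP, as in the described key management.
import json
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Stand-in for the DRPE key: seeds of the two random phase masks and transform orders.
drpe_key = json.dumps({"seed1": 12345, "seed2": 67890, "orders": [0.7, 1.3]}).encode()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encoded_key = public_key.encrypt(drpe_key, oaep)      # sent with the encrypted image
recovered = private_key.decrypt(encoded_key, oaep)    # receiver retrieves key parameters
assert recovered == drpe_key
```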
NASA Astrophysics Data System (ADS)
Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula
2016-12-01
We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for the full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf internal CO2 concentration to external CO2 and the parameter connecting relative humidity and soil dryness. Effectively the seasonal tuning transferred water from soil moisture into ET, and daily and half-hourly tunings reversed this process. The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use efficiency. When compared to the seasonal tuning, the daily tuning is worse on the seasonal scale. However, daily parametrisation reproduced the observations for the average diurnal cycle best, except for the GPP for the Sodankylä validation period, where half-hourly tuned parameters were better. In general, the daily tuning provided the largest reduction in model-data mismatch. The model's response to drought was unaffected by our parametrisations, and further studies are needed into enhancing the dry response in JSBACH.
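For readers unfamiliar with the adaptive Metropolis (AM) algorithm used for the tuning, the sketch below shows its essential mechanics on a toy two-parameter posterior: a random-walk Metropolis sampler whose proposal covariance is periodically re-estimated from the chain history. It is not the JSBACH calibration itself.

```python
# A minimal sketch of an adaptive Metropolis sampler (Haario-style covariance
# adaptation) on a toy two-parameter posterior; not the JSBACH setup.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):                       # toy target: correlated Gaussian
    cov = np.array([[1.0, 0.6], [0.6, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

n_iter, d = 20000, 2
chain = np.zeros((n_iter, d))
lp = log_post(chain[0])
prop_cov = 0.1 * np.eye(d)

for i in range(1, n_iter):
    if i > 1000 and i % 100 == 0:          # adapt the proposal from the chain history
        prop_cov = 2.4 ** 2 / d * np.cov(chain[:i].T) + 1e-8 * np.eye(d)
    proposal = rng.multivariate_normal(chain[i - 1], prop_cov)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        chain[i], lp = proposal, lp_prop
    else:
        chain[i] = chain[i - 1]

print(chain[5000:].mean(axis=0))           # posterior mean after burn-in
```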
Sizing Power Components of an Electrically Driven Tail Cone Thruster and a Range Extender
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Bowman, Cheryl; Jankovsky, Amy
2016-01-01
The aeronautics industry has been challenged on many fronts to increase efficiency, reduce emissions, and decrease dependency on carbon-based fuels. This paper provides an overview of the turboelectric and hybrid electric technologies being developed under NASA's Advanced Air Transportation Technology (AATT) Project and discusses how these technologies can impact vehicle design. The discussion includes an overview of key hybrid electric studies and technology investments, the approach to making informed investment decisions based on key performance parameters and mission studies, and the power system architectures for two candidate aircraft. Finally, the power components for a single-aisle turboelectric aircraft with an electrically driven tail cone thruster and for a hybrid-electric nine-passenger aircraft with a range extender are parametrically sized, and the sensitivity of these components to key parameters is presented.
Research progress of on-the-go soil parameter sensors based on NIRS
NASA Astrophysics Data System (ADS)
An, Xiaofei; Meng, Zhijun; Wu, Guangwei; Guo, Jianhua
2014-11-01
Both the ever-increasing prices of fertilizer and growing ecological concern over chemical run-off into sources of drinking water have brought the issues of precision agriculture and site-specific management to the forefront of present technological development within agriculture and ecology. Soil is an important and basic element in agricultural production, and acquisition of soil information plays an important role in precision agriculture. The soil parameters of interest include soil total nitrogen, phosphorus, potassium, soil organic matter, soil moisture, electrical conductivity and pH value. Rapid in-field acquisition of these soil physical and chemical parameters is one of the most important research directions, and real-time monitoring of soil parameters is also the trend of future development in precision agriculture. While developments in precision agriculture and site-specific management procedures have made significant in-roads on these issues and many researchers have developed effective means to determine soil properties, routinely obtaining robust on-the-go measurements of soil properties which are reliable enough to drive effective fertilizer application remains a challenge. NIRS technology provides a new method to obtain soil parameters with the advantages of low cost and rapid measurement. In this paper, research progress on on-the-go soil spectral sensors in China and abroad is reviewed and analyzed. For any on-the-go soil spectral sensor to be successful, the sensing system needs to perform well on at least six key indexes: detection limit, specificity, robustness, accuracy, cost and ease of use. Both the research status and the remaining problems are discussed. Finally, considering the national conditions of China, development trends for on-the-go soil spectral sensors are proposed. In the future, on-the-go soil spectral sensors that are sufficiently reliable and sensitive and capable of continuous detection are expected to become popular in precision agriculture.
Finite frequency shear wave splitting tomography: a model space search approach
NASA Astrophysics Data System (ADS)
Mondal, P.; Long, M. D.
2017-12-01
Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ˜50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
ENHANCING THE STABILITY OF POROUS CATALYSTS WITH SUPERCRITICAL REACTION MEDIA. (R826034)
Adsorption/desorption and pore-transport are key parameters influencing the activity and product selectivity in porous catalysts. With conventional reaction media (gas or liquid phase), one of these parameters is generally favorable while the other is not. For instance, while ...
Induced unconventional superconductivity on the surface states of Bi2Te3 topological insulator.
Charpentier, Sophie; Galletti, Luca; Kunakova, Gunta; Arpaia, Riccardo; Song, Yuxin; Baghdadi, Reza; Wang, Shu Min; Kalaboukhov, Alexei; Olsson, Eva; Tafuri, Francesco; Golubev, Dmitry; Linder, Jacob; Bauch, Thilo; Lombardi, Floriana
2017-12-08
Topological superconductivity is central to a variety of novel phenomena involving the interplay between topologically ordered phases and broken-symmetry states. The key ingredient is an unconventional order parameter, with an orbital component containing a chiral px + ipy wave term. Here we present phase-sensitive measurements, based on the quantum interference in nanoscale Josephson junctions, realized by using Bi2Te3 topological insulator. We demonstrate that the induced superconductivity is unconventional and consistent with a sign-changing order parameter, such as a chiral px + ipy component. The magnetic field pattern of the junctions shows a dip at zero externally applied magnetic field, which is an incontrovertible signature of the simultaneous existence of 0 and π coupling within the junction, inherent to a non-trivial order parameter phase. The nano-textured morphology of the Bi2Te3 flakes, and the dramatic role played by thermal strain are the surprising key factors for the display of an unconventional induced order parameter.
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.
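The combination of functional principal components with a variance-based sensitivity measure can be sketched as below: model output curves from sampled parameters are reduced to component scores, and a crude first-order index is computed for each parameter on the leading score. The dynamic model and sampling ranges are toy assumptions, not the insulin-signalling model.

```python
# A minimal sketch (toy model): combine PCA of time-resolved outputs with a crude
# variance-based sensitivity measure on the leading component scores.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 100)
n = 2000
params = rng.uniform([0.5, 0.1, 0.0], [2.0, 1.0, 1.0], size=(n, 3))  # k1, k2, k3

# Toy dynamic output: each parameter shapes the curve differently.
Y = params[:, [0]] * np.exp(-params[:, [1]] * t) + params[:, [2]] * np.sin(t)

Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U[:, :2] * S[:2]                 # scores on the first two "functional" components

def first_order_index(x, y, bins=20):
    """Crude Si: variance of conditional means of y given binned x, over Var(y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for j in range(3):
    print(f"parameter {j}:", first_order_index(params[:, j], scores[:, 0]))
```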
Bardhan, Jaydeep P; Knepley, Matthew G; Brune, Peter
2015-01-01
In this paper, we present an exact, infinite-series solution to Lorentz nonlocal continuum electrostatics for an arbitrary charge distribution in a spherical solute. Our approach relies on two key steps: (1) re-formulating the PDE problem using boundary-integral equations, and (2) diagonalizing the boundary-integral operators using the fact that their eigenfunctions are the surface spherical harmonics. To introduce this uncommon approach for calculations in separable geometries, we first re-derive Kirkwood's classic results for a protein surrounded concentrically by a pure-water ion-exclusion (Stern) layer and then a dilute electrolyte, which is modeled with the linearized Poisson-Boltzmann equation. The eigenfunction-expansion approach provides a computationally efficient way to test some implications of nonlocal models, including estimating the reasonable range of the nonlocal length-scale parameter λ. Our results suggest that nonlocal solvent response may help to reduce the need for very high dielectric constants in calculating pH-dependent protein behavior, though more sophisticated nonlocal models are needed to resolve this question in full. An open-source MATLAB implementation of our approach is freely available online.
Machine learning plus optical flow: a simple and sensitive method to detect cardioactive drugs
NASA Astrophysics Data System (ADS)
Lee, Eugene K.; Kurokawa, Yosuke K.; Tu, Robin; George, Steven C.; Khine, Michelle
2015-07-01
Current preclinical screening methods do not adequately detect cardiotoxicity. Using human induced pluripotent stem cell-derived cardiomyocytes (iPS-CMs), more physiologically relevant preclinical or patient-specific screening to detect potential cardiotoxic effects of drug candidates may be possible. However, one of the persistent challenges for developing a high-throughput drug screening platform using iPS-CMs is the need to develop a simple and reliable method to measure key electrophysiological and contractile parameters. To address this need, we have developed a platform that combines machine learning paired with brightfield optical flow as a simple and robust tool that can automate the detection of cardiomyocyte drug effects. Using three cardioactive drugs of different mechanisms, including those with primarily electrophysiological effects, we demonstrate the general applicability of this screening method to detect subtle changes in cardiomyocyte contraction. Requiring only brightfield images of cardiomyocyte contractions, we detect changes in cardiomyocyte contraction comparable to - and even superior to - fluorescence readouts. This automated method serves as a widely applicable screening tool to characterize the effects of drugs on cardiomyocyte function.
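A minimal sketch of the optical-flow part of such a pipeline, using OpenCV's Farneback method on synthetic frames: the mean flow magnitude per frame pair gives a contraction-like trace whose summary statistics could then be passed to any classifier. The synthetic "beating" videos are illustrative stand-ins for brightfield recordings.

```python
# A minimal sketch (synthetic frames, not iPS-CM recordings): quantify contraction-like
# motion with dense optical flow; features from this trace could feed any classifier.
import numpy as np
import cv2

def fake_beating_video(amp, n_frames=40, size=64):
    """Synthetic frames of a bright spot oscillating with a given amplitude."""
    yy, xx = np.mgrid[0:size, 0:size]
    frames = []
    for t in range(n_frames):
        cx = size / 2 + amp * np.sin(2 * np.pi * t / 20.0)
        img = 255 * np.exp(-((xx - cx) ** 2 + (yy - size / 2) ** 2) / 50.0)
        frames.append(img.astype(np.uint8))
    return frames

def motion_trace(frames):
    """Mean optical-flow magnitude between consecutive frames (a contraction proxy)."""
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    return np.array(mags)

baseline = motion_trace(fake_beating_video(6.0))
treated = motion_trace(fake_beating_video(2.0))
print(baseline.mean(), treated.mean())   # reduced motion would suggest a drug effect
```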
The next generation in aircraft protection against advanced MANPADS
NASA Astrophysics Data System (ADS)
Chapman, Stuart
2014-10-01
This paper discusses the advanced and novel technologies and underlying systems capabilities that Selex ES has applied during the development, test and evaluation of the twin head Miysis DIRCM System in order to ensure that it provides the requisite levels of protection against the latest, sophisticated all-aspect IR MANPADS. The importance of key performance parameters, including the fundamental need for "spherical" coverage, rapid time to energy-on-target, laser tracking performance and radiant intensity on seeker dome is covered. It also addresses the approach necessary to ensure that the equipment is suited to all air platforms from the smallest helicopters to large transports, while also ensuring that it achieves an inherent high reliability and an ease of manufacture and repair such that a step change in through-life cost in comparison to previous generation systems can be achieved. The benefits and issues associated with open architecture design are also considered. Finally, the need for extensive test and evaluation at every stage, including simulation, laboratory testing, platform and target dynamic testing in a System Integration Laboratory (SIL), flight trial, missile live-fire, environmental testing and reliability testing is also described.
Fast Printing and In-Situ Morphology Observation of Organic Photovoltaics using Slot-Die Coating
NASA Astrophysics Data System (ADS)
Liu, Feng; Ferdous, Sunzida; Wang, Cheng; Hexamer, Alexander; Russell, Thomas; Cheng Wang Collaboration; Thomas Russell Team
2014-03-01
The solvent-processibility of polymer semiconductors is a key advantage for the fabrication of large area, organic bulk-heterojunction (BHJ) photovoltaic devices. Most reported power conversion efficiencies (PCE) are based on small active areas, fabricated by spin-coating technique. In general, this does not reflect device fabrication in an industrial setting. To realize commercial viability, devices need to be fabricated in a roll-to-roll fashion. The evolution of the morphology associated with different processing parameters, like solvent choice, concentration and temperature, needs to be understood and controlled. We developed a mini slot-die coater, to fabricate BHJ devices using various low band gap polymers mixed with phenyl-C71-butyric acid methyl ester (PCBM). Solvent choice, processing additives, coating rate and coating temperatures were used to control the final morphology. Efficiencies comparable to lab-setting spin-coated devices are obtained. The evolution of the morphology was monitored by in situ scattering measurements, detecting the onset of the polymer chain packing in solution that led to the formation of a fibrillar network in the film.
An online peak extraction algorithm for ion mobility spectrometry data.
Kopczynski, Dominik; Rahmann, Sven
2015-01-01
Ion mobility (IM) spectrometry (IMS), coupled with multi-capillary columns (MCCs), has been gaining importance for biotechnological and medical applications because of its ability to detect and quantify volatile organic compounds (VOC) at low concentrations in the air or in exhaled breath at ambient pressure and temperature. Ongoing miniaturization of spectrometers creates the need for reliable data analysis on-the-fly in small embedded low-power devices. We present the first fully automated online peak extraction method for MCC/IMS measurements consisting of several thousand individual spectra. Each individual spectrum is processed as it arrives, removing the need to store the measurement before starting the analysis, as is currently the state of the art. Thus the analysis device can be an inexpensive low-power system such as the Raspberry Pi. The key idea is to extract one-dimensional peak models (with four parameters) from each spectrum and then merge these into peak chains and finally two-dimensional peak models. We describe the different algorithmic steps in detail and evaluate the online method against state-of-the-art peak extraction methods.
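The per-spectrum step can be sketched as fitting a four-parameter peak model to each arriving spectrum and merging nearby peak positions into chains. A plain Gaussian-with-offset is used below for illustration; the published method uses its own peak descriptor and merging rules.

```python
# A minimal sketch (generic peak shape, not the paper's exact descriptor): fit a
# four-parameter model to each incoming spectrum, then chain peaks across spectra.
import numpy as np
from scipy.optimize import curve_fit

def peak(t, mu, sigma, height, offset):      # four-parameter 1D peak model
    return offset + height * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
drift = np.linspace(0, 20, 400)
chains = []                                  # each chain: list of (spectrum_idx, mu)

for k in range(50):                          # spectra arrive one by one (online)
    truth = peak(drift, 8.0 + 0.01 * k, 0.4, 1.0, 0.05)
    spec = truth + rng.normal(0, 0.02, drift.size)
    p0 = [drift[np.argmax(spec)], 0.5, spec.max(), spec.min()]
    params, _ = curve_fit(peak, drift, spec, p0=p0)
    mu = params[0]
    # Merge into an existing chain if the position is close, else start a new chain.
    for chain in chains:
        if abs(chain[-1][1] - mu) < 0.2:
            chain.append((k, mu))
            break
    else:
        chains.append([(k, mu)])

print(len(chains), len(chains[0]))           # ideally one chain spanning all spectra
```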
Sensitivity of liquid clouds to homogenous freezing parameterizations.
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-03-16
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures. Key points: homogeneous freezing may be significant as warm as -30°C; homogeneous freezing should not be represented by a threshold approximation; and an improved parameterization of homogeneous ice nucleation is needed.
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad
2016-01-01
The National Aeronautics and Space Administration's (NASA) technology development roadmaps provide guidance to focus technological development in areas that enable crewed exploration missions beyond low-Earth orbit. Specifically, the technology area roadmap on human health, life support and habitation systems describes the need for life support system (LSS) technologies that can improve reliability and in-flight maintainability within a minimally-sized package while enabling a high degree of mission autonomy. To address the needs outlined by the guiding technology area roadmap, NASA's Advanced Exploration Systems (AES) Program has commissioned the Life Support Systems (LSS) Project to lead technology development in the areas of water recovery and management, atmosphere revitalization, and environmental monitoring. A notional exploration LSS architecture derived from the International Space Station (ISS) has been developed and serves as the developmental basis for these efforts. Functional requirements and key performance parameters that guide the exploration LSS technology development efforts are presented and discussed. Areas where LSS flight operations aboard the ISS afford lessons learned that are relevant to exploration missions are highlighted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunawan, Budi; Neary, Vincent Sinclair; Mortensen, Josh
Hydrokinetic energy from flowing water in open channels has the potential to support local electricity needs with lower regulatory or capital investment than impounding water with more conventional means. MOU agencies involved in federal hydropower development have identified the need to better understand the opportunities for hydrokinetic (HK) energy development within existing canal systems that may already have integrated hydropower plants. This document provides an overview of the main considerations, tools, and assessment methods for implementing field tests in an open-channel water system to characterize current energy converter (CEC) device performance and hydrodynamic effects. It describes open channel processes relevant to an HK site and the pertinent analyses needed to guide siting and CEC layout design, with the goal of streamlining the evaluation process and reducing the risk of interfering with existing uses of the site. This document outlines key site parameters of interest and effective tools and methods for measurement and analysis, with examples drawn from the Roza Main Canal in Yakima, WA to illustrate a site application.
A Parametric Study on Using Active Debris Removal for LEO Environment Remediation
NASA Technical Reports Server (NTRS)
2010-01-01
Recent analyses on the instability of the orbital debris population in the low Earth orbit (LEO) region and the collision between Iridium 33 and Cosmos 2251 have reignited the interest in using active debris removal (ADR) to remediate the environment. There are, however, monumental technical, resource, operational, legal, and political challenges in making economically viable ADR a reality. Before a consensus on the need for ADR can be reached, a careful analysis of its effectiveness must be conducted. The goal is to demonstrate the need and feasibility of using ADR to better preserve the future environment and to guide its implementation to maximize the benefit-to-cost ratio. This paper describes a new sensitivity study on using ADR to stabilize the future LEO debris environment. The NASA long-term orbital debris evolutionary model, LEGEND, is used to quantify the effects of several key parameters, including target selection criteria/constraints and the starting epoch of ADR implementation. Additional analyses on potential ADR targets among the currently existing satellites and the benefits of collision avoidance maneuvers are also included.
Fuzzy logic control system to provide autonomous collision avoidance for Mars rover vehicle
NASA Technical Reports Server (NTRS)
Murphy, Michael G.
1990-01-01
NASA is currently involved with planning unmanned missions to Mars to investigate the terrain and process soil samples in advance of a manned mission. A key issue involved in unmanned surface exploration on Mars is that of supporting autonomous maneuvering since radio communication involves lengthy delays. It is anticipated that specific target locations will be designated for sample gathering. In maneuvering autonomously from a starting position to a target position, the rover will need to avoid a variety of obstacles such as boulders or troughs that may block the shortest path to the target. The physical integrity of the rover needs to be maintained while minimizing the time and distance required to attain the target position. Fuzzy logic lends itself well to building reliable control systems that function in the presence of uncertainty or ambiguity. The following major issues are discussed: (1) the nature of fuzzy logic control systems and software tools to implement them; (2) collision avoidance in the presence of fuzzy parameters; and (3) techniques for adaptation in fuzzy logic control systems.
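A minimal sketch of the fuzzy-control idea (illustrative membership functions and rules, not the rover controller discussed here): fuzzify obstacle distance and bearing, apply two avoidance rules, and defuzzify to a steering command.

```python
# A minimal sketch of fuzzy obstacle avoidance with illustrative memberships and rules.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def steering_command(obstacle_dist_m, obstacle_bearing_deg):
    near = tri(obstacle_dist_m, 0.0, 0.0, 5.0)            # "obstacle is near"
    left = tri(obstacle_bearing_deg, -90.0, -45.0, 0.0)    # "obstacle is to the left"
    right = tri(obstacle_bearing_deg, 0.0, 45.0, 90.0)     # "obstacle is to the right"
    # Rules: near & left -> steer right (+30 deg); near & right -> steer left (-30 deg).
    w_steer_right, w_steer_left = min(near, left), min(near, right)
    if w_steer_right + w_steer_left == 0.0:
        return 0.0                                         # no correction needed
    return (w_steer_right * 30.0 + w_steer_left * -30.0) / (w_steer_right + w_steer_left)

print(steering_command(2.0, -20.0))                        # obstacle near and to the left
```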
NASA Astrophysics Data System (ADS)
Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.
2014-05-01
Material characterization is one of the key steps when conductive polymers are developed. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix using melt mixing influences the final composite properties. Compounding thus becomes a trial-and-error process that consumes large amounts of material, time and money before competitive composites are obtained. Traditional methods of electrical conductivity characterization include compression and injection molding, and both require extra equipment and moulds to produce standard bars. This study aims to investigate the accuracy of the data obtained from the absolute resistance recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile and CNT percentage). Compounds with different percentages of multi-walled carbon nanotubes (MWCNTs) and polycarbonate have been characterized during and after extrusion. The measurements, on-line resistance and off-line resistivity, showed parallel responses and good reproducibility, confirming the validity of the method. The significance of these results stems from the fact that we are able to measure on-line resistance and to change compounding parameters during production to achieve reference values, reducing production and testing costs and ensuring material quality. This method also removes errors that can arise in test bar preparation, showing better correlation with compounding parameters.
Daoud, Salima; Chakroun-Feki, Nozha; Sellami, Afifa; Ammar-Keskes, Leila; Rebai, Tarek
2016-01-01
Semen analysis is a key part of male infertility investigation. The necessity of quality management implementation in the andrology laboratory has been recognized in order to ensure the reliability of its results. The aim of this study was to evaluate intra- and inter-individual variability in the assessment of semen parameters in our laboratory through a quality control programme. Four participants from the laboratory with different experience levels have participated in this study. Semen samples of varying quality were assessed for sperm motility, concentration and morphology and the results were used to evaluate inter-participant variability. In addition, replicates of each semen sample were analyzed to determine intra-individual variability for semen parameters analysis. The average values of inter-participant coefficients of variation for sperm motility, concentration and morphology were 12.8%, 19.8% and 48.9% respectively. The mean intra-participant coefficients of variation were, respectively, 6.9%, 12.3% and 42.7% for sperm motility, concentration and morphology. Despite some random errors of under- or overestimation, the overall results remained within the limits of acceptability for all participants. Sperm morphology assessment was particularly influenced by the participant's level of experience. The present data emphasize the need for appropriate training of the laboratory staff and for regular participation in internal quality control programmes in order to improve the reliability of laboratory results.
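The variability metric involved is simply the coefficient of variation, CV = 100 × standard deviation / mean; a minimal sketch with made-up motility readings is shown below.

```python
# A minimal sketch (made-up counts): coefficient of variation used for the intra- and
# inter-participant comparisons, CV = 100 * standard deviation / mean.
import numpy as np

motility_by_participant = np.array([52.0, 48.0, 55.0, 45.0])   # % motile, same sample
cv = 100.0 * motility_by_participant.std(ddof=1) / motility_by_participant.mean()
print(round(cv, 1))   # inter-participant CV for this sample
```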
Dosimetric evaluation of intrafractional tumor motion by means of a robot driven phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richter, Anne; Wilbert, Juergen; Flentje, Michael
2011-10-15
Purpose: The aim of the work was to investigate the influence of intrafractional tumor motion on the accumulated (absorbed) dose. The accumulated dose was determined by means of calculations and measurements with a robot driven motion phantom. Methods: Different motion scenarios and compensation techniques were realized in a phantom study to investigate the influence of motion on image acquisition, dose calculation, and dose measurement. The influence of motion on the accumulated dose was calculated by employing two methods (a model based and a voxel based method). Results: Tumor motion resulted in a blurring of steep dose gradients and a reduction of dose at the periphery of the target. A systematic variation of motion parameters allowed the determination of the main influence parameters on the accumulated dose. The key parameters with the greatest influence on dose were the mean amplitude and the pattern of motion. Investigations on necessary safety margins to compensate for dose reduction have shown that smaller safety margins are sufficient, if the developed concept with optimized margins (OPT concept) was used instead of the standard internal target volume (ITV) concept. Both calculation methods were a reasonable approximation of the measured dose with the voxel based method being in better agreement with the measurements. Conclusions: Further evaluation of available systems and algorithms for dose accumulation are needed to create guidelines for the verification of the accumulated dose.
Future Scenarios for Plant Virus Pathogens as Climate Change Progresses.
Jones, R A C
2016-01-01
Knowledge of how climate change is likely to influence future virus disease epidemics in cultivated plants and natural vegetation is of great importance to both global food security and natural ecosystems. However, obtaining such knowledge is hampered by the complex effects of climate alterations on the behavior of diverse types of vectors and the ease by which previously unknown viruses can emerge. A review written in 2011 provided a comprehensive analysis of available data on the effects of climate change on virus disease epidemics worldwide. This review summarizes its findings and those of two earlier climate change reviews and focuses on describing research published on the subject since 2011. It describes the likely effects of the full range of direct and indirect climate change parameters on hosts, viruses and vectors, virus control prospects, and the many information gaps and deficiencies. Recently, there has been encouraging progress in understanding the likely effects of some climate change parameters, especially over the effects of elevated CO2, temperature, and rainfall-related parameters, upon a small number of important plant viruses and several key insect vectors, especially aphids. However, much more research needs to be done to prepare for an era of (i) increasingly severe virus epidemics and (ii) increasing difficulties in controlling them, so as to mitigate their detrimental effects on future global food security and plant biodiversity. © 2016 Elsevier Inc. All rights reserved.
Needleless Electrospinning Experimental Study and Nanofiber Application in Semiconductor Packaging
NASA Astrophysics Data System (ADS)
Sun, Tianwei
Electronics, especially mobile electronics such as smart phones, tablet PCs, notebooks and digital cameras, are undergoing rapid development nowadays and have thoroughly changed our lives. With the requirements of more transistors, higher power, smaller size, lighter weight and even bendability, thermal management of these devices has become one of the key challenges. Compared to active heat management systems, the heat pipe, which is a passive fluidic system, is considered promising to solve this problem. However, traditional heat pipes have size, weight and capillary limitations. Thus a new type of heat pipe with smaller size, lighter weight and higher capillary pressure is needed. Nanofibers have been shown to possess superior properties and have been applied in multiple areas. This study discusses the possibility of applying nanofibers in heat pipes as a new wick structure. In this study, a needleless electrospinning device with a high production rate was built onsite to systematically investigate the effect of processing parameters on fiber properties as well as to generate nanofiber mats to evaluate their capability in electronics cooling. Polyethylene oxide (PEO) and polyvinyl alcohol (PVA) nanofibers were generated. A tensiometer was used for wettability measurement. The results show that independent parameters including spinneret type, working distance, solution concentration and polymer type are strongly correlated with fiber morphology compared to other parameters. The results also show that the fabricated nanofiber mat has a high capillary pressure.
Thin Ice Clouds in Far IR Experiment: TICFIRE
NASA Astrophysics Data System (ADS)
Blanchet, Jean-Pierre
The TICFIRE mission concept developed with the support of the Canadian Space Agency aims: 1) to improve measurements of water-vapor concentration in the low limit, where cold regions are most sensitive and 2) to determine the contribution of Thin Ice Clouds (TIC) to the energy balance and the role of their microphysical properties on atmospheric cooling. TICFIRE is a process-oriented mission on a micro-satellite platform dedicated to observing key parameters of TIC forming in the cold regions of the Poles and, globally, in the upper troposphere. It locates cloud top profiles at the limb and measures at nadir the corresponding upwelling radiance of the atmosphere directly in the thermal window and in the Far Infrared (FIR) spectrum over cold geographical regions, precisely where most of the atmospheric thermal cooling takes place. Due to technological limitations, the FIR spectrum (17 to 50 µm) is not regularly monitored by conventional sensors despite its major importance. This deficiency in key data also impacts operational weather forecasting. TICFIRE will provide on a global scale a needed contribution of calibrated radiances for assimilation near the maximum of IR emission to improve weather forecasts. Therefore, TICFIRE is a science-driven mission with a strong operational component.
Integrated Human-in-the-Loop Ground Testing - Value, History, and the Future
NASA Technical Reports Server (NTRS)
Henninger, Donald L.
2016-01-01
Systems for very long-duration human missions to Mars will be designed to operate reliably for many years and many of these systems will never be returned to Earth. The need for high reliability is driven by the requirement for safe functioning of remote, long-duration crewed systems and also by unsympathetic abort scenarios. Abort from a Mars mission could be as long as 450 days to return to Earth. The key to developing a human-in-the-loop architecture is a development process that allows for a logical sequence of validating successful development in a stepwise manner, with assessment of key performance parameters (KPPs) at each step; especially important are KPPs for technologies evaluated in a full systems context with human crews on Earth and on space platforms such as the ISS. This presentation will explore the implications of such an approach to technology development and validation including the roles of ground and space-based testing necessary to develop a highly reliable system for long duration human exploration missions. Historical development and systems testing from Mercury to the International Space Station (ISS) to ground testing will be reviewed. Current work as well as recommendations for future work will be described.
Effect of the key mixture parameters on shrinkage of reactive powder concrete.
Ahmad, Shamsad; Zubair, Ahmed; Maslehuddin, Mohammed
2014-01-01
Reactive powder concrete (RPC) mixtures are reported to have excellent mechanical and durability characteristics. However, such concrete mixtures having high amount of cementitious materials may have high early shrinkage causing cracking of concrete. In the present work, an attempt has been made to study the simultaneous effects of three key mixture parameters on shrinkage of the RPC mixtures. Considering three different levels of the three key mixture factors, a total of 27 mixtures of RPC were prepared according to a 3³ factorial experiment design. The specimens belonging to all 27 mixtures were monitored for shrinkage at different ages over a total period of 90 days. The test results were plotted to observe the variation of shrinkage with time and to see the effects of the key mixture factors. The experimental data pertaining to 90-day shrinkage were used to conduct analysis of variance to identify significance of each factor and to obtain an empirical equation correlating the shrinkage of RPC with the three key mixture factors. The rate of development of shrinkage at early ages was higher. The water to binder ratio was found to be the most prominent factor followed by cement content with the least effect of silica fume content.
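The factorial analysis can be sketched with standard tools: generate (or load) one response per factor combination, run an analysis of variance, and read off the coefficients of the empirical equation. The response values below are synthetic and the factor levels are assumed, not those of the study.

```python
# A minimal sketch (synthetic responses): three-factor ANOVA and a fitted empirical
# equation for 90-day shrinkage, mirroring the 3^3 factorial analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import product

rng = np.random.default_rng(0)
levels = {"wb": [0.12, 0.14, 0.16], "cem": [800, 900, 1000], "sf": [0.10, 0.15, 0.20]}
rows = []
for wb, cem, sf in product(*levels.values()):
    shrink = 400 + 1500 * wb + 0.05 * cem + 50 * sf + rng.normal(0, 5)  # toy response
    rows.append({"wb": wb, "cem": cem, "sf": sf, "shrinkage": shrink})
df = pd.DataFrame(rows)

model = smf.ols("shrinkage ~ wb + cem + sf", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))     # which factors are significant
print(model.params)                        # coefficients of the empirical equation
```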
Performance of device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhao, Qi; Ma, Xiongfeng
2016-07-01
Quantum key distribution provides information-theoretically-secure communication. In practice, device imperfections may jeopardise the system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of the device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice where the system is lossy, we further improve the key rate by taking into account the loss position information. From our numerical simulation, our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8 % .
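For orientation, a commonly quoted asymptotic device-independent key-rate bound under collective attacks (Acín et al., 2007) can be evaluated as below; the paper's tightened, loss-aware finite analysis is more involved than this sketch.

```python
# A minimal sketch: the asymptotic DI key-rate bound r >= 1 - h(Q) - h((1 + sqrt((S/2)^2 - 1))/2),
# with Q the error rate and S the CHSH value; not the paper's tightened analysis.
import math

def h(p):                                   # binary entropy
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def di_key_rate(qber, chsh_s):
    if chsh_s <= 2.0:                       # no Bell violation, no key
        return 0.0
    eve_term = h((1.0 + math.sqrt((chsh_s / 2.0) ** 2 - 1.0)) / 2.0)
    return max(0.0, 1.0 - h(qber) - eve_term)

print(di_key_rate(0.016, 2.7))              # example near the quoted 1.6% error rate
```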
A key factor to the spin parameter of uniformly rotating compact stars: crust structure
NASA Astrophysics Data System (ADS)
Qi, Bin; Zhang, Nai-Bo; Sun, Bao-Yuan; Wang, Shou-Yu; Gao, Jian-Hua
2016-04-01
We study the dimensionless spin parameter j ≡ cJ/(GM²) of different kinds of uniformly rotating compact stars, including traditional neutron stars, hyperonic neutron stars and hybrid stars, based on relativistic mean field theory and the MIT bag model. It is found that jmax ≈ 0.7, which had been suggested in traditional neutron stars, is sustained for hyperonic neutron stars and hybrid stars with M > 0.5 M⊙. Not the interior but rather the crust structure of the stars is a key factor to determine jmax for three kinds of selected compact stars. Furthermore, a universal formula j = 0.63(f/fK) - 0.42(f/fK)² + 0.48(f/fK)³ is suggested to determine the spin parameter at any rotational frequency f smaller than the Keplerian frequency fK.
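The quoted universal fit is straightforward to evaluate; the Keplerian frequency below is an assumed value used only for illustration.

```python
# A minimal sketch: evaluate the universal fit quoted above,
# j(f) = 0.63 (f/fK) - 0.42 (f/fK)^2 + 0.48 (f/fK)^3.
def spin_parameter(f, f_kepler):
    x = f / f_kepler
    return 0.63 * x - 0.42 * x ** 2 + 0.48 * x ** 3

print(spin_parameter(716.0, 1200.0))   # a 716 Hz spin against an assumed Keplerian frequency
```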
The selection criteria elements of X-ray optics system
NASA Astrophysics Data System (ADS)
Plotnikova, I. V.; Chicherina, N. V.; Bays, S. S.; Bildanov, R. G.; Stary, O.
2018-01-01
When designing new modifications of X-ray tomography systems, it is difficult to choose the right elements of the X-ray optical system. At present this problem is solved empirically, by selecting values of the corresponding parameters, such as the X-ray tube voltage, taking into account the thickness and type of the studied material. To reduce design time and labor, it is necessary to establish selection criteria and to determine the key parameters and characteristics of the elements. In this article, two main elements of the X-ray optical system, the X-ray tube and the X-ray radiation detector, are considered. Selection criteria for the elements, their key characteristics, the main parameter dependences, quality indicators and recommendations for choosing the elements of X-ray systems are presented.
Magnifications of Single and Dual Element Accommodative Intraocular Lenses: Paraxial Optics Analysis
Ale, Jit B; Manns, Fabrice; Ho, Arthur
2010-01-01
Purpose Using an analytical approach of paraxial optics, we evaluated the magnification of a model eye implanted with single-element (1E) and dual-element (2E) translating-optics accommodative intraocular lenses (AIOL) with an objective of understanding key control parameters relevant to their design. Potential clinical implications of the results arising from pseudophakic accommodation were also considered. Methods Lateral and angular magnifications in a pseudophakic model eye were analyzed using the matrix method of paraxial optics. The effects of key control parameters such as direction (forward or backward) and distance (0 to 2 mm) of translation, power combinations of the 2E-AIOL elements (front element power range +20.0 D to +40.0 D), and amplitudes of accommodation (0 to 4 D) were tested. Relative magnification, defined as the ratio of the retinal image size of the accommodated eye to that of unaccommodated phakic (rLM1) or pseudophakic (rLM2) model eyes, was computed to determine how retinal image size changes with pseudophakic accommodation. Results Both lateral and angular magnifications increased with increased power of the front element in 2E-AIOL and amplitude of accommodation. For a 2E-AIOL with front element power of +35 D, rLM1 and rLM2 increased by 17.0% and 16.3%, respectively, per millimetre of forward translation of the element, compared to the magnification at distance focus (unaccommodated). These changes correspond to a change of 9.4% and 6.5% per dioptre of accommodation, respectively. Angular magnification also increased with pseudophakic accommodation. 1E-AIOLs produced consistently less magnification than 2E-AIOLs. Relative retinal image size decreased at a rate of 0.25% with each dioptre of accommodation in the phakic model eye. The position of the image space nodal point shifted away from the retina (towards the cornea) with both phakic and pseudophakic accommodation. Conclusion Power of the mobile element, and amount and direction of the translation (or the achieved accommodative amplitude) are important parameters in determining the magnifications of the AIOLs. The results highlight the need for caution in the prescribing of AIOL. Aniso-accommodation or inter-ocular differences in AIOL designs (or relative to the natural lens of the contralateral eye) may introduce dynamic aniseikonia and consequent impaired binocular vision. Nevertheless, some designs, offering greater increases in magnification on accommodation, may provide enhanced near vision depending on patient needs. PMID:21054469
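The matrix method can be sketched with thin lenses in air: build the system matrix from refraction and translation matrices, locate the conjugate image plane where the B element vanishes, and read the lateral magnification from the A element. The element powers, separations and object distance below are illustrative, not the paper's model eye.

```python
# A minimal sketch (thin lenses in air, illustrative powers; not the paper's model eye):
# paraxial ray-transfer matrices to compare lateral magnification before and after a
# translation of the front element of a two-element AIOL.
import numpy as np

def thin_lens(power_diopters):
    return np.array([[1.0, 0.0], [-power_diopters, 1.0]])

def gap(d_m):
    return np.array([[1.0, d_m], [0.0, 1.0]])

def lateral_magnification(p_front, p_back, sep_m, obj_dist_m):
    """Magnification at the image plane conjugate to the object plane."""
    M0 = thin_lens(p_back) @ gap(sep_m) @ thin_lens(p_front) @ gap(obj_dist_m)
    s_img = -M0[0, 1] / M0[1, 1]          # image distance where the B element vanishes
    return M0[0, 0] + s_img * M0[1, 0]    # A element of the full conjugate matrix

m0 = lateral_magnification(+35.0, -15.0, 0.001, 0.40)   # baseline element separation
m1 = lateral_magnification(+35.0, -15.0, 0.002, 0.40)   # front element translated 1 mm
print(m0, m1, 100.0 * (m1 / m0 - 1.0))    # percent change in image size
```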
Gallego-Schmid, Alejandro; Jeswani, Harish Kumar; Mendoza, Joan Manuel F; Azapagic, Adisa
2018-06-01
Between 117 and 200 million kettles are used in the European Union (EU) every year. However, the full environmental impacts of kettles remain largely unknown. This paper presents a comprehensive life cycle assessment of conventional plastic and metallic kettles in comparison with eco-kettles. The results show that the use stage contributes 80% to the impacts. For this reason, the eco-kettle has over 30% lower environmental impacts due to a greater water efficiency and related lower energy consumption. These results have been extrapolated to the EU level to consider the implications for proposed eco-design regulations. For these purposes, the effects on the impacts of durability of kettles and improvements in their energy and water efficiency have been assessed as they have been identified as two key parameters in the proposed regulations. The results suggest that increasing the current average durability from 4.4 to seven years would reduce the impacts by less than 5%. Thus, improving durability is not a key issue for improving the environmental performance of kettles and does not justify the need for an eco-design regulation based exclusively on it. However, improvements in water and energy efficiency through eco-design can bring relevant environmental savings. Boiling the exact amount of water needed would reduce the impacts by around a third and using water temperature control by a further 2%-5%. The study has also considered the effects of reducing significantly the number of kettles in use after the UK (a large user of kettles) leaves the EU and reducing the excess water typically boiled by the consumer. Even under these circumstances, the environmental savings justify the development of a specific EU eco-design regulation for kettles. However, consumer engagement will be key to the implementation and achievement of the expected environmental benefits. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations, with a certain failure probability. Our results rely only on the ranges of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
NASA Astrophysics Data System (ADS)
Hassan, Waleed K.; Al-Assam, Hisham
2017-05-01
The main problem associated with using symmetric/asymmetric keys is how to securely store and exchange the keys between the parties over open networks, particularly in open environments such as cloud computing. Public Key Infrastructure (PKI) has provided a practical solution for session key exchange for many web services. The key limitation of the PKI solution is not only the need for a trusted third party (e.g. a certificate authority) but also the absence of a link between the data owner and the encryption keys. The latter is arguably more important where access to data needs to be linked to the identity of the owner. Currently available key exchange protocols depend on using trusted couriers or secure channels, which can be subject to man-in-the-middle and various other attacks. This paper proposes a new protocol for Key Exchange using Biometric Identity Based Encryption (KE-BIBE) that enables parties to securely exchange cryptographic keys even when an adversary is monitoring the communication channel between them. The proposed protocol combines biometrics with IBE in order to provide a secure way to access symmetric keys based on the identity of the users in unsecure environments. In the KE-BIBE protocol, the message is first encrypted by the data owner using a traditional symmetric key before migrating it to cloud storage. The symmetric key is then encrypted using the public biometrics of the users selected by the data owner to decrypt the message, based on Fuzzy Identity-Based Encryption. Only the selected users will be able to decrypt the message, by providing a fresh sample of their biometric data. The paper argues that the proposed solution eliminates the need for a key distribution centre in traditional cryptography. It also gives the data owner the power of fine-grained sharing of encrypted data by controlling who can access their data.
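The hybrid pattern described above (data encrypted with a traditional symmetric key, the symmetric key then wrapped for selected recipients) can be illustrated independently of the fuzzy-IBE step. The sketch below is a minimal illustration in Python, assuming the `cryptography` package; the `wrap_key_with_biometric_ibe` helper is a hypothetical stand-in for the biometric IBE operation described in the paper, not an implementation of it.

```python
# Minimal sketch of the hybrid scheme, assuming the `cryptography` package.
# The biometric-IBE wrapping is abstracted into a hypothetical placeholder.
from cryptography.fernet import Fernet

def wrap_key_with_biometric_ibe(sym_key: bytes, user_biometric_template: bytes) -> bytes:
    """Hypothetical placeholder for encrypting the symmetric key under a
    user's public biometric identity (Fuzzy IBE in the paper)."""
    raise NotImplementedError("fuzzy-IBE wrapping is outside this sketch")

def owner_prepares_message(plaintext: bytes, user_biometric_template: bytes):
    sym_key = Fernet.generate_key()                   # traditional symmetric key
    ciphertext = Fernet(sym_key).encrypt(plaintext)   # data encrypted before upload
    wrapped_key = wrap_key_with_biometric_ibe(sym_key, user_biometric_template)
    # ciphertext goes to cloud storage; wrapped_key accompanies it
    return ciphertext, wrapped_key
```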
A study of the 3D radiative transfer effect in cloudy atmospheres
NASA Astrophysics Data System (ADS)
Okata, M.; Teruyuki, N.; Suzuki, K.
2015-12-01
Evaluation of the effect of clouds in the atmosphere is a significant problem in Earth's radiation budget studies because of the large uncertainties in cloud microphysics and optical properties. In this situation, we still need more investigation of 3D cloud radiative transfer problems using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code implemented with various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e. forward and backward tracing routines, non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, and the DM-method for flux and the TMS-method for upward radiance (Nakajima and Tanaka, 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for constructing a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected the best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where a CPR profile is not available, by minimizing the deviation between the library MODIS parameters and those at the pixel. In this study, we used three cloud microphysical parameters as key parameters for the MIDPM, i.e. effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances and the patterns of the three cloud microphysical parameters in order to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
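The profile-matching step of the MIDPM can be pictured as a nearest-neighbour search over the three key parameters. The following is a minimal sketch under assumed variable names (a library of CPR-collocated profiles with their MODIS parameters, and the MODIS parameters at an off-nadir pixel); the normalisation and distance metric are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def select_best_profile(library_params, library_profiles, pixel_params):
    """Pick the library reflectivity profile whose MODIS parameters
    (effective radius, optical thickness, cloud-top temperature) deviate
    least from those at the off-nadir pixel.

    library_params:   (N, 3) MODIS parameters for CPR-collocated pixels
    library_profiles: (N, nbins) radar reflectivity factor profiles
    pixel_params:     (3,) MODIS parameters at the pixel lacking a CPR profile
    """
    # Normalise each parameter so the deviation is dimensionless (illustrative choice)
    scale = library_params.std(axis=0)
    deviation = np.linalg.norm((library_params - pixel_params) / scale, axis=1)
    best = np.argmin(deviation)
    return library_profiles[best], deviation[best]
```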
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Thoman, R.; Romanovsky, V. E.
2015-12-01
Permafrost is ground at or below freezing for at least two consecutive years. It currently occupies 80% of Alaska. Permafrost temperature and active layer thickness (ALT) are key climatic variables for monitoring permafrost conditions. Active layer thickness is the depth that the top layer of ground above the permafrost thaws each summer season and permafrost temperature is the temperature of the frozen permafrost under this active layer. Knowing permafrost conditions is key for those individuals working and living in Alaska and the Arctic. The results of climate models predict vast changes and potential permafrost degradation across Alaska and the Arctic. NOAA is working to implement its 2014 Arctic Action Plan and permafrost forecasting is a missing piece of this plan. The Alaska Center for Climate Assessment and Policy (ACCAP), using our webinar software and our diverse network of statewide stakeholder contacts, hosted a listening session to bring together a select group of key stakeholders. During this listening session the National Weather Service (NWS) and key permafrost researchers explained what is possible in the realm of permafrost forecasting and participants had the opportunity to discuss and share with the group (NWS, researchers, other stakeholders) what is needed for usable permafrost forecasting. This listening session aimed to answer the questions: Is permafrost forecasting needed? If so, what spatial scale is needed by stakeholders? What temporal scales do stakeholders need/want? Are there key times (winter, fall freeze-up, etc.) or locations (North Slope, key oil development areas, etc.) where forecasting would be most applicable and useful? Are there other considerations or priority needs we haven't thought of regarding permafrost forecasting? This presentation will present the results of that listening session.
Sensitivity of DIVWAG to Variations in Weather Parameters
1976-04-01
[Report form excerpt, truncated. Key words: DIVWAG; war game; simulation.] ... simulation of a Division Level War Game, to determine the significance of varying battlefield parameters, i.e., artillery parameters, troop and ... The only Red artillery weapons doing better in bad weather are the 130MM guns, but this statistic is tempered by the few casualties occurring in ...
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.
2017-12-01
Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes like spall and fragmentation in metals to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures are propagated through a material of interest. The parameters control the softening curve that the model relies on to determine fracture within each element in the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method used to explore how each parameter influences the modeled fracture and to determine the key model parameters that have the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.
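As a sketch of what a FAST study looks like in practice, the snippet below uses the SALib Python library (an assumption; the abstract does not name a toolkit) to generate the FAST sampling design and compute first-order and total sensitivity indices. The model call and parameter names are hypothetical stand-ins for a HOSS run returning a scalar quantity of interest.

```python
# Sketch of a FAST sensitivity study, assuming the SALib Python library.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["softening_slope", "tensile_strength", "fracture_energy"],  # illustrative names
    "bounds": [[0.5, 2.0], [50.0, 200.0], [10.0, 100.0]],
}

def run_model(x):
    # Hypothetical stand-in for a HOSS simulation with parameter vector x.
    return np.sum(x)

param_values = fast_sampler.sample(problem, 1000)   # FAST sampling design
Y = np.array([run_model(x) for x in param_values])
Si = fast.analyze(problem, Y)                        # first-order (S1) and total (ST) indices
print(Si["S1"], Si["ST"])
```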
NASA Astrophysics Data System (ADS)
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm is still prone to falling into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the rebuilt model using the estimated parameters shows that estimating only a few important ion channel parameters is sufficient to establish the model, and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
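To make the chaotic-inertia idea concrete, here is a minimal PSO sketch in which a logistic chaotic map modulates the inertia weight each iteration. The weighting scheme and the toy objective are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal PSO sketch with a logistic-chaotic-map-modulated inertia weight.
import numpy as np

def chaotic_pso(fitness, bounds, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    z = 0.7                                           # logistic-map state in (0, 1)
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)                       # logistic chaotic map
        w = 0.4 + 0.5 * z                             # chaos-modulated inertia weight (illustrative)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Usage: fit three parameters of a toy objective standing in for the
# trajectory error of a neuron model.
best, err = chaotic_pso(lambda p: np.sum((p - np.array([1.0, 2.0, 3.0])) ** 2),
                        bounds=[(-5, 5)] * 3)
```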
Key Performance Parameter Driven Technology Goals for Electric Machines and Power Systems
NASA Technical Reports Server (NTRS)
Bowman, Cheryl; Jansen, Ralph; Brown, Gerald; Duffy, Kirsten; Trudell, Jeffrey
2015-01-01
Transitioning aviation to low-carbon propulsion is one of the crucial strategic research thrusts and is a driver in the search for alternative propulsion systems for advanced aircraft configurations. This work requires multidisciplinary skills coming from multiple entities. The feasibility of scaling up various electric drive system technologies to meet the requirements of a large commercial transport is discussed in terms of key parameters. Functional requirements are identified that impact the power system design. A breakeven analysis is presented to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Practice Parameter for Child and Adolescent Forensic Evaluations
ERIC Educational Resources Information Center
Journal of the American Academy of Child & Adolescent Psychiatry, 2011
2011-01-01
This Parameter addresses the key concepts that differentiate the forensic evaluation of children and adolescents from a clinical assessment. There are ethical issues unique to the forensic evaluation, because the forensic evaluator's duty is to the person, court, or agency requesting the evaluation, rather than to the patient. The forensic…
Crystal growth of device quality GaAs in space
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1979-01-01
The optimization of space processing of GaAs is described. The detailed compositional, structural, and electronic characterization of GaAs on a macro- and microscale and the relationships between growth parameters and the properties of GaAs are among the factors discussed. The key parameters limiting device performance are assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.
2017-01-05
The key points of this presentation were to approach the problem of linking breakthrough curve shape (RP-CTRW transition distribution) to structural parameters from a Monte Carlo approach and to use the Monte Carlo analysis to determine any empirical error
[Technological development: a weak link in vaccine innovation in Brazil].
Homma, Akira; Martins, Reinaldo M; Jessouroum, Ellen; Oliva, Otavio
2003-01-01
In very recent years, the federal government has launched important initiatives meant to strengthen science, technology, and innovation in Brazil and thus enhance the results of technological innovation in key areas of the country's economy. Yet these initiatives have not been enough to reduce Brazil's heavy dependence on goods and technology from more developed nations. The article describes the current state of vaccination, production, and technological development of vaccines both internationally and nationally. Some thoughts are also offered on the complexity of vaccine innovation and the various stages whose completion is essential to the whole process of technological development. An analysis is made of the parameters and factors involved in each stage; technical requirements for facilities and equipment; good manufacturing practice guidelines; organizational, infrastructural, and managerial needs; and the lengthy time periods and high costs entailed in these activities.
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES: integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.
Emittance preservation during bunch compression with a magnetized beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratakis, Diktys
2015-09-02
The deleterious effects of coherent synchrotron radiation (CSR) on the phase-space and energy spread of high-energy beams in accelerator light sources can significantly constrain the machine design and performance. In this paper, we present a simple method to preserve the beam emittance by means of using magnetized beams that exhibit a large aspect ratio on their transverse dimensions. The concept is based on combining a finite solenoid field where the beam is generated together with a special optics adapter. Numerical simulations of this new type of beam source show that the induced phase-space density growth can be notably suppressed to less than 1% for any bunch charge. This work elucidates the key parameters that are needed for emittance preservation, such as the required field and aspect ratio for a given bunch charge.
Flow-Boiling Critical Heat Flux Experiments Performed in Reduced Gravity
NASA Technical Reports Server (NTRS)
Hasan, Mohammad M.; Mudawar, Issam
2005-01-01
Poor understanding of flow boiling in microgravity has recently emerged as a key obstacle to the development of many types of power generation and advanced life support systems intended for space exploration. The critical heat flux (CHF) is perhaps the most important thermal design parameter for boiling systems involving both heat-flux-controlled devices and intense heat removal. Exceeding the CHF limit can lead to permanent damage, including physical burnout of the heat-dissipating device. The importance of the CHF limit creates an urgent need to develop predictive design tools to ensure both the safe and reliable operation of two-phase thermal management systems under the reduced-gravity (like that on the Moon and Mars) and microgravity environments of space. At present, very limited information is available on flow-boiling heat transfer and the CHF under these conditions.
NASA Technical Reports Server (NTRS)
Demerdash, Nabeel A. O.; Wang, Ren-Hong
1988-01-01
The main purpose of this project is the development of computer-aided models for studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs on the forthcoming space station. Key to this modeling effort is the computation of the magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.
Laser-induced damage of coatings on Yb:YAG crystals at cryogenic condition
NASA Astrophysics Data System (ADS)
Wang, He; Zhang, Weili; Chen, Shunli; Zhu, Meiping; He, Hongbo; Fan, Zhengxiu
2011-12-01
As large amounts of heat need to be dissipated during laser operation, some diode-pumped solid-state lasers (DPSSL), especially Yb:YAG lasers, operate at cryogenic conditions. This work investigated the laser-induced damage of coatings (high-reflective and anti-reflective coatings) on Yb:YAG crystals at cryogenic temperature and room temperature. The results show that the damage threshold of the coatings at cryogenic temperature is lower than that at room temperature. Field-emission scanning electron microscopy (FESEM), an optical profiler, a step profiler and an atomic force microscope (AFM) were used to obtain the damage morphology, size and depth. Taking alterations of physical parameters, the microstructure of the coatings and environmental pollution into consideration, we analyzed the key factors lowering the coating damage threshold at cryogenic conditions. The results are important for understanding the mechanisms leading to damage at cryogenic conditions.
Modeling the stock price returns volatility using GARCH(1,1) in some Indonesia stock prices
NASA Astrophysics Data System (ADS)
Awalludin, S. A.; Ulfah, S.; Soro, S.
2018-01-01
In the financial field, volatility is one of the key variables for making appropriate decisions. Moreover, modeling volatility is needed in derivative pricing, risk management, and portfolio management. For this reason, this study applied the widely used GARCH(1,1) volatility model to estimate the volatility of daily returns of Indonesian stock prices from July 2007 to September 2015. The returns are obtained from the stock price by differencing the log of the price from one day to the next. Parameters of the model were estimated by maximum likelihood estimation. After obtaining the volatility, a natural cubic spline was employed to study the behaviour of the volatility over the period. The result shows that GARCH(1,1) indicates evidence of volatility clustering in the returns of some Indonesian stock prices.
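As a worked illustration of the estimation step, the sketch below fits GARCH(1,1) parameters to log returns by numerically maximising the Gaussian likelihood with SciPy. The price series is a placeholder; this is a generic sketch of the method, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                       # initialise with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

def fit_garch11(prices):
    r = np.diff(np.log(prices))               # daily log returns
    r = r - r.mean()
    res = minimize(garch11_neg_loglik, x0=[1e-6, 0.05, 0.9], args=(r,),
                   bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)],
                   method="L-BFGS-B")
    return res.x                               # omega, alpha, beta

# Usage with a placeholder price series:
# omega, alpha, beta = fit_garch11(prices)
```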
Target tracking system based on preliminary and precise two-stage compound cameras
NASA Astrophysics Data System (ADS)
Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie
2018-02-01
Early detection of targets and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to perform the target search. After the target is found and confirmed, the system switches to a small field of view for two-field-of-view target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce the tracking error. This combination of preliminary and precise two-stage operation can extend the scope of the target search and improve the target tracking accuracy, and the method has practical value.
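A minimal sketch of the control idea follows: one PID gain set for the coarse (wide field-of-view) stage and another for the fine (narrow field-of-view) stage, switched when the tracking error falls below a threshold. The gain values and threshold are illustrative placeholders, not values from the paper.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Two gain sets: coarse stage (wide FOV) and fine stage (narrow FOV); values are illustrative.
coarse = PID(kp=1.2, ki=0.05, kd=0.1)
fine = PID(kp=0.6, ki=0.02, kd=0.15)
SWITCH_THRESHOLD = 0.5   # hypothetical angular error (deg) for switching to the fine stage

def control_step(error_deg, dt=0.02):
    controller = coarse if abs(error_deg) > SWITCH_THRESHOLD else fine
    return controller.update(error_deg, dt)
```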
Sterba, Sonya K; Rights, Jason D
2016-01-01
Item parceling remains widely used under conditions that can lead to parcel-allocation variability in results. Hence, researchers may be interested in quantifying and accounting for parcel-allocation variability within sample. To do so in practice, three key issues need to be addressed. First, how can we combine sources of uncertainty arising from sampling variability and parcel-allocation variability when drawing inferences about parameters in structural equation models? Second, on what basis can we choose the number of repeated item-to-parcel allocations within sample? Third, how can we diagnose and report proportions of total variability per estimate arising due to parcel-allocation variability versus sampling variability? This article addresses these three methodological issues. Developments are illustrated using simulated and empirical examples, and software for implementing them is provided.
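A rough sketch of how repeated item-to-parcel allocations can be combined is given below, under assumptions loosely modelled on multiple-imputation pooling: items are randomly re-allocated to parcels many times, a model is fit to each allocation, and the total variability is split into a within-allocation (sampling) and a between-allocation (parcel-allocation) part. The `fit_model` function is a hypothetical stand-in for an SEM fit returning a parameter estimate and its squared standard error.

```python
import numpy as np

def allocate_parcels(items, n_parcels, rng):
    """Randomly assign item columns to parcels and average within each parcel."""
    idx = rng.permutation(items.shape[1])
    groups = np.array_split(idx, n_parcels)
    return np.column_stack([items[:, g].mean(axis=1) for g in groups])

def parcel_allocation_variability(items, fit_model, n_parcels=3, n_alloc=100, seed=0):
    rng = np.random.default_rng(seed)
    estimates, variances = [], []
    for _ in range(n_alloc):
        parcels = allocate_parcels(items, n_parcels, rng)
        est, var = fit_model(parcels)            # hypothetical SEM fit: (estimate, squared SE)
        estimates.append(est)
        variances.append(var)
    within = np.mean(variances)                   # sampling variability
    between = np.var(estimates, ddof=1)           # parcel-allocation variability
    total = within + (1 + 1 / n_alloc) * between  # pooling rule analogous to multiple imputation
    return np.mean(estimates), total, between / total
```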
A Geosynchronous Lidar System for Atmospheric Winds and Moisture Measurements
NASA Technical Reports Server (NTRS)
Emmitt, G. D.
2001-01-01
An observing system comprising two lidars in geosynchronous orbit would enable the synoptic and meso-scale measurement of atmospheric winds and moisture, both of which are key first-order variables of the Earth's weather equation. Simultaneous measurement of these parameters at fast revisit rates promises large advancements in our weather prediction skills. Such capabilities would be unprecedented and would (a) yield greatly improved and finer-resolution initial conditions for models, (b) make existing costly and cumbersome measurement approaches obsolete, and (c) obviate the use of numerical techniques needed to correct data obtained using present observing systems. Additionally, simultaneous synoptic wind and moisture observations would lead to improvements in model parameterizations and in our knowledge of small-scale weather processes. Technology and science data product assessments are ongoing. Results will be presented during the conference.
NASA Technical Reports Server (NTRS)
Walker, Ryan Thomas; Holland, David; Parizek, Byron R.; Alley, Richard B.; Nowicki, Sophie M. J.; Jenkins, Adrian
2013-01-01
Thermodynamic flowline and plume models for the ice shelf-ocean system simplify the ice and ocean dynamics sufficiently to allow extensive exploration of parameters affecting ice-sheet stability while including key physical processes. Comparison between geophysically and laboratory-based treatments of ice-ocean interface thermodynamics shows reasonable agreement between calculated melt rates, except where steep basal slopes and relatively high ocean temperatures are present. Results are especially sensitive to the poorly known drag coefficient, highlighting the need for additional field experiments to constrain its value. These experiments also suggest that if the ice-ocean interface near the grounding line is steeper than some threshold, further steepening of the slope may drive higher entrainment that limits buoyancy, slowing the plume and reducing melting; if confirmed, this will provide a stabilizing feedback on ice sheets under some circumstances.
Vertex detectors: The state of the art and future prospects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damerell, C.J.S.
1997-01-01
We review the current status of vertex detectors (tracking microscopes for the recognition of charm and bottom particle decays). The reasons why silicon has become the dominant detector medium are explained. Energy loss mechanisms are reviewed, as well as the physics and technology of semiconductor devices, emphasizing the areas of most relevance for detectors. The main design options (microstrips and pixel devices, both CCDs and APSs) are discussed, as well as the issue of radiation damage, which probably implies the need to change to detector media beyond silicon for some vertexing applications. Finally, the evolution of key performance parameters over the past 15 years is reviewed, and an attempt is made to extrapolate to the likely performance of detectors working at the energy frontier ten years from now.
Ijaz, Nadine; Boon, Heather
2018-04-01
The World Health Organization (WHO) has called for the increased statutory regulation of traditional and complementary medicine practitioners and practices, currently implemented in about half of nations surveyed. According to recent WHO data, however, the absence of policy guidelines in this area represents a significant barrier to implementation of such professional regulations. This commentary reviews several key challenges that distinguish the statutory regulation of traditional medicine practitioners and practices from biomedical professional regulation, providing a foundation for the development of policy making parameters in this area. Foremost in this regard are the ongoing impacts of the European colonial encounter, which reinforce biomedicine's disproportionate political dominance across the globe despite traditional medicine's ongoing widespread use (particularly in the global South). In this light, the authors discuss the conceptual and historical underpinnings of contemporary professional regulatory structures, the tensions between institutional and informal traditional medicine training pathways, and the policy challenges presented by the prospect of standardizing internally diverse indigenous healing approaches. Epistemic and evidentiary tensions, as well as the policy complexities surrounding the intersection of cultural and clinical considerations, present additional challenges to regulators. Conceptualizing professional regulation as an intellectual property claim under the law, the authors further consider what it means to protect traditional knowledge and prevent misappropriation in this context. Overall, the authors propose that innovative professional regulatory approaches are needed in this area to address safety, quality of care, and accessibility as key public interest concerns, while prioritizing the redress of historical inequities, protection of diverse indigenous knowledges, and delivery of care to underserved populations.
Developing micro-level urban ecosystem indicators for sustainability assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dizdaroglu, Didem, E-mail: dizdaroglu@bilkent.edu.tr
Sustainability assessment is increasingly being viewed as an important tool to aid in the shift towards sustainable urban ecosystems. An urban ecosystem is a dynamic system and requires regular monitoring and assessment through a set of relevant indicators. An indicator is a parameter which provides information about the state of the environment by producing a quantitative value. Indicator-based sustainability assessment needs to be considered on all spatial scales to provide efficient information on urban ecosystem sustainability. Detailed data are necessary to assess environmental change in urban ecosystems at the local scale and to easily transfer this information to the national and global scales. This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. The proposed indicator framework measures the sustainability performance of the urban ecosystem in 3 main categories: natural environment, built environment, and socio-economic environment, which are made up of 9 sub-categories consisting of 23 indicators. This paper also describes theoretical foundations for the selection of each indicator with reference to the literature. Highlights: • As the impacts of environmental problems have multi-scale characteristics, sustainability assessment needs to be considered on all scales. • Detailed data are necessary to assess local environmental change in urban ecosystems to provide insights into the national and global scales. • This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. • This paper also describes theoretical foundations for the selection of each indicator with reference to the literature.
Träger, Karl; Skrabal, Christian; Fischer, Guenther; Datzmann, Thomas; Schroeder, Janpeter; Fritzler, Daniel; Hartmann, Jan; Liebold, Andreas; Reinelt, Helmut
2017-05-29
Infective endocarditis is a serious disease condition. Depending on the causative microorganism and clinical symptoms, cardiac surgery and valve replacement may be needed, posing additional risks to patients who may simultaneously suffer from septic shock. The combination of surgery-related bacterial spread and artificial cardiopulmonary bypass (CPB) surfaces results in a release of key inflammatory mediators leading to an overshooting systemic hyperinflammatory state frequently associated with compromised hemodynamic and organ function. Hemoadsorption might represent a potential approach to control the hyperinflammatory systemic reaction associated with the procedure itself and subsequent clinical conditions by reducing a broad range of immuno-regulatory mediators. We describe 39 cardiac surgery patients with proven acute infective endocarditis undergoing valve replacement during CPB surgery in combination with intraoperative CytoSorb hemoadsorption. In comparison, we evaluated a historical group of 28 patients with infective endocarditis undergoing CPB surgery without intraoperative hemoadsorption. CytoSorb treatment was associated with a mitigated postoperative response of key cytokines and clinical metabolic parameters. Moreover, patients showed hemodynamic stability during and after the operation, while the need for vasopressors was less pronounced within hours after completion of the procedure, which could possibly be attributed to the additional CytoSorb treatment. Intraoperative hemoperfusion treatment was well tolerated and safe, without the occurrence of any CytoSorb device-related adverse event. Thus, this interventional approach may open up potentially promising therapeutic options for critically ill patients with acute infective endocarditis during and after cardiac surgery, with cytokine reduction, improved hemodynamic stability and organ function as seen in our patients.
NASA Astrophysics Data System (ADS)
Baranov, O.; Bazaka, K.; Kersten, H.; Keidar, M.; Cvelbar, U.; Xu, S.; Levchenko, I.
2017-12-01
Given the vast number of strategies used to control the behavior of laboratory and industrially relevant plasmas for material processing and other state-of-the-art applications, a potential user may find themselves overwhelmed by the diversity of physical configurations used to generate and control plasmas. A need for a clearly defined, physics-based classification of the presently available spectrum of plasma technologies is therefore pressing, and a critical summary of the individual advantages, unique benefits, and challenges against key application criteria is a vital prerequisite for further progress. To facilitate selection of the technological solutions that provide the best match to the needs of the end user, this work systematically explores plasma setups, focusing on the most significant family of processes, the control of plasma fluxes, which determines the distribution and delivery of mass and energy to the surfaces of materials being processed and synthesized. A novel classification based on the incorporation of substrates into the plasma-generating circuitry is also proposed and illustrated by its application to a wide variety of plasma reactors, where the effect of substrate incorporation on the plasma fluxes is emphasized. With key process and material parameters, such as growth and modification rates, phase transitions, crystallinity, density of lattice defects, and others, being linked to plasma and energy fluxes, this review offers direction to physicists, engineers, and materials scientists engaged in the design and development of instrumentation for plasma processing and diagnostics, where the selection of the correct tools is critical for the advancement of emerging and high-performance applications.
Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong
2017-10-01
We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.
Requirements Document for Development of a Livermore Tomography Tools Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seetho, I. M.
In this document, we outline an exercise performed at LLNL to evaluate the user interface deficits of a LLNL-developed CT reconstruction software package, Livermore Tomography Tools (LTT). We observe that a difficult-to-use command line interface and the lack of support functions compound to generate a bottleneck in the CT reconstruction process when input parameters to key functions are not well known. Through the exercise of systems engineering best practices, we generate key performance parameters for a LTT interface refresh, and specify a combination of back-end (“test-mode” functions) and front-end (graphical user interface visualization and command scripting tools) solutions to LTT’s poor user interface that aim to mitigate issues and lower costs associated with CT reconstruction using LTT. Key functional and non-functional requirements and risk mitigation strategies for the solution are outlined and discussed.
Metabolic and Subjective Results Review of the Integrated Suit Test Series
NASA Technical Reports Server (NTRS)
Norcross, J.R.; Stroud, L.C.; Klein, J.; Desantis, L.; Gernhardt, M.L.
2009-01-01
Crewmembers will perform a variety of exploration and construction activities on the lunar surface. These activities will be performed while inside an extravehicular activity (EVA) spacesuit. In most cases, human performance is compromised while inside an EVA suit as compared to a crewmember's unsuited performance baseline. Subjects completed different EVA-type tasks, ranging from ambulation to geology and construction activities, in different lunar analog environments including overhead suspension, underwater and 1-g lunar-like terrain, in both suited and unsuited conditions. In the suited condition, the Mark III (MKIII) EVA technology demonstrator suit was used, and suit pressure and suit weight were the parameters tested. In the unsuited conditions, weight, mass, center of gravity (CG), terrain type and navigation were the parameters. To the extent possible, one parameter was varied while all others were held constant. Tests were not fully crossed, but rather one parameter was varied while all others were left in the most nominal setting. Oxygen consumption (VO2), modified Cooper-Harper (CH) ratings of operator compensation and ratings of perceived exertion (RPE) were measured for each trial. For each variable, a lower value correlates to more efficient task performance. Due to a low sample size, statistical significance was not attainable. Initial findings indicate that suit weight, CG and the operational environment can have a large impact on human performance during EVA. Systematic, prospective testing series such as those performed to date will enable a better understanding of the crucial interactions of the human, the EVA suit system and their environment. However, work remains to be done to confirm these findings. These data have been collected using only unsuited subjects and one EVA suit prototype that is known to fit poorly on a large demographic of the astronaut population. Key findings need to be retested using an EVA suit prototype better suited to a larger anthropometric portion of the astronaut population, and elements tested only in the unsuited condition need to be evaluated with an EVA suit and an appropriate analog environment.
Performance enhancement of linear stirling cryocoolers
NASA Astrophysics Data System (ADS)
Korf, Herbert; Ruehlich, Ingo; Wiedmann, Th.
2000-12-01
Performance and reliability parameters of the AIM Stirling coolers have been presented in several previous publications. This paper focuses on recent developments at AIM for the COP improvement of cryocoolers in IR-detector and system applications. Improved cryocooler COP is key to optimized form factors, weight and reliability. In addition, some systems are critical with respect to minimum input power and consequently minimum electromagnetic interference or magnetic stray fields, heat sinking, or minimum stress under high g-levels. Although performance parameters and loss mechanisms are well understood and can be calculated precisely, several losses were still excessive and needed to be minimized. The AIM program is based on the SADA I cryocooler, which is now optimized to carry a 4.3 W net heat load at 77 K. As this program will lead to applications on a space platform, AIM is introducing flexure bearings in a next step and, in a final step, an advanced pulse tube cold head will be implemented. The performance of the SADA II cooler is also improved using the same tools and methods as those used to increase the performance of the SADA I cooler by a factor of two. The main features are summarized together with measured or calculated performance data.
Exploring the Parameters Controlling the Crystallinity-Conductivity Correlation of PFSA Ionomers
NASA Astrophysics Data System (ADS)
Kusoglu, Ahmet; Shi, Shouwen; Weber, Adam
Perfluorosulfonic-acid (PFSA) ionomers are the most commonly used solid-electrolyte in electrochemical energy devices because of their remarkable conductivity and chemical/mechanical stability, with the latter imparted by their semi-crystalline fluorocarbon backbone. PFSAs owe this unique combination of transport/stability functionalities to their phase-separated morphology of conductive hydrophilic ionic domains and the non-conductive hydrophobic backbone, which are connected via pendant chains. Thus, phase-separation is governed by fractions of backbone and ionic groups, which is controlled by the equivalent weight (EW). Therefore, EW, along with the pendant chain chemistry, directly impact the conductive vs non-conductive regions, and consequently the interrelation between transport and stability. Driven by the need to achieve higher conductivities without disrupting the crystallinity, various pendant-chain chemistries have been developed. In this talk, we will report the results of a systematic investigation on hydration, conductivity, mechanical properties and crystallinity of various types and EWs of PFSA ionomers to (i) develop a structure/property map, and (ii) identify the key parameters controlling morphology and properties. It will be discussed how the pendant-chain and backbone lengths affect the conductivity and crystallinity, respectively. Lastly, the data set will be analyzed to explore universal structure/property relationships for PFSAs.
Probabilistic projections of 21st century climate change over Northern Eurasia
NASA Astrophysics Data System (ADS)
Monier, E.; Sokolov, A. P.; Schlosser, C. A.; Scott, J. R.; Gao, X.
2013-12-01
We present probabilistic projections of 21st century climate change over Northern Eurasia using the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an earth system model of intermediate complexity, with a two-dimensional zonal-mean atmosphere, to a human activity model. Regional climate change is obtained by two downscaling methods: a dynamical downscaling, where the IGSM is linked to a three dimensional atmospheric model; and a statistical downscaling, where a pattern scaling algorithm uses climate-change patterns from 17 climate models. This framework allows for key sources of uncertainty in future projections of regional climate change to be accounted for: emissions projections; climate system parameters (climate sensitivity, strength of aerosol forcing and ocean heat uptake rate); natural variability; and structural uncertainty. Results show that the choice of climate policy and the climate parameters are the largest drivers of uncertainty. We also find that different initial conditions lead to differences in patterns of change as large as when using different climate models. Finally, this analysis reveals the wide range of possible climate change over Northern Eurasia, emphasizing the need to consider all sources of uncertainty when modeling climate impacts over Northern Eurasia.
Behavior analysis for elderly care using a network of low-resolution visual sensors
NASA Astrophysics Data System (ADS)
Eldib, Mohamed; Deboeverie, Francis; Philips, Wilfried; Aghajan, Hamid
2016-07-01
Recent advancements in visual sensor technologies have made behavior analysis practical for in-home monitoring systems. Current in-home monitoring systems face several challenges: (1) visual sensor calibration is a difficult task and not practical in real life because of the need for recalibration when the visual sensors are moved accidentally by a caregiver or the senior citizen, (2) privacy concerns, and (3) the high hardware installation cost. We propose to use a network of cheap low-resolution visual sensors (30×30 pixels) for long-term behavior analysis. The behavior analysis starts with visual feature selection based on foreground/background detection to track the motion level at each visual sensor. Then a hidden Markov model (HMM) is used to estimate the user's locations without calibration. Finally, an activity discovery approach is proposed using spatial and temporal contexts. We performed experiments on 10 months of real-life data. We show that the HMM approach outperforms the k-nearest neighbor classifier against ground truth for 30 days. Our framework is able to discover 13 activities of daily living (ADL parameters). More specifically, we analyze mobility patterns and some of the key ADL parameters to detect increasing or decreasing health conditions.
UV fatigue investigations with non-destructive tools in silica
NASA Astrophysics Data System (ADS)
Natoli, Jean-Yves; Beaudier, Alexandre; Wagner, Frank R.
2017-08-01
A fatigue effect is often observed under multiple laser irradiations, especially in the UV. This decrease of the laser-induced damage threshold (LIDT) is a critical parameter for laser sources with high repetition rates and long-term lifetime requirements, as in space applications at 355 nm. A further challenge is to replace excimer lasers by solid-state laser sources, which requires drastically improving the lifetime of optical materials at 266 nm. The main applications of these sources are material surface nanostructuring, spectroscopy and medical surgery. In this work we focus on understanding the laser-matter interaction at 266 nm in silica in order to predict the lifetime of components, and we study the parameters linked to these lifetimes to give material suppliers keys for improvement. In order to study the mechanisms involved in the case of multiple irradiations, an interesting approach is to follow the evolution of fluorescence, so as to observe the first stages of material change just before breakdown. We will show that it is sometimes possible to estimate the lifetime of a component from the fluorescence measurement alone, saving time and materials. Moreover, the data from the diagnostics give relevant information to highlight "defects" induced by multiple laser irradiations.
Study on key technologies of optimization of big data for thermal power plant performance
NASA Astrophysics Data System (ADS)
Mao, Mingyang; Xiao, Hong
2018-06-01
Thermal power generation accounts for 70% of China's electricity generation, while its pollutants account for about 40% of the corresponding emissions. Optimizing thermal power plant efficiency requires monitoring and understanding the whole process of coal combustion and pollutant migration, and power system performance data are growing explosively. The purpose of this study is to integrate numerical simulation with big data technology and to develop a thermal power plant efficiency data optimization platform and a nitrogen oxide emission reduction system, providing reliable technical support for improved efficiency, energy saving and emission reduction in thermal power plants. The method relies on big data technologies, represented by multi-source heterogeneous data integration, distributed storage of large data sets, and high-performance real-time and off-line computing, which can greatly enhance the energy consumption analysis capability and the level of intelligent decision-making in thermal power plants. Data mining algorithms are then used to establish a mathematical model of boiler combustion and to mine power plant boiler efficiency data; combined with numerical simulation, this reveals the rules of boiler combustion and pollutant generation and the influence of combustion parameters on them. The result is an optimized set of boiler combustion parameters, which can achieve energy savings.
Hakeem Said, Inamullah; Gencer, Selin; Ullrich, Matthias S; Kuhnert, Nikolai
2018-06-01
Dietary phenolic compounds are often transformed by gut microbiota prior to absorption. This transformation may modulate their biological activities. Many fundamental questions still need to be addressed to understand how gut microbiota-diet interactions affect human health. Herein, a UHPLC-QTOF mass spectrometry-based method was developed for the quantification of uptake and the determination of intracellular bacterial concentrations of dietary phenolics from coffee and tea. Quantitative uptake data for selected single purified phenolics were determined. The specific uptake from mixtures containing up to four dietary relevant compounds was investigated to assess changes of uptake parameters in a mixture model system. Indeed, perturbation of bacteria by several compounds alters uptake parameters, in particular tmax. Finally, model bacteria were dosed with complex dietary mixtures such as diluted tea or coffee extracts. The uptake kinetics of the twenty most abundant phenolics were quantified and the findings are discussed. For the first time, quantitative data on in-vitro uptake of dietary phenolics from food matrices were obtained, indicating a time-dependent differential uptake of nutritional compounds. Copyright © 2018. Published by Elsevier Ltd.
Injection locking at 2f of spin torque oscillators under influence of thermal noise.
Tortarolo, M; Lacoste, B; Hem, J; Dieudonné, C; Cyrille, M-C; Katine, J A; Mauri, D; Zeltser, A; Buda-Prejbeanu, L D; Ebels, U
2018-01-29
Integration of spin torque nano-oscillators (STNOs) into conventional microwave circuits means that the devices have to meet certain specifications. One of the most important criteria is the phase noise, which is the key parameter used to evaluate performance and define possible applications. Phase locking several oscillators together has been suggested as a possible means to decrease the phase noise and, consequently, the linewidth. In this work we present experiments, numerical simulations and an analytical model describing the effects of thermal noise on the injection locking of a tunnel-junction-based STNO. The analytical model relates the intrinsic parameters of the STNO to the phase noise level, opening the path to tailoring the spectral characteristics through the magnetic configuration. Experiments and simulations demonstrate that in the in-plane magnetized structure, while the frequency is locked, much higher reference currents are needed to reduce the noise by phase locking. Moreover, our analysis shows that it is possible to control the phase noise through the reference microwave current (IRF) and that it can be further reduced by increasing the bias current (IDC) of the oscillator, keeping the reference current within feasible limits for applications.
Transcranial Direct Current Stimulation in Stroke Rehabilitation: A Review of Recent Advancements
Gomez Palacio Schjetnan, Andrea; Faraji, Jamshid; Metz, Gerlinde A.; Tatsuno, Masami; Luczak, Artur
2013-01-01
Transcranial direct current stimulation (tDCS) is a promising technique to treat a wide range of neurological conditions including stroke. The pathological processes following stroke may provide an exemplary system to investigate how tDCS promotes neuronal plasticity and functional recovery. Changes in synaptic function after stroke, such as reduced excitability, formation of aberrant connections, and deregulated plastic modifications, have been postulated to impede recovery from stroke. However, if tDCS could counteract these negative changes by influencing the system's neurophysiology, it would contribute to the formation of functionally meaningful connections and the maintenance of existing pathways. This paper is aimed at providing a review of underlying mechanisms of tDCS and its application to stroke. In addition, to maximize the effectiveness of tDCS in stroke rehabilitation, future research needs to determine the optimal stimulation protocols and parameters. We discuss how stimulation parameters could be optimized based on electrophysiological activity. In particular, we propose that cortical synchrony may represent a biomarker of tDCS efficacy to indicate communication between affected areas. Understanding the mechanisms by which tDCS affects the neural substrate after stroke and finding ways to optimize tDCS for each patient are key to effective rehabilitation approaches. PMID:23533955
Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel
2009-01-01
The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
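The two-state analysis described above can be approximated with an off-the-shelf Gaussian HMM on the per-frame displacements: each diffusive state corresponds to a different displacement variance (2*D*dt per coordinate), and the fitted transition matrix gives the switching rates. The sketch below assumes the hmmlearn package and is a simplified stand-in for the authors' stochastic-optimization estimator, not a reproduction of it.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def two_state_diffusion_fit(track_xy, dt):
    """track_xy: (T, 2) array of particle positions; dt: frame interval in seconds."""
    steps = np.diff(track_xy, axis=0)                 # per-frame displacements (dx, dy)
    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
    model.fit(steps)
    states = model.predict(steps)                     # most likely state sequence
    # Per-coordinate displacement variance of a freely diffusing particle is 2*D*dt,
    # so estimate D for each hidden state from the displacements assigned to it.
    D = np.array([steps[states == k].var(axis=0).mean() / (2 * dt) for k in range(2)])
    switch_rates = (1 - np.diag(model.transmat_)) / dt  # approximate per-second switching rates
    return D, switch_rates, states
```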
An Analysis of the U.S. Army’s T-11 Advanced Tactical Parachute System and Potential Path Forward
2016-12-01
[Excerpt from the report's acronym list, truncated: JRTC, Joint Readiness Training Center; JWG, Joint Working Group; KIAS, knots indicated air speed; KPP, Key Performance Parameter; KSA, Key ...] [Requirement excerpt, truncated: ... AGL +/- 125 feet altitude holding error) at 130-150 knots indicated airspeed (KIAS) with a parachutist weighing 332 pounds including equipment.]
The Differentiation of Response Numerosities in the Pigeon
ERIC Educational Resources Information Center
Machado, Armando; Rodrigues, Paulo
2007-01-01
Two experiments examined how pigeons differentiate response patterns along the dimension of number. In Experiment 1, 5 pigeons received food after pecking the left key at least N times and then switching to the right key (Mechner's Fixed Consecutive Number schedule). Parameter N varied across conditions from 4 to 32. Results showed that run length…
Systems Analysis of the Hydrogen Transition with HyTrans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiby, Paul Newsome; Greene, David L; Bowman, David Charles
2007-01-01
The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, using the actual in-orbit telemetry data for the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of LIBs from the MSET state estimation, and from these residual values (RX and RL) we detected anomalous states with the SPRT. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of a threshold detection method (TDM). PMID:24587703
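The sketch below illustrates only the SPRT stage of such a scheme: residuals (for example, between telemetered and MSET-estimated Re or Rct) are tested sequentially for a mean shift. The residual stream, noise level, shift size, and error probabilities are made-up values, and the MSET estimation itself is not shown.

```python
import numpy as np

def sprt_flags(residuals, sigma, mu1, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a mean shift (H0: mean 0, H1: mean mu1)
    in Gaussian residuals with known standard deviation sigma."""
    upper = np.log((1 - beta) / alpha)   # accept H1 (anomaly) when the LLR exceeds this
    lower = np.log(beta / (1 - alpha))   # accept H0 (healthy) when the LLR drops below this
    llr, flags = 0.0, []
    for r in residuals:
        llr += (mu1 * r - 0.5 * mu1 ** 2) / sigma ** 2  # Gaussian log-likelihood ratio increment
        if llr >= upper:
            flags.append(True); llr = 0.0               # anomaly decision, restart the test
        elif llr <= lower:
            flags.append(False); llr = 0.0              # healthy decision, restart the test
        else:
            flags.append(None)                          # keep sampling
    return flags

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 0.02, 300)    # residuals while the battery behaves as expected
drifting = rng.normal(0.05, 0.02, 100)  # residual drift, e.g. a rising charge-transfer resistance
flags = sprt_flags(np.concatenate([healthy, drifting]), sigma=0.02, mu1=0.05)
print("first anomaly decision at sample", flags.index(True))
```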
Non-stationary (13)C-metabolic flux ratio analysis.
Hörl, Manuel; Schnidder, Julian; Sauer, Uwe; Zamboni, Nicola
2013-12-01
(13)C-metabolic flux analysis ((13)C-MFA) has become a key method for metabolic engineering and systems biology. In the most common methodology, fluxes are calculated by global isotopomer balancing and iterative fitting to stationary (13)C-labeling data. This approach requires a closed carbon balance, a long-lasting metabolic steady state, and the detection of (13)C-patterns in a large number of metabolites. These restrictions have largely limited the application of (13)C-MFA to the central carbon metabolism of well-studied model organisms grown in minimal media with a single carbon source. Here we introduce non-stationary (13)C-metabolic flux ratio analysis as a novel method for (13)C-MFA that allows estimation of local, relative fluxes from ultra-short (13)C-labeling experiments, without the need for global isotopomer balancing. The approach relies on the acquisition of non-stationary (13)C-labeling data exclusively for metabolites in the proximity of a node of converging fluxes and a local parameter estimation with a system of ordinary differential equations. We developed a generalized workflow that takes into account reaction types and the availability of mass spectrometric data on molecular ions or fragments for data processing, modeling, parameter and error estimation. We demonstrated the approach by analyzing three key nodes of converging fluxes in the central metabolism of Bacillus subtilis. We obtained flux estimates that are in agreement with published results obtained from steady-state experiments, but reduced the duration of the necessary (13)C-labeling experiment to less than a minute. These results show that our strategy enables formal estimation of relative pathway fluxes on extremely short time scales, without requiring closed cellular carbon balancing. Hence this approach paves the way to targeted (13)C-MFA in dynamic systems with multiple carbon sources and towards rich media. © 2013 Wiley Periodicals, Inc.
Merritt, Tony; Hope, Kirsty; Butler, Michelle; Durrheim, David; Gupta, Leena; Najjar, Zeina; Conaty, Stephen; Boonwatt, Leng; Fletcher, Stephanie
2016-01-01
Background There was a record number (n = 111) of influenza outbreaks in aged care facilities in New South Wales, Australia during 2014. To determine the impact of antiviral prophylaxis recommendations in practice, influenza outbreak data were compared for facilities in which antiviral prophylaxis and treatment were recommended and for those in which antivirals were recommended for treatment only. Methods Routinely collected outbreak data were extracted from the Notifiable Conditions Information Management System for two Local Health Districts where antiviral prophylaxis was routinely recommended and one Local Health District where antivirals were recommended for treatment but not routinely for prophylaxis. Data collected on residents included counts of influenza-like illness, confirmed influenza, hospitalizations and related deaths. Dates of onset, notification, influenza confirmation and antiviral recommendations were also collected for analysis. The Mann–Whitney U test was used to assess the significance of differences between group medians for key parameters. Results A total of 41 outbreaks (12 in the prophylaxis group and 29 in the treatment-only group) were included in the analysis. There was no significant difference in overall outbreak duration; outbreak duration after notification; or attack, hospitalization or case fatality rates between the two groups. The prophylaxis group had significantly higher cases with influenza-like illness (P = 0.03) and cases recommended antiviral treatment per facility (P = 0.01). Discussion This study found no significant difference in key outbreak parameters between the two groups. However, further high quality evidence is needed to guide the use of antivirals in responding to influenza outbreaks in aged care facilities. PMID:27757249
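For readers unfamiliar with the test used above, the following minimal sketch compares a key outbreak parameter between two groups of facilities with the Mann-Whitney U test via scipy.stats; the outbreak durations below are fabricated for illustration and are not the study data.

```python
from scipy.stats import mannwhitneyu

# Illustrative outbreak durations (days) for the two groups of facilities;
# the values are made up for demonstration only.
prophylaxis_group = [8, 11, 6, 14, 9, 7, 12, 10, 9, 13, 8, 11]
treatment_only_group = [10, 9, 15, 7, 12, 11, 8, 14, 10, 9, 13, 12, 11,
                        10, 16, 8, 9, 12, 11, 10, 13, 9, 14, 10, 12, 8, 11, 9, 15]

stat, p_value = mannwhitneyu(prophylaxis_group, treatment_only_group,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p_value:.3f}")
```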
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundy, D; Tryggestad, E; Beltran, C
Purpose: To develop daily and monthly quality assurance (QA) programs in support of a new spot-scanning proton treatment facility using a combination of commercial and custom equipment and software. Emphasis was placed on efficiency and evaluation of key quality parameters. Methods: The daily QA program was developed to test output, spot size and position, proton beam energy, and image guidance using the Sun Nuclear Corporation rf-DQA™3 device and Atlas QA software. The program utilizes standard Atlas linear accelerator tests repurposed for proton measurements and a custom jig for indexing the device to the treatment couch. The monthly QA program was designed to test mechanical performance, image quality, radiation quality, isocenter coincidence, and safety features. Many of these tests are similar to linear accelerator QA counterparts, but many require customized test design and equipment. Coincidence of imaging, laser marker, mechanical, and radiation isocenters, for instance, is verified using a custom film-based device devised and manufactured at our facility. Proton spot size and position as a function of energy are verified using a custom spot pattern incident on film and analysis software developed in-house. More details concerning the equipment and software developed for monthly QA are included in the supporting document. Thresholds for daily and monthly tests were established via perturbation analysis, early experience, and/or proton system specifications and associated acceptance test results. Results: The periodic QA program described here has been in effect for approximately 9 months and has proven efficient and sensitive to sub-clinical variations in treatment delivery characteristics. Conclusion: Tools and professional guidelines for periodic proton system QA are not as well developed as their photon and electron counterparts. The program described here efficiently evaluates key quality parameters and, while specific to the needs of our facility, could be readily adapted to other proton centers.
NASA Technical Reports Server (NTRS)
Fijany, Amir; Collier, James B.; Citak, Ari
1997-01-01
A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, by using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.
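The abstract does not disclose the JPL algorithms; as a rough illustration of the underlying idea, the sketch below computes a sliding-window normalized correlation between HH and VV magnitude images and thresholds it as a simple detection statistic on synthetic clutter. The window size, threshold, and toy "target" patch are assumptions, not the project's actual parameters.

```python
import numpy as np

def local_hh_vv_correlation(hh, vv, win=7):
    """Normalized correlation between HH and VV magnitude images in a sliding
    win x win window; returns a map the same size as the inputs."""
    pad = win // 2
    hh_p = np.pad(hh, pad, mode="reflect")
    vv_p = np.pad(vv, pad, mode="reflect")
    rows, cols = hh.shape
    corr = np.zeros_like(hh, dtype=float)
    for i in range(rows):
        for j in range(cols):
            a = hh_p[i:i + win, j:j + win].ravel()
            b = vv_p[i:i + win, j:j + win].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a @ a) * (b @ b))
            corr[i, j] = (a @ b) / denom if denom > 0 else 0.0
    return corr

# Toy example: Rayleigh clutter with a small bright, strongly co-polarized "target" patch.
rng = np.random.default_rng(2)
hh = rng.rayleigh(1.0, (64, 64))
vv = rng.rayleigh(1.0, (64, 64))
hh[30:34, 30:34] += 4.0
vv[30:34, 30:34] += 4.0
detections = local_hh_vv_correlation(hh, vv) > 0.6
print("flagged pixels:", int(detections.sum()))
```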
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty in a hierarchical manner. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
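A variance-based global sensitivity analysis of the kind described can be sketched as follows, assuming the SALib package is available and using a stand-in scalar response in place of the coupled thermo-hydro-biogeochemical model; the parameter names and bounds are hypothetical.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in scalar response: "carbon consumption" as a toy function of three
# hypothetical inputs (hydraulic conductivity K, reaction rate k_rxn, river stage amplitude A).
problem = {
    "num_vars": 3,
    "names": ["K", "k_rxn", "A"],
    "bounds": [[1e-5, 1e-3], [0.1, 2.0], [0.5, 3.0]],
}

def toy_model(x):
    K, k_rxn, A = x
    return np.log10(K) * 0.3 + k_rxn * A + 0.5 * k_rxn ** 2

X = saltelli.sample(problem, 1024)          # Saltelli sampling design for Sobol indices
Y = np.array([toy_model(x) for x in X])
Si = sobol.analyze(problem, Y)              # first-order and total-order Sobol indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```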
Space physiology IV: mathematical modeling of the cardiovascular system in space exploration.
Keith Sharp, M; Batzel, Jerry Joseph; Montani, Jean-Pierre
2013-08-01
Mathematical modeling represents an important tool for analyzing cardiovascular function during spaceflight. This review describes how modeling of the cardiovascular system can contribute to space life science research and illustrates this process via modeling efforts to study postflight orthostatic intolerance (POI), a key issue for spaceflight. Examining this application also provides a context for considering broader applications of modeling techniques to the challenges of bioastronautics. POI, which affects a large fraction of astronauts in stand tests upon return to Earth, presents as dizziness, fainting and other symptoms, which can diminish crew performance and cause safety hazards. POI on the Moon or Mars could be more critical. In the field of bioastronautics, POI has been the dominant application of cardiovascular modeling for more than a decade, and a number of mechanisms for POI have been investigated. Modeling approaches include computational models with a range of incorporated factors and hemodynamic sophistication, and also physical models tested in parabolic and orbital flight. Mathematical methods such as parameter sensitivity analysis can help identify key system mechanisms. In the case of POI, this could lead to more effective countermeasures. Validation is a persistent issue in modeling efforts, and key considerations and needs for experimental data to synergistically improve understanding of cardiovascular responses are outlined. Future directions in cardiovascular modeling include subject-specific assessment of system status, as well as research on integrated physiological responses, leading, for instance, to assessment of subject-specific susceptibility to POI or effects of cardiovascular alterations on muscular, vision and cognitive function.
Hunt, Jennifer; Bristowe, Katherine; Chidyamatare, Sybille; Harding, Richard
2017-01-01
Objectives To examine experiences of key populations (lesbian, gay, bisexual, trans and intersex (LGBTI) people, men who have sex with men (MSM) and sex workers) in Zimbabwe regarding access to, and experiences of, healthcare. Design Qualitative study using in-depth interviews and focus groups, with thematic analysis. Participants Sixty individuals from key populations in Zimbabwe. Setting Participants were recruited from four locations (Harare, Bulawayo, Mutare, Beitbridge/Masvingo). Results Participants described considerable unmet needs and barriers to accessing basic healthcare due to discrimination regarding key population status, exacerbated by the sociopolitical/legal environment. Three main themes emerged: (1) key populations' illnesses were caused by their behaviour; (2) equal access to healthcare is conditional on key populations conforming to ‘sexual norms’ and (3) perceptions that healthcare workers were ill-informed about key populations, and that professionals' personal attitudes affected care delivery. Participants felt unable to discuss their key population status with healthcare workers. Their healthcare needs were expected to be met almost entirely by their own communities. Conclusions This is one of very few studies of healthcare access beyond HIV for key populations in Africa. Discrimination towards key populations discourages early diagnosis, limits access to healthcare/treatment and increases risk of transmission of infectious diseases. Key populations experience unnecessary suffering from untreated conditions, exclusion from healthcare and extreme psychological distress. Education is needed to reduce stigma and enhance sensitive clinical interviewing skills. Clinical and public health implications of discrimination in healthcare must be addressed through evidence-based interventions for professionals, particularly in contexts with sociopolitical/legal barriers to equality. PMID:28589012
ERIC Educational Resources Information Center
Smidt, Jon
2011-01-01
What are the "key competencies" needed in our time? What literacy is needed to make students active participants in their societies and contributors to changing cultures? This article offers a contribution to the ongoing discussion about these questions. It takes as its point of departure the "key competencies" formulated in…
Local Responses to Global Problems: A Key to Meeting Basic Human Needs. Worldwatch Paper 17.
ERIC Educational Resources Information Center
Stokes, Bruce
The booklet maintains that the key to meeting basic human needs is the participation of individuals and communities in local problem solving. Some of the most important achievements in providing food, upgrading housing, improving human health, and tapping new energy sources come through local self-help projects. Proponents of local efforts at…
The Mismatch between Children's Health Needs and School Resources
ERIC Educational Resources Information Center
Knauer, Heather; Baker, Dian L.; Hebbeler, Kathleen; Davis-Alldritt, Linda
2015-01-01
There are increasing numbers of children with special health care needs (CSHCN) who require various levels of care each school day. The purpose of this study was to examine the role of public schools in supporting CSHCN through in-depth key informant interviews. For this qualitative study, the authors interviewed 17 key informants to identify key…
Training: Who Needs It? Research Report 1995. Key Issues for Providers.
ERIC Educational Resources Information Center
Hotel and Catering Training Co., London (England).
Aimed at all those involved in the supply of training and vocational education for the hospitality industry, this report summarizes findings of the research report, "Training Who Needs It?" It draws out and explores in more detail key issues relating to the provision of training, support, and related initiatives for the industry. Section…
Mapping of multiple parameter m-health scenarios to mobile WiMAX QoS variables.
Alinejad, Ali; Philip, N; Istepanian, R S H
2011-01-01
Multiparameter m-health scenarios with bandwidth-demanding requirements will be among the key applications in future 4G mobile communication systems. These applications will potentially require specific spectrum allocations with higher quality of service (QoS) requirements. Furthermore, one of the key 4G technologies targeting m-health will be medical applications based on WiMAX systems. Hence, it is timely to evaluate such multiparametric m-health scenarios over mobile WiMAX networks. In this paper, we address the preliminary performance analysis of a mobile WiMAX network for multiparametric telemedical scenarios. In particular, we map the medical QoS to typical WiMAX QoS parameters to optimise the performance of these parameters in a typical m-health scenario. Preliminary performance analyses of the proposed multiparametric scenarios are evaluated to provide essential information on future medical QoS requirements and constraints in these telemedical network environments.
NASA Astrophysics Data System (ADS)
Nelson, Hunter Barton
The simplified second-order transfer function actuator model used in most flight dynamics applications cannot easily capture the effects of different actuator parameters. The present work integrates a nonlinear actuator model into a nonlinear state space rotorcraft model to determine the effect of actuator parameters on key flight dynamics. The completed actuator model was integrated with a swashplate kinematics model, and step responses were generated over a range of key hydraulic parameters. The actuator-swashplate system was then introduced into a nonlinear state space rotorcraft simulation, where flight dynamics quantities such as bandwidth and phase delay were analyzed. Frequency sweeps were simulated for unique actuator configurations using the coupled nonlinear actuator-rotorcraft system. The software package CIFER was used for system identification, and the results were compared directly to the linearized models. As the actuator became rate saturated, the effects on bandwidth and phase delay were apparent in the predicted handling qualities specifications.
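A minimal sketch of the rate-saturation effect discussed above: a second-order actuator is integrated with and without a rate limit, and the step-response rise times are compared. The natural frequency, damping ratio, and rate limit are illustrative values, not the hydraulic parameters of the study.

```python
import numpy as np

def actuator_response(cmd, dt, wn=30.0, zeta=0.7, rate_limit=None):
    """Second-order actuator x'' + 2*zeta*wn*x' + wn^2*x = wn^2*cmd, integrated
    with forward Euler; optionally clip the actuator rate to emulate rate saturation."""
    x, v = 0.0, 0.0
    out = np.zeros_like(cmd)
    for k, u in enumerate(cmd):
        a = wn ** 2 * (u - x) - 2.0 * zeta * wn * v
        v += a * dt
        if rate_limit is not None:
            v = np.clip(v, -rate_limit, rate_limit)
        x += v * dt
        out[k] = x
    return out

dt = 0.001
t = np.arange(0.0, 1.0, dt)
cmd = np.where(t > 0.1, 1.0, 0.0)                       # unit step command at t = 0.1 s
linear = actuator_response(cmd, dt)
saturated = actuator_response(cmd, dt, rate_limit=5.0)  # illustrative rate limit (units/s)
for label, y in (("linear   ", linear), ("saturated", saturated)):
    rise = t[np.argmax(y > 0.9)] - t[np.argmax(y > 0.1)]
    print(f"10-90% rise time, {label}: {rise:.3f} s")
```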
Financial gains and risks in pay-for-performance bonus algorithms.
Cromwell, Jerry; Drozd, Edward M; Smith, Kevin; Trisolini, Michael
2007-01-01
Considerable attention has been given to evidence-based process indicators associated with quality of care, while much less attention has been given to the structure and key parameters of the various pay-for-performance (P4P) bonus and penalty arrangements using such measures. In this article we develop a general model of quality payment arrangements and discuss the advantages and disadvantages of the key parameters. We then conduct simulation analyses of four general P4P payment algorithms by varying seven parameters, including indicator weights, indicator intercorrelation, degree of uncertainty regarding intervention effectiveness, and initial baseline rates. Bonuses averaged over several indicators appear insensitive to weighting, correlation, and the number of indicators. The bonuses are sensitive to disease manager perceptions of intervention effectiveness, facing challenging targets, and the use of actual-to-target quality levels versus rates of improvement over baseline.
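A toy version of such a simulation is sketched below: a weighted bonus is computed either from actual-to-target ratios or from improvement over baseline, with uncertain intervention effectiveness drawn at random. All weights, targets, baselines, and the bonus pool are invented for illustration and are not the article's parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)

def bonus(rates, baselines, targets, weights, pool=100_000, use_targets=True):
    """Weighted P4P bonus: score each quality indicator either against an absolute
    target or as improvement over baseline, then pay out a share of the bonus pool."""
    if use_targets:
        scores = np.clip(rates / targets, 0.0, 1.0)                            # actual-to-target ratio
    else:
        scores = np.clip((rates - baselines) / (1.0 - baselines), 0.0, 1.0)    # improvement over baseline
    return pool * np.average(scores, weights=weights)

baselines = np.array([0.60, 0.55, 0.70, 0.45])   # baseline indicator rates (hypothetical)
targets   = np.array([0.80, 0.75, 0.85, 0.65])   # absolute performance targets (hypothetical)
weights   = np.array([0.4, 0.3, 0.2, 0.1])       # indicator weights (hypothetical)

# Simulate uncertain intervention effectiveness: achieved rates vary around a mean uplift.
achieved = np.clip(baselines + rng.normal(0.10, 0.05, size=(5000, 4)), 0.0, 1.0)
for use_targets in (True, False):
    payouts = [bonus(r, baselines, targets, weights, use_targets=use_targets) for r in achieved]
    label = "actual-to-target" if use_targets else "improvement-over-baseline"
    print(f"{label}: mean bonus = ${np.mean(payouts):,.0f}, sd = ${np.std(payouts):,.0f}")
```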
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
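A compressed sketch of the sampling-and-reweighting idea follows, with a toy response surface standing in for the stochastic within-herd model: Latin hypercube parameter sets are weighted by a binomial likelihood of reproducing an observed apparent prevalence. The parameter names, ranges, and observed counts are hypothetical, not the paratuberculosis study inputs.

```python
import numpy as np
from scipy.stats import qmc, binom

# Hypothetical parameter ranges: transmission rate, shedding rate, test sensitivity.
names = ["beta", "shed", "sens"]
lower = np.array([0.01, 0.1, 0.5])
upper = np.array([0.20, 2.0, 0.9])

sampler = qmc.LatinHypercube(d=3, seed=4)
theta = qmc.scale(sampler.random(n=2000), lower, upper)   # Latin hypercube design

def simulate_prevalence(p):
    """Stand-in for the stochastic within-herd model: returns predicted true prevalence."""
    beta, shed, _ = p
    return 1.0 - np.exp(-5.0 * beta * shed)               # toy response surface

# Hypothetical observation: 12 of 60 animals test-positive in a surveyed herd.
n_tested, n_pos = 60, 12
pred = np.array([simulate_prevalence(p) for p in theta])
weights = binom.pmf(n_pos, n_tested, pred * theta[:, 2])  # weight by apparent-prevalence likelihood
weights /= weights.sum()

post_mean = weights @ theta                               # reweighted parameter estimates
print({n: round(float(v), 3) for n, v in zip(names, post_mean)})
print("weighted mean predicted true prevalence:", round(float(weights @ pred), 3))
```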
NASA Astrophysics Data System (ADS)
Harper, E. B.; Stella, J. C.; Fremier, A. K.
2009-12-01
Fremont cottonwood (Populus fremontii) is an important component of semi-arid riparian ecosystems throughout western North America, but its populations are in decline due to flow regulation. Achieving a balance between human resource needs and riparian ecosystem function requires a mechanistic understanding of the multiple geomorphic and biological factors affecting tree recruitment and survival, including the timing and magnitude of river flows, and the concomitant influence on suitable habitat creation and mortality from scour and sedimentation burial. Despite a great deal of empirical research on some components of the system, such as factors affecting cottonwood recruitment, other key components are less studied. Yet understanding the relative influence of the full suite of physical and life-history drivers is critical to modeling whole-population dynamics under changing environmental conditions. We addressed these issues for the Fremont cottonwood population along the Sacramento River, CA using a sensitivity analysis approach to quantify uncertainty in parameters on the outcomes of a patch-based, dynamic population model. Using a broad range of plausible values for 15 model parameters that represent key physical, biological and climatic components of the ecosystem, we ran 1,000 population simulations that consisted of a subset of 14.3 million possible combinations of parameter estimates to predict the frequency of patch colonization and total forest habitat predicted to occur under current hydrologic conditions after 175 years. Results indicate that Fremont cottonwood populations are highly sensitive to the interactions among flow regime, sedimentation rate and the depth of the capillary fringe (Fig. 1). Estimates of long-term floodplain sedimentation rate would substantially improve model accuracy. Spatial variation in sediment texture was also important to the extent that it determines the depth of the capillary fringe, which regulates the availability of water for germination and adult tree growth. Our sensitivity analyses suggest that models of future scenarios should incorporate regional climate change projections because changes in temperature and the timing and volume of precipitation affects sensitive aspects of the system, including the timing of seed release and spring snowmelt runoff. Figure 1. The relative effects on model predictions of uncertainty around each parameter included in the patch-based population model for Fremont cottonwood.
Arauzo, Mercedes
2017-01-01
This research was undertaken to further our understanding of the factors involved in nonpoint-source nitrate pollution of groundwater. The shortcomings of some of the most commonly used methods for assessing groundwater vulnerability have been analysed and a new procedure that incorporates key improvements has been proposed. The new approach (LU-IV procedure) allows us to assess and map groundwater vulnerability to nitrate pollution and to accurately delimit the Nitrate Vulnerable Zones. The LU-IV procedure proved more accurate than the most widely used methods to assess groundwater vulnerability (DRASTIC, GOD), when compared with nitrate distribution in the groundwater of the 46 aquifers included in the study (using the drainage basin as the unit of analysis). The proposed procedure stands out by meeting the following requirements: (1) it uses readily available parameters that provide enough data to feed the model, (2) it excludes redundant parameters, (3) it avoids the need to assign insufficiently contrasted weights to parameters, (4) it assesses the whole catchment area that potentially drains N-polluted waters into the receptor aquifer, (5) it can be implemented within a GIS, and (6) it provides a multi-scale representation. As the LU-IV procedure has been demonstrated to be a reliable tool for delimiting NVZs, it could be particularly useful in countries where certain types of environmental data are either not available or have only limited availability. Based on this study (and according to the LU-IV procedure), it was concluded that an area of at least 1728 km² should be considered as NVZ. This sharply contrasts with the current 328 km² officially designated in the study area by Spain's regional administrations. These results highlight the need to redefine the current NVZ designation, which is essential for an appropriate implementation of action programmes designed to restore water quality in line with Directive 91/676/EEC. Copyright © 2016 Elsevier B.V. All rights reserved.
Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.
Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard
2011-12-01
Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. © 2011 IEEE
Optimization modeling of U.S. renewable electricity deployment using local input variables
NASA Astrophysics Data System (ADS)
Bernstein, Adam
For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends currently developing: (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) challenges of system balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter variances.
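A minimal LP sketch in the spirit of such state-level optimization, using scipy's linprog: capacity additions are chosen to meet incremental demand and a renewable energy share at minimum annualized cost. All costs, capacity factors, and site limits are placeholder numbers, not ReEDS or EIA inputs.

```python
import numpy as np
from scipy.optimize import linprog

# Toy state-level capacity-expansion LP: choose MW of wind, solar, and gas build
# to meet load growth and an RPS share at minimum annualized cost.
techs = ["wind", "solar", "gas"]
cost_per_mw = np.array([1.6, 1.9, 0.9])           # $M per MW-year, annualized (illustrative)
cap_factor  = np.array([0.38, 0.26, 0.60])        # average capacity factors (illustrative)
renewable   = np.array([1.0, 1.0, 0.0])           # counts toward the RPS
load_growth_mwh = 3.0e6                           # incremental annual demand (MWh)
rps_share = 0.25                                  # renewable share required of new energy
site_limit_mw = np.array([2500.0, 4000.0, 5000.0])

energy_per_mw = cap_factor * 8760.0               # MWh per MW built

# Constraints in linprog's A_ub @ x <= b_ub form.
A_ub = [-energy_per_mw,                                # meet load growth
        -(renewable - rps_share) * energy_per_mw]      # renewable energy >= rps_share * total
b_ub = [-load_growth_mwh, 0.0]

res = linprog(cost_per_mw, A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip(np.zeros(3), site_limit_mw)), method="highs")
for name, mw in zip(techs, res.x):
    print(f"{name}: {mw:,.0f} MW")
```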
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
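The point-density effect can be illustrated with a simple thinning experiment on a synthetic canopy point cloud: the same height metric is recomputed after randomly discarding returns to reach lower densities. The plot size, densities, and height metric are assumptions for demonstration only, not the study's data or method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic corn-canopy point cloud over a 10 m x 10 m plot at ~7.3 points/m^2:
# ground returns near 0 m plus canopy returns near 2.2 m.
n_points = 730
is_canopy = rng.random(n_points) < 0.6
z = np.where(is_canopy,
             rng.normal(2.2, 0.25, n_points),    # canopy returns
             rng.normal(0.05, 0.03, n_points))   # ground returns

def height_metric(z_values):
    """95th-percentile return height, a common LiDAR proxy for canopy height."""
    return np.percentile(z_values, 95)

for density in [7.3, 4.0, 2.0, 1.0, 0.5]:        # points per square metre
    keep = rng.random(n_points) < density / 7.3  # random thinning to the target density
    print(f"{density:>4.1f} pts/m^2 -> estimated height {height_metric(z[keep]):.2f} m "
          f"({int(keep.sum())} returns)")
```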
State and Parameter Estimation for a Coupled Ocean--Atmosphere Model
NASA Astrophysics Data System (ADS)
Ghil, M.; Kondrashov, D.; Sun, C.
2006-12-01
The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
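As a minimal illustration of joint state-parameter estimation with an EKF, the sketch below augments the state of a toy scalar nonlinear system with an unknown parameter and estimates both from noisy observations. This is not the intermediate coupled ENSO model; the dynamics, noise levels, and parameter values are all assumptions, and the estimate should approach the true value as observations accumulate.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps = 0.05, 1000
mu_true = 1.5                                   # "coupling-like" parameter to be recovered

# Simulate a toy nonlinear system x_dot = mu*x - x^3 with process and observation noise.
q_x, r_obs = 1e-4, 0.05                         # process and observation noise variances
x = 0.2
obs = []
for _ in range(n_steps):
    x = x + dt * (mu_true * x - x ** 3) + rng.normal(0.0, np.sqrt(q_x))
    obs.append(x + rng.normal(0.0, np.sqrt(r_obs)))

# Extended Kalman filter on the augmented state s = [x, mu]; mu follows a slow random walk.
s = np.array([0.0, 0.5])                        # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([q_x, 1e-5])                        # small process noise on mu lets it adapt
H = np.array([[1.0, 0.0]])                      # we observe x only
for y in obs:
    # Predict: propagate the state and linearize f(s) = [x + dt*(mu*x - x^3), mu].
    x_e, mu_e = s
    s = np.array([x_e + dt * (mu_e * x_e - x_e ** 3), mu_e])
    F = np.array([[1.0 + dt * (mu_e - 3.0 * x_e ** 2), dt * x_e],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation of x.
    S = H @ P @ H.T + r_obs
    K = P @ H.T / S
    s = s + (K * (y - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true mu = {mu_true}, EKF estimate = {s[1]:.3f}")
```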
High-Speed Surface Reconstruction of Flying Birds Using Structured Light
NASA Astrophysics Data System (ADS)
Deetjen, Marc; Lentink, David
2017-11-01
Birds fly effectively through complex environments, and in order to understand the strategies that enable them to do so, we need to determine the shape and movement of their wings. Previous studies show that even small perturbations in wing shape have dramatic aerodynamic effects, but these shape changes have not been quantified automatically at high temporal and spatial resolutions. Hence, we developed a custom 3D surface mapping method which uses a high-speed camera to view a grid of stripes projected onto a flying bird. Because the light is binary rather than grayscale, and each frame is separately analyzed, this method can function at any frame rate with sufficient light. The method is automated, non-invasive, and able to measure a volume by simultaneously reconstructing from multiple views. We use this technique to reconstruct the 3D shape of the surface of a parrotlet during flapping flight at 3200 fps. We then analyze key dynamic parameters such as wing twist and angle of attack, and compute aerodynamic parameters such as lift and drag. While this novel system is designed to quantify bird wing shape and motion, it is adaptable for tracking other objects such as quickly deforming fish, especially those which are difficult to reconstruct using other 3D tracking methods.
The role of series ankle elasticity in bipedal walking
Zelik, Karl E.; Huang, Tzu-Wei P.; Adamczyk, Peter G.; Kuo, Arthur D.
2014-01-01
The elastic stretch-shortening cycle of the Achilles tendon during walking can reduce the active work demands on the plantarflexor muscles in series. However, this does not explain why or when this ankle work, whether by muscle or tendon, needs to be performed during gait. We therefore employ a simple bipedal walking model to investigate how ankle work and series elasticity impact economical locomotion. Our model shows that ankle elasticity can use passive dynamics to aid push-off late in single support, redirecting the body's center-of-mass (COM) motion upward. An appropriately timed, elastic push-off helps to reduce dissipative collision losses at contralateral heelstrike, and therefore the positive work needed to offset those losses and power steady walking. Thus, the model demonstrates how elastic ankle work can reduce the total energetic demands of walking, including work required from more proximal knee and hip muscles. We found that the key requirement for using ankle elasticity to achieve economical gait is the proper ratio of ankle stiffness to foot length. Optimal combination of these parameters ensures proper timing of elastic energy release prior to contralateral heelstrike, and sufficient energy storage to redirect the COM velocity. In fact, there exist parameter combinations that theoretically yield collision-free walking, thus requiring zero active work, albeit with relatively high ankle torques. Ankle elasticity also allows the hip to power economical walking by contributing indirectly to push-off. Whether walking is powered by the ankle or hip, ankle elasticity may aid walking economy by reducing collision losses. PMID:24365635
Variations in embodied energy and carbon emission intensities of construction materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Omar, Wan-Mohd-Sabki; School of Environmental Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis; Doh, Jeung-Hwan, E-mail: j.doh@griffith.edu.au
2014-11-15
Identification of parameter variation allows us to conduct more detailed life cycle assessment (LCA) of the energy and carbon emissions of materials over their lifecycle. Previous research studies have demonstrated that hybrid LCA (HLCA) can generally overcome the problems of incompleteness and accuracy of embodied energy (EE) and carbon (EC) emission assessment. Unfortunately, the current interpretation and quantification procedure has not been extensively and empirically studied in a qualitative manner, especially in hybridising between process LCA and I-O LCA. To address this weakness, this study empirically demonstrates the changes in EE and EC intensities caused by variations to key parameters in material production. Using Australia and Malaysia as a case study, the results are compared with previous hybrid models to identify key parameters and issues. The parameters considered in this study are technological changes, energy tariffs, primary energy factors, disaggregation constant, emission factors, and material price fluctuation. It was found that changes in technological efficiency, energy tariffs and material prices caused significant variations in the model. Finally, the comparison of hybrid models revealed that non-energy-intensive materials greatly influence the variations due to high indirect energy and carbon emissions in the upstream boundary of material production, and as such, any decision related to these materials should be considered carefully. Highlights: • We investigate the EE and EC intensity variation in Australia and Malaysia. • The influences of parameter variations on the hybrid LCA model were evaluated. • Key significant contributions to the EE and EC intensity variation were identified. • High indirect EE and EC content caused significant variation in hybrid LCA models. • Non-energy-intensive materials caused variation between hybrid LCA models.
A comparative study on cotton fiber length parameters’ effects on modeling yarn property
USDA-ARS?s Scientific Manuscript database
Fiber length is one of the key properties of cotton and has important influences on yarn production and yarn quality. Various parameters have been developed to characterize cotton fiber length in the past decades. This study was carried out to investigate the effects of these parameters and their ...
Using Spreadsheets to Discover Meaning for Parameters in Nonlinear Models
ERIC Educational Resources Information Center
Green, Kris H.
2008-01-01
This paper explores the use of spreadsheets to develop an exploratory environment where mathematics students can develop their own understanding of the parameters of commonly encountered families of functions: linear, logarithmic, exponential and power. The key to this understanding involves opening up the definition of rate of change from the…
Conceptual Study of Rotary-Wing Microrobotics
2008-03-27
[Report front-matter excerpt] ...tensile residual stress, respectively [78-80]. List of tables includes: Table 8, Wing-T design parameters compared to Tsuzuki's recommendations; Table 13, Summary of key parameters for a feasible rotary-wing MEMS robot design. Acronyms include: ...Direct Methanol Fuel Cell; DOF, Degrees of Freedom; DRIE, Deep Reactive Ion Etch; FEA, Finite Element Analysis; FEM, Finite Element Modeling; FOM, Figure...
Neuroeconomics and public health
Larsen, Torben
2010-01-01
Objective: To design an economic evaluation strategy for general health promotion projects. Method: Identification of key parameters of behavioral health from neuroeconomic studies. Results: The Frontal Power of Concentration (C) is a quadripartite executive integrator depending on four key parameters: (1) the limbic system, originating ambivalent emotions (L); (2) volition in the prefrontal cortex (c), controlling cognitive prediction and emotions with a view to frontopolar long-term goals; (3) semantic memories in the temporal lobe (R); and (4) an intuitive visuospatial sketchpad in the parietal lobe (I). C, which aims to minimize the error between preferences and predictions, is directly determined by the following equation, with I as a stochastic knowledge component: C = Rc²/L + εI → 1. Discussion: All of the parameters of C are subject to improvement by training: cognitive predictions are improved by open-mindedness towards feedback (R); the effect of emotional regret is reinforced by an appropriate level of fitness (c, L); and imagination may be unfolded by in-depth relaxation procedures and visualization (I). Conclusion: Economic evaluation of general public health should focus on the subset of separate and integrated interventions that directly affect the parameters of formula C in individuals.
Tag Content Access Control with Identity-based Key Exchange
NASA Astrophysics Data System (ADS)
Yan, Liang; Rong, Chunming
2010-09-01
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied in many domains such as retail and supply chain management. How to prevent tag content from unauthorized readout is a core problem in RFID privacy. The hash-lock access control protocol allows a tag to release its content only to a reader that knows the secret key shared between them. However, in order to obtain the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is the back-end database connection no longer needed, but the tag cloning problem can also be eliminated at the same time.
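A schematic sketch of the hash-lock idea follows, with a simple key-derivation function standing in for the identity-based key exchange (a real scheme would use pairing-based identity-based cryptography): the tag stores only the hash of its key, and a reader that can derive the key locally unlocks the content without a database lookup. All identifiers, the KDF construction, and the payload are hypothetical.

```python
import hashlib
import os

def kdf(identity, master_secret):
    """Stand-in for the identity-based key exchange: both reader and tag derive the
    shared secret from the tag identity (a real scheme would use pairing-based IBE)."""
    return hashlib.sha256(master_secret + identity).digest()

class Tag:
    def __init__(self, identity, key):
        self.identity = identity
        self.meta_id = hashlib.sha256(key).digest()   # hash-lock: the tag stores only H(key)
        self._content = b"tag payload: item #4711"

    def query(self):
        return self.meta_id                           # the tag answers queries with metaID only

    def unlock(self, key):
        if hashlib.sha256(key).digest() == self.meta_id:
            return self._content                      # release content to an authorized reader
        return None

master = os.urandom(32)                               # held by the key issuer
tag_id = b"EPC-0001"
tag = Tag(tag_id, kdf(tag_id, master))

reader_key = kdf(tag_id, master)                      # reader derives the key locally, no database lookup
print("unauthorized read:", tag.unlock(os.urandom(32)))
print("authorized read  :", tag.unlock(reader_key))
```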
Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F
2012-09-07
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.
Coherent attacking continuous-variable quantum key distribution with entanglement in the middle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying
2018-06-01
We suggest an approach to coherent attacks on continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on both links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme using the entanglement correlation. Numerical simulation results show the improved performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters maximizing the stolen information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
Code of Federal Regulations, 2012 CFR
2012-07-01
... device to the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect... components for integrity and electrical connections for continuity, oxidation, and galvanic corrosion. (d...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
NASA Astrophysics Data System (ADS)
Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.
2016-12-01
Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
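In the same spirit as the framework described, the sketch below calibrates a two-parameter stand-in model with the downhill simplex (Nelder-Mead) method against a Nash-Sutcliffe objective and then screens uncertainty with uniform Monte Carlo sampling. The toy model, parameter ranges, and behavioural threshold are assumptions; a real workflow would call PRMS itself and read its simulated streamflow.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def toy_model(params, forcing):
    """Stand-in "model": degree-day melt (ddf) routed through a linear reservoir (k)."""
    ddf, k = params
    melt = np.clip(ddf * forcing, 0.0, None)
    q = np.zeros_like(melt)
    for t in range(1, len(melt)):
        q[t] = (1.0 - k) * q[t - 1] + k * melt[t]
    return q

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common streamflow calibration objective."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

forcing = np.clip(rng.normal(2.0, 3.0, 365), 0.0, None)
observed = toy_model([3.0, 0.15], forcing) + rng.normal(0.0, 0.2, 365)

# Step 1: downhill-simplex (Nelder-Mead) calibration of the sensitive parameters.
result = minimize(lambda p: -nse(observed, toy_model(p, forcing)),
                  x0=[1.0, 0.5], method="Nelder-Mead")
print("calibrated [ddf, k]:", np.round(result.x, 3), "NSE:", round(-result.fun, 3))

# Step 2: uniform Monte Carlo sampling around the optimum as a simple uncertainty screen.
samples = rng.uniform([2.0, 0.05], [4.0, 0.30], size=(1000, 2))
scores = np.array([nse(observed, toy_model(p, forcing)) for p in samples])
print("behavioural parameter sets (NSE > 0.8):", int((scores > 0.8).sum()))
```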
A Statewide Key Informant Survey and Social Indicators Analysis.
ERIC Educational Resources Information Center
Fleischer, Mitchell
This needs assessment study of mental health needs of the elderly in Pennsylvania used a three-part approach. These parts were a review of existing data sources, an extensive key informant study, and a review of service delivery models. A recent study found a prevalence rate for mental illness in the elderly of 12.8%, more than 5% lower than the…
ERIC Educational Resources Information Center
Linder, Kathryn E.; Fontaine-Rainen, Danielle L.; Behling, Kirsten
2015-01-01
This article reports on a national study conducted in the United States on the current institutional practices, structures, resources and policies that are needed to ensure online accessibility for all students at colleges and universities. Key findings include the need to better articulate who is responsible for online accessibility initiatives…
ERIC Educational Resources Information Center
Clark, Charlotte
2017-01-01
Given the focus on phonological attainment in the National Phonics Screening Check, small-scale school-based action research was undertaken to improve phonological recognition and assess the impact on progress and attainment in a sample drawn from Key Stage 1 which included pupils on the Special Educational Needs (SEN) Register. The research…
Effect of geometrical parameters on pressure distributions of impulse manufacturing technologies
NASA Astrophysics Data System (ADS)
Brune, Ryan Carl
Impulse manufacturing techniques constitute a growing field of methods that utilize high-intensity pressure events to conduct useful mechanical operations. As interest in applying this technology continues to grow, greater understanding must be achieved with respect to output pressure events in both magnitude and distribution. In order to address this need, a novel pressure measurement has been developed called the Profile Indentation Pressure Evaluation (PIPE) method that systematically analyzes indentation patterns created with impulse events. Correlation with quasi-static test data and use of software-assisted analysis techniques allows for colorized pressure maps to be generated for both electromagnetic and vaporizing foil actuator (VFA) impulse forming events. Development of this technique aided introduction of a design method for electromagnetic path actuator systems, where key geometrical variables are considered using a newly developed analysis method, which is called the Path Actuator Proximal Array (PAPA) pressure model. This model considers key current distribution and proximity effects and interprets generated pressure by considering the adjacent conductor surfaces as proximal arrays of individual conductors. According to PIPE output pressure analysis, the PAPA model provides a reliable prediction of generated pressure for path actuator systems as local geometry is changed. Associated mechanical calculations allow for pressure requirements to be calculated for shearing, flanging, and hemming operations, providing a design process for such cases. Additionally, geometry effect is investigated through a formability enhancement study using VFA metalworking techniques. A conical die assembly is utilized with both VFA high velocity and traditional quasi-static test methods on varied Hasek-type sample geometries to elicit strain states consistent with different locations on a forming limit diagram. Digital image correlation techniques are utilized to measure major and minor strains for each sample type to compare limit strain results. Overall testing indicated decreased formability at high velocity for 304 DDQ stainless steel and increased formability at high velocity for 3003-H14 aluminum. Microstructural and fractographic analysis helped dissect and analyze the observed differences in these cases. Overall, these studies comprehensively explore the effects of geometrical parameters on magnitude and distribution of impulse manufacturing generated pressure, establishing key guidelines and models for continued development and implementation in commercial applications.
A Public-Key Based Authentication and Key Establishment Protocol Coupled with a Client Puzzle.
ERIC Educational Resources Information Center
Lee, M. C.; Fung, Chun-Kan
2003-01-01
Discusses network denial-of-service attacks which have become a security threat to the Internet community and suggests the need for reliable authentication protocols in client-server applications. Presents a public-key based authentication and key establishment protocol coupled with a client puzzle protocol and validates it through formal logic…
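The abstract does not spell out the puzzle construction, but a common way to realise a client puzzle is a hash pre-image search: the server issues a random challenge and the client must find a nonce whose hash has a prescribed number of leading zero bits before the server commits any expensive authentication work. The sketch below is a generic hash-based client puzzle of that kind, assumed purely for illustration; it is not taken from Lee and Fung's protocol.

```python
# Minimal sketch of a hash-based client puzzle (illustrative only; not the
# protocol from the paper). The server issues a challenge; the client must find
# a nonce such that SHA-256(challenge || nonce) starts with `difficulty` zero bits.
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def issue_puzzle(difficulty: int = 16):
    """Server side: generate a fresh random challenge."""
    return os.urandom(16), difficulty

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force a nonce that meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, difficulty: int, nonce: int) -> bool:
    """Server side: verification costs one hash; solving costs ~2**difficulty hashes."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge, difficulty = issue_puzzle()
nonce = solve_puzzle(challenge, difficulty)
print("puzzle solved:", verify(challenge, difficulty, nonce))
```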
What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?
Imamura, Teruhiko; Kinugawa, Koichiro
2018-05-23
Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials have not demonstrated such advantages. Given this discrepancy, optimal patient selection and device setting may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. Optimal device setting, with appropriate pressure delivered for an appropriate duration, may also be key. A large-scale prospective trial with optimal patient selection and optimal device setting is warranted.
Manole, Claudiu Constantin; Pîrvu, C; Maury, F; Demetrescu, I
2016-06-01
In a Surface Plasmon Resonance (SPR) experiment, two key parameters are classically recorded: the time and the angle of SPR reflectivity. This paper brings a third key parameter into focus, the SPR reflectivity itself, which is shown to be related to changes in surface roughness. Practical investigations of (i) gold anodizing and (ii) polypyrrole film growth in the presence of oxalic acid are detailed under potentiostatic conditions. These experimental results reveal the potential of the SPR technique for investigating real-time changes not only on the gold surface but also within the gold film itself, extending the versatility of the technique as a sensitive in-situ diagnostic tool.
Mass-number and excitation-energy dependence of the spin cutoff parameter
Grimes, S. M.; Voinov, A. V.; Massey, T. N.
2016-07-12
Here, the spin cutoff parameter determining the nuclear level density spin distribution ρ(J) is defined through the spin projection as $\langle J_z^{2} \rangle^{1/2}$, or equivalently for spherical nuclei, $(\langle J(J+1) \rangle / 3)^{1/2}$. It is needed to divide the total level density into levels as a function of J. The spin cutoff parameter is also needed to obtain the total level density at the neutron binding energy from the s-wave resonance count. The spin cutoff parameter has been calculated as a function of excitation energy and mass with a superconducting Hamiltonian. The calculations have been compared with two commonly used semiempirical formulas. A need for further measurements is also observed. Some complications for deformed nuclei are discussed. The quality of spin cutoff parameter data derived from isomeric ratio measurements is examined.
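For context, the role of the spin cutoff parameter σ in dividing the total level density among spins is usually expressed through the standard Gaussian-like spin distribution; the form below is the conventional textbook expression, not a formula quoted from this paper.

```latex
% Conventional spin distribution used with a spin cutoff parameter \sigma
% (standard form; not quoted from the paper itself).
\rho(E,J) = \rho(E)\, f(J), \qquad
f(J) = \frac{2J+1}{2\sigma^{2}}
       \exp\!\left[-\frac{\left(J+\tfrac{1}{2}\right)^{2}}{2\sigma^{2}}\right],
\qquad \sigma^{2} = \langle J_{z}^{2} \rangle .
```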
Al-Amri, Mohammad; Al Balushi, Hilal; Mashabi, Abdulrhman
2017-12-01
Self-paced treadmill walking is becoming increasingly popular for gait assessment and re-education in both research and clinical settings, yet its day-to-day repeatability has not been established. This study scrutinised the test-retest repeatability of key gait parameters obtained from the Gait Real-time Analysis Interactive Lab (GRAIL) system. Twenty-three male able-bodied adults (age: 34.56 ± 5.12 years) completed two separate gait assessments on the GRAIL system, separated by 5 ± 3 days. Key kinematic, kinetic, and spatial-temporal gait parameters were analysed. The Intraclass Correlation Coefficient (ICC), Standard Error of Measurement (SEM), Minimum Detectable Change (MDC), and 95% limits of agreement were calculated to evaluate the repeatability of these gait parameters. Day-to-day agreement was excellent (ICCs > 0.87) for spatial-temporal parameters, with low MDC and SEM values (<0.153 and <0.055, respectively). Repeatability was higher for joint kinetic than for kinematic parameters, as reflected in small SEM (<0.13 Nm/kg and <3.4°) and MDC (<0.335 Nm/kg and <9.44°) values. The obtained values of all parameters fell within the 95% limits of agreement. Our findings demonstrate the repeatability of the GRAIL system available in our laboratory. The SEM and MDC values can be used to help researchers and clinicians distinguish 'real' changes in gait performance over time.
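The repeatability statistics named here follow standard definitions, so a minimal sketch of how they could be computed from two test sessions may help. The data below are synthetic, the ICC form assumed is the common two-way random-effects single-measure ICC(2,1), and SEM and MDC95 are taken as SD·√(1−ICC) and 1.96·√2·SEM, which is one widely used convention rather than necessarily the exact one used in this study.

```python
# Sketch: test-retest repeatability statistics (ICC(2,1), SEM, MDC95) for one
# gait parameter measured in two sessions. Synthetic data; the ICC/SEM/MDC
# conventions below are common choices, not necessarily those of the study.
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.normal(1.30, 0.10, size=23)          # e.g. stride length [m] per subject
session1 = true_values + rng.normal(0, 0.02, size=23)
session2 = true_values + rng.normal(0, 0.02, size=23)
data = np.column_stack([session1, session2])           # shape (subjects, sessions)

n, k = data.shape
grand = data.mean()
row_means = data.mean(axis=1)
col_means = data.mean(axis=0)

# Two-way ANOVA mean squares.
ss_rows = k * np.sum((row_means - grand) ** 2)
ss_cols = n * np.sum((col_means - grand) ** 2)
ss_total = np.sum((data - grand) ** 2)
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measurement.
icc = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

sem = data.std(ddof=1) * np.sqrt(1 - icc)              # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem                        # minimum detectable change (95%)

print(f"ICC(2,1) = {icc:.3f}, SEM = {sem:.3f}, MDC95 = {mdc95:.3f}")
```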
Differential Fault Analysis on CLEFIA with 128, 192, and 256-Bit Keys
NASA Astrophysics Data System (ADS)
Takahashi, Junko; Fukunaga, Toshinori
This paper describes a differential fault analysis (DFA) attack against CLEFIA. The proposed attack can be applied to CLEFIA with all supported keys: 128, 192, and 256-bit keys. DFA is a type of side-channel attack. This attack enables the recovery of secret keys by injecting faults into a secure device during its computation of the cryptographic algorithm and comparing the correct ciphertext with the faulty one. CLEFIA is a 128-bit blockcipher with 128, 192, and 256-bit keys developed by the Sony Corporation in 2007. CLEFIA employs a generalized Feistel structure with four data lines. We developed a new attack method that uses this characteristic structure of the CLEFIA algorithm. On the basis of the proposed attack, only 2 pairs of correct and faulty ciphertexts are needed to retrieve the 128-bit key, and 10.78 pairs on average are needed to retrieve the 192 and 256-bit keys. The proposed attack is more efficient than any previously reported. In order to verify the proposed attack and estimate the calculation time to recover the secret key, we conducted an attack simulation using a PC. The simulation results show that we can obtain each secret key within three minutes on average. This result shows that we can obtain the entire key within a feasible computational time.
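The CLEFIA attack itself requires the full cipher structure, but the core DFA idea, keeping only key candidates consistent with an assumed fault model across correct/faulty ciphertext pairs, can be shown on a deliberately tiny example. The sketch below uses a hypothetical one-round toy cipher C = S[P] ⊕ k with a single-bit fault injected at the S-box input; it illustrates the general principle only and is not the attack described in the paper.

```python
# Toy illustration of the differential-fault-analysis principle (NOT the CLEFIA
# attack). One-round cipher C = S[P] ^ k; the fault flips one bit of the S-box
# input. The attacker intersects key guesses consistent with that fault model
# over several correct/faulty ciphertext pairs.
import random

rng = random.Random(42)
SBOX = list(range(256))
rng.shuffle(SBOX)                      # a random 8-bit bijection stands in for a real S-box
SINV = [0] * 256
for i, v in enumerate(SBOX):
    SINV[v] = i

KEY = rng.randrange(256)               # secret key byte (unknown to the attacker)

def faulty_pair():
    """Encrypt a random plaintext correctly and with a single-bit fault at the S-box input."""
    p = rng.randrange(256)
    e = 1 << rng.randrange(8)          # unknown single-bit fault
    return SBOX[p] ^ KEY, SBOX[p ^ e] ^ KEY

candidates = set(range(256))
pairs_used = 0
while len(candidates) > 1:
    c, c_faulty = faulty_pair()
    pairs_used += 1
    # Keep only key guesses for which the inferred S-box input difference is a
    # single bit, as required by the assumed fault model.
    candidates = {
        k for k in candidates
        if bin(SINV[c ^ k] ^ SINV[c_faulty ^ k]).count("1") == 1
    }

print(f"recovered key {candidates.pop():#04x} (true key {KEY:#04x}) "
      f"after {pairs_used} correct/faulty pairs")
```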
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms the physical parameters using models of the human visual system. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF, and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model predicts subjective quality better than the physical model and that SVM predicts better than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
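As a rough illustration of the regression step described above, the sketch below fits a support vector regressor mapping display parameters to quality scores and reports RMSE and Spearman correlation. The data are synthetic stand-ins (the study's subjective ratings are not public), and the feature set and hyperparameters are assumptions, not the authors' configuration.

```python
# Sketch: predict subjective quality from display parameters with an SVR,
# scored by RMSE and Spearman correlation. Synthetic data; features and
# hyperparameters are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 200
# Columns: max luminance [nits], min luminance [nits], gamut coverage, bit depth, local contrast.
X = np.column_stack([
    rng.uniform(100, 4000, n),
    rng.uniform(0.0001, 0.1, n),
    rng.uniform(0.7, 1.0, n),
    rng.choice([8, 10, 12], n),
    rng.uniform(1000, 100000, n),
])
# Synthetic "subjective quality" with a nonlinear dependence on the parameters.
y = (2.0 * np.log10(X[:, 0]) - 1.5 * np.log10(X[:, 1] + 1e-4)
     + 3.0 * X[:, 2] + 0.1 * X[:, 3] + 0.5 * np.log10(X[:, 4])
     + rng.normal(0, 0.3, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
rho, _ = spearmanr(pred, y_test)
print(f"RMSE = {rmse:.3f}, Spearman rho = {rho:.3f}")
```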
Crop Damage by Primates: Quantifying the Key Parameters of Crop-Raiding Events
Wallace, Graham E.; Hill, Catherine M.
2012-01-01
Human-wildlife conflict often arises from crop-raiding, and insights regarding which aspects of raiding events determine crop loss are essential when developing and evaluating deterrents. However, because accounts of crop-raiding behaviour are frequently indirect, these parameters are rarely quantified or explicitly linked to crop damage. Using systematic observations of the behaviour of non-human primates on farms in western Uganda, this research identifies the number of individuals raiding and the duration of the raid as the primary parameters determining crop loss. Secondary factors include the distance travelled onto the farm, the age composition of the raiding group, and whether raids occur in series. Regression models accounted for greater proportions of the variation in crop loss when made increasingly crop- and species-specific. Parameter values varied across primate species, probably reflecting differences in raiding tactics or perceptions of risk, and thereby providing indices of how comfortable primates are on-farm. Median raiding-group sizes were markedly smaller than the typical sizes of social groups. The research suggests that key parameters of raiding events can be used to measure the behavioural impacts of deterrents to raiding. Furthermore, farmers will benefit most from methods that discourage raiding by multiple individuals, reduce the size of raiding groups, or decrease the amount of time primates spend on-farm. This study demonstrates the importance of directly relating crop loss to the parameters of raiding events, using systematic observations of the behaviour of multiple primate species. PMID:23056378
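To make the regression framing concrete, the sketch below fits an ordinary least-squares model of crop loss on the raid-event parameters named in the abstract (number of raiders, raid duration, distance travelled onto the farm). The data are synthetic and the coefficients arbitrary; this only illustrates the kind of model the study fits, not its actual estimates.

```python
# Sketch: OLS regression of crop loss on raid-event parameters.
# Synthetic data with arbitrary coefficients; illustrates the modelling
# approach only, not the study's actual estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 150
raiders = rng.integers(1, 15, n)            # number of individuals raiding
duration = rng.uniform(1, 60, n)            # raid duration [min]
distance = rng.uniform(0, 50, n)            # distance travelled onto farm [m]

# Assumed data-generating process: loss driven mainly by raiders and duration.
crop_loss = 0.8 * raiders + 0.3 * duration + 0.05 * distance + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([raiders, duration, distance]))
fit = sm.OLS(crop_loss, X).fit()
print(fit.summary(xname=["const", "raiders", "duration", "distance"]))
print(f"R^2 = {fit.rsquared:.3f}")
```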
NASA Astrophysics Data System (ADS)
Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.
2017-12-01
Coronal Mass Ejections (CMEs) are among the major drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial for determining their impact on Earth and society. One of the key parameters that determines the geo-effectiveness of a coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model, focusing on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model was implemented in the inner heliosphere model EUHFORIA, a magnetohydrodynamic forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope, but one of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the resulting dataset to look for signatures in imaging and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints to constrain the parameters.